
onnx

This module implements scalers for ONNX models.

Classes:

  • ArtCNN

    Super-Resolution Convolutional Neural Networks optimised for anime.

  • BaseOnnxScaler

    Abstract generic scaler class for an ONNX model.

  • DPIR

    Deep Plug-and-Play Image Restoration

  • GenericOnnxScaler

    Generic scaler class for an ONNX model.

  • Waifu2x

    Well known Image Super-Resolution for Anime-Style Art.

Attributes:

  • BackendLike

    Type alias for anything that can resolve to a Backend from vs-mlrt.

BackendLike module-attribute

BackendLike = backendT | type[backendT] | str

Type alias for anything that can resolve to a Backend from vs-mlrt.

This includes:

  • A string identifier.
  • A class type subclassing Backend.
  • An instance of a Backend.
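
A minimal sketch of the three forms, using ArtCNN for illustration (the string "trt" is assumed here to be a recognized identifier):

from vsmlrt import Backend
from vsscale import ArtCNN

ArtCNN(backend="trt")                    # a string identifier
ArtCNN(backend=Backend.TRT)              # a class type subclassing Backend
ArtCNN(backend=Backend.TRT(fp16=True))   # an instance of a Backend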

ArtCNN

ArtCNN(
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: BaseArtCNNLuma

Super-Resolution Convolutional Neural Networks optimised for anime.

A quick reminder that vs-mlrt does not ship these models in the base package. You will have to grab the extended models pack or get them from the repo itself, and create an "ArtCNN" folder in your models folder yourself.

https://github.com/Artoriuz/ArtCNN/releases/latest

Defaults to R8F64.

Example usage:

from vsscale import ArtCNN

doubled = ArtCNN().scale(clip, clip.width * 2, clip.height * 2)

Initializes the scaler with the specified parameters.

Parameters:

  • backend

    (BackendLike | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

The number of tiles to split the image into. This can help reduce VRAM usage, but note that the model's behavior may vary when tiles are used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • max_instances

    (int, default: 2 ) –

    Maximum instances to spawn when scaling a variable resolution clip.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
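
As a sketch of how the tiling parameters combine (the tile grid and overlap below are illustrative values, not recommendations):

from vsscale import ArtCNN

# Split the image into a 2x2 grid of tiles with 16 px of overlap to reduce VRAM usage.
doubled = ArtCNN(tiles=(2, 2), overlap=16).scale(clip, clip.width * 2, clip.height * 2)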

Classes:

  • C16F64

Very fast and good enough for AA purposes, but the ONNX variant is officially deprecated.

  • C16F64_Chroma

    The bigger of the old chroma models.

  • C16F64_DS

    The same as C16F64 but intended to also denoise and sharpen.

  • C4F16

    This has 4 internal convolution layers with 16 filters each.

  • C4F16_DN

    The same as C4F16 but intended to also denoise. Works well on noisy sources when you don't want any sharpening.

  • C4F16_DS

    The same as C4F16 but intended to also denoise and sharpen.

  • C4F32

    This has 4 internal convolution layers with 32 filters each.

  • C4F32_Chroma

    The smaller of the chroma models.

  • C4F32_DN

    The same as C4F32 but intended to also denoise. Works well on noisy sources when you don't want any sharpening.

  • C4F32_DS

    The same as C4F32 but intended to also denoise and sharpen.

  • R16F96

    The biggest model. Can compete with or outperform Waifu2x Cunet.

  • R16F96_Chroma

The biggest and fanciest chroma model. Shows almost biblical results on the right sources.

  • R8F64

    A smaller and faster version of R16F96 but very competitive.

  • R8F64_Chroma

    The new and fancy big chroma model.

  • R8F64_DS

    The same as R8F64 but intended to also denoise and sharpen.

  • R8F64_JPEG420

    1x RGB model meant to clean JPEG artifacts and to fix chroma subsampling.

  • R8F64_JPEG444

    1x RGB model meant to clean JPEG artifacts.

Methods:

  • calc_tilesize

Reimplementation of the vsmlrt.calc_tilesize helper function.

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args

    Generate the keyword arguments used for scaling.

  • implemented_funcs

    Returns a set of function names that are implemented in the current class and the parent classes.

  • inference

    Runs inference on the given video clip using the configured model and backend.

  • kernel_radius

    Return the effective kernel radius for the scaler.

  • postprocess_clip

    Handles postprocessing of the model's output after inference.

  • preprocess_clip
  • scale

    Scale the given clip using the ONNX model.

  • supersample

    Supersample a clip by a given scaling factor.

Source code in vsscale/onnx.py
def __init__(
    self,
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any,
) -> None:
    """
    Initializes the scaler with the specified parameters.

    Args:
        backend: The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will
            be automatically selected, prioritizing fp16 support.
        tiles: The number of tiles to split the image into. This can help reduce VRAM usage, but note that the
            model's behavior may vary when tiles are used.
        tilesize: The size of each tile when splitting the image (if tiles are enabled).
        overlap: The size of overlap between tiles.
        max_instances: Maximum instances to spawn when scaling a variable resolution clip.
        kernel: Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
        scaler: Scaler used for scaling operations. Defaults to kernel.
        shifter: Kernel used for shifting operations. Defaults to kernel.
        **kwargs: Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
    """
    from vsmlrt import ArtCNNModel, models_path

    super().__init__(
        (SPath(models_path) / "ArtCNN" / f"{ArtCNNModel(self._model).name}.onnx").to_str(),
        backend,
        tiles,
        tilesize,
        overlap,
        1,
        max_instances,
        kernel=kernel,
        scaler=scaler,
        shifter=shifter,
        **kwargs,
    )

backend instance-attribute

backend = autoselect_backend(**(default_args | kwargs))

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(resolve())

multiple instance-attribute

multiple = multiple

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

pretty_string property

pretty_string: str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

C16F64

C16F64(
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: BaseArtCNNLuma

Very fast and good enough for AA purposes, but the ONNX variant is officially deprecated.

This has 16 internal convolution layers with 64 filters each.

ONNX files available at https://github.com/Artoriuz/ArtCNN/tree/388b91797ff2e675fd03065953cc1147d6f972c2/ONNX

Example usage:

from vsscale import ArtCNN

doubled = ArtCNN.C16F64().scale(clip, clip.width * 2, clip.height * 2)

Initializes the scaler with the specified parameters.

Parameters:

  • backend

    (BackendLike | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

The number of tiles to split the image into. This can help reduce VRAM usage, but note that the model's behavior may vary when tiles are used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • max_instances

    (int, default: 2 ) –

    Maximum instances to spawn when scaling a variable resolution clip.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.

Methods:

  • calc_tilesize

Reimplementation of the vsmlrt.calc_tilesize helper function.

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args

    Generate the keyword arguments used for scaling.

  • implemented_funcs

    Returns a set of function names that are implemented in the current class and the parent classes.

  • inference

    Runs inference on the given video clip using the configured model and backend.

  • kernel_radius

    Return the effective kernel radius for the scaler.

  • postprocess_clip

    Handles postprocessing of the model's output after inference.

  • preprocess_clip
  • scale

    Scale the given clip using the ONNX model.

  • supersample

    Supersample a clip by a given scaling factor.

Source code in vsscale/onnx.py
def __init__(
    self,
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any,
) -> None:
    """
    Initializes the scaler with the specified parameters.

    Args:
        backend: The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will
            be automatically selected, prioritizing fp16 support.
        tiles: The number of tiles to split the image into. This can help reduce VRAM usage, but note that the
            model's behavior may vary when tiles are used.
        tilesize: The size of each tile when splitting the image (if tiles are enabled).
        overlap: The size of overlap between tiles.
        max_instances: Maximum instances to spawn when scaling a variable resolution clip.
        kernel: Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
        scaler: Scaler used for scaling operations. Defaults to kernel.
        shifter: Kernel used for shifting operations. Defaults to kernel.
        **kwargs: Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
    """
    from vsmlrt import ArtCNNModel, models_path

    super().__init__(
        (SPath(models_path) / "ArtCNN" / f"{ArtCNNModel(self._model).name}.onnx").to_str(),
        backend,
        tiles,
        tilesize,
        overlap,
        1,
        max_instances,
        kernel=kernel,
        scaler=scaler,
        shifter=shifter,
        **kwargs,
    )

backend instance-attribute

backend = autoselect_backend(**(default_args | kwargs))

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(resolve())

multiple instance-attribute

multiple = multiple

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

pretty_string property

pretty_string: str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]

Reimplementation of the vsmlrt.calc_tilesize helper function.

Source code in vsscale/onnx.py
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    """
    Reimplementation of the vsmlrt.calc_tilesize helper function.
    """

    from vsmlrt import calc_tilesize

    kwargs = {
        "tiles": self.tiles,
        "tilesize": self.tilesize,
        "width": clip.width,
        "height": clip.height,
        "multiple": self.multiple,
        "overlap_w": self.overlap_w,
        "overlap_h": self.overlap_h,
    } | kwargs

    return calc_tilesize(**kwargs)

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code in vskernels/abstract/base.py
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code in vskernels/abstract/base.py
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width
    (int | None, default: None ) –

    Target width.

  • height
    (int | None, default: None ) –

    Target height.

  • **kwargs
    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.
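
The implementation below merges three dicts with |; later operands take precedence, so entries in self.kwargs override the computed defaults and call-time kwargs override both. A minimal, self-contained illustration of that precedence:

defaults = {"width": 1280, "height": 720, "src_top": 0.0, "src_left": 0.0}
instance_kwargs = {"src_top": 0.25}  # stands in for self.kwargs
call_kwargs = {"src_top": 0.5}       # stands in for **kwargs

print(defaults | instance_kwargs | call_kwargs)  # src_top resolves to 0.5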

Source code in vskernels/abstract/base.py
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    Args:
        clip: The source clip.
        shift: Subpixel shift (top, left).
        width: Target width.
        height: Target height.
        **kwargs: Extra parameters to merge.

    Returns:
        Final dictionary of keyword arguments for the scale function.
    """
    return {"width": width, "height": height, "src_top": shift[0], "src_left": shift[1]} | self.kwargs | kwargs

implemented_funcs classmethod

implemented_funcs() -> frozenset[str]

Returns a set of function names that are implemented in the current class and the parent classes.

These functions determine which keyword arguments will be extracted from the __init__ method.

Returns:

  • frozenset[str]

    Frozen set of function names.

Source code in vskernels/abstract/base.py
@classproperty.cached
@classmethod
def implemented_funcs(cls) -> frozenset[str]:
    """
    Returns a set of function names that are implemented in the current class and the parent classes.

    These functions determine which keyword arguments will be extracted from the __init__ method.

    Returns:
        Frozen set of function names.
    """
    return frozenset(func for klass in cls.mro() for func in getattr(klass, "_implemented_funcs", ()))

inference

inference(clip: VideoNode, **kwargs: Any) -> VideoNode

Runs inference on the given video clip using the configured model and backend.

Source code in vsscale/onnx.py
def inference(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    """
    Runs inference on the given video clip using the configured model and backend.
    """

    from vsmlrt import inference

    tilesize, overlaps = self.calc_tilesize(clip)

    debug(f"{self}: Passing clip to inference: {clip.format!r}")
    debug(f"{self}: Passing model: {self.model}")
    debug(f"{self}: Passing tiles size: {tilesize}")
    debug(f"{self}: Passing overlaps: {overlaps}")
    debug(f"{self}: Passing extra kwargs: {kwargs}")

    return inference(clip, self.model, overlaps, tilesize, self.backend, **kwargs)

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Returns:

  • int

    Kernel radius.

Source code in vskernels/abstract/base.py
@BaseScalerMeta.cachedproperty
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    Raises:
        CustomNotImplementedError: If no kernel radius is defined.

    Returns:
        Kernel radius.
    """
    ...

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode

Handles postprocessing of the model's output after inference.

Source code in vsscale/onnx.py
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    """
    Handles postprocessing of the model's output after inference.
    """
    debug(f"{self}.post: Before pp; Clip format is {clip.format!r}")

    clip = depth(
        clip,
        input_clip,
        dither_type=DitherType.ORDERED if 0 in {clip.width, clip.height} else DitherType.AUTO,
        **kwargs,
    )

    debug(f"{self}.post: After pp; Clip format is {clip.format!r}")

    return clip

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    return super().preprocess_clip(get_y(clip), **kwargs)

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> VideoNode

Scale the given clip using the ONNX model.

Parameters:

  • clip
    (VideoNode) –

    The input clip to be scaled.

  • width
    (int | None, default: None ) –

    The target width for scaling. If None, the width of the input clip will be used.

  • height
    (int | None, default: None ) –

    The target height for scaling. If None, the height of the input clip will be used.

  • shift
    (tuple[float, float], default: (0, 0) ) –

    A tuple representing the shift values for the x and y axes.

  • **kwargs
    (Any, default: {} ) –

Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _finish_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method. Use the prefix inference_ to pass an argument to the inference method.

Returns:

  • VideoNode

    The scaled clip.
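
For example, a sketch of the prefix routing; the name some_option is hypothetical and stands in for any parameter the target method actually accepts:

from vsscale import ArtCNN

doubled = ArtCNN().scale(
    clip,
    clip.width * 2,
    clip.height * 2,
    inference_some_option=123,  # prefix is stripped; some_option=123 reaches inference()
)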

Source code in vsscale/onnx.py
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any,
) -> vs.VideoNode:
    """
    Scale the given clip using the ONNX model.

    Args:
        clip: The input clip to be scaled.
        width: The target width for scaling. If None, the width of the input clip will be used.
        height: The target height for scaling. If None, the height of the input clip will be used.
        shift: A tuple representing the shift values for the x and y axes.
        **kwargs: Additional arguments to be passed to the `preprocess_clip`, `postprocess_clip`, `inference`, and
            `_finish_scale` methods. Use the prefix `preprocess_` or `postprocess_` to pass an argument to the
            respective method. Use the prefix `inference_` to pass an argument to the inference method.

    Returns:
        The scaled clip.
    """
    from vsmlrt import Backend

    width, height = self._wh_norm(clip, width, height)

    preprocess_kwargs = dict[str, Any]()
    postprocess_kwargs = dict[str, Any]()
    inference_kwargs = dict[str, Any]()

    for k in kwargs.copy():
        for prefix, ckwargs in zip(
            ("preprocess_", "postprocess_", "inference_"), (preprocess_kwargs, postprocess_kwargs, inference_kwargs)
        ):
            if k.startswith(prefix):
                ckwargs[k.removeprefix(prefix)] = kwargs.pop(k)
                break

    debug(f"{self}: Preprocess kwargs: {preprocess_kwargs}")
    debug(f"{self}: Postprocess kwargs: {postprocess_kwargs}")
    debug(f"{self}: Inference kwargs: {inference_kwargs}")

    wclip = self.preprocess_clip(clip, **preprocess_kwargs)

    if 0 not in {clip.width, clip.height}:
        scaled = self.inference(wclip, **inference_kwargs)
    else:
        debug(f"{self}: Variable resolution clip detected!")

        if not isinstance(self.backend, (Backend.TRT, Backend.TRT_RTX)):
            raise CustomValueError(
                "Variable resolution clips can only be processed with TRT Backend!", self.__class__, self.backend
            )

        warning(f"{self.__class__.__name__}: Variable resolution clip detected!")

        if self.backend.static_shape:
            warning("static_shape is True, setting it to False...")
            self.backend.static_shape = False

        if not self.backend.max_shapes:
            warning("max_shapes is None, setting it to (1936, 1088). You may want to adjust it...")
            self.backend.max_shapes = (1936, 1088)

        if not self.backend.opt_shapes:
            warning("opt_shapes is None, setting it to (64, 64). You may want to adjust it...")
            self.backend.opt_shapes = (64, 64)

        scaled = ProcessVariableResClip.from_func(
            wclip, lambda c: self.inference(c, **inference_kwargs), False, wclip.format, self.max_instances
        )

    scaled = self.postprocess_clip(scaled, clip, **postprocess_kwargs)

    return self._finish_scale(scaled, clip, width, height, shift, **kwargs)
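
As the branch above shows, variable-resolution clips are only accepted with a TRT backend. A sketch of configuring one up front so the warning-path defaults are not applied (the shape values are illustrative):

from vsmlrt import Backend
from vsscale import ArtCNN

backend = Backend.TRT(static_shape=False, max_shapes=(1936, 1088), opt_shapes=(64, 64))
scaler = ArtCNN(backend=backend)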

supersample

supersample(
    clip: VideoNode,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNode

Supersample a clip by a given scaling factor.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • rfactor
    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Returns:

  • VideoNode

    The supersampled clip.
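
For instance, a minimal sketch (rfactor 2 matches the native 2x ArtCNN models):

from vsscale import ArtCNN

doubled = ArtCNN().supersample(clip, 2)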

Source code in vskernels/abstract/base.py
def supersample(
    self, clip: vs.VideoNode, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> vs.VideoNode:
    """
    Supersample a clip by a given scaling factor.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        rfactor: Scaling factor for supersampling.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Raises:
        CustomValueError: If resulting resolution is non-positive.

    Returns:
        The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)

C16F64_Chroma

C16F64_Chroma(
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: BaseArtCNNChroma

The bigger of the old chroma models.

These don't double the input clip; rather, they just try to enhance the chroma using luma information.

Example usage:

from vsscale import ArtCNN

chroma_upscaled = ArtCNN.C16F64_Chroma().scale(clip)

Initializes the scaler with the specified parameters.

Parameters:

  • backend

    (BackendLike | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

The number of tiles to split the image into. This can help reduce VRAM usage, but note that the model's behavior may vary when tiles are used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • max_instances

    (int, default: 2 ) –

    Maximum instances to spawn when scaling a variable resolution clip.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.

Methods:

  • calc_tilesize

Reimplementation of the vsmlrt.calc_tilesize helper function.

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args

    Generate the keyword arguments used for scaling.

  • implemented_funcs

    Returns a set of function names that are implemented in the current class and the parent classes.

  • inference
  • kernel_radius

    Return the effective kernel radius for the scaler.

  • postprocess_clip
  • preprocess_clip
  • scale

    Scale the given clip using the ONNX model.

  • supersample

    Supersample a clip by a given scaling factor.

Source code in vsscale/onnx.py
def __init__(
    self,
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any,
) -> None:
    """
    Initializes the scaler with the specified parameters.

    Args:
        backend: The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will
            be automatically selected, prioritizing fp16 support.
        tiles: The number of tiles to split the image into. This can help reduce VRAM usage, but note that the
            model's behavior may vary when tiles are used.
        tilesize: The size of each tile when splitting the image (if tiles are enabled).
        overlap: The size of overlap between tiles.
        max_instances: Maximum instances to spawn when scaling a variable resolution clip.
        kernel: Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
        scaler: Scaler used for scaling operations. Defaults to kernel.
        shifter: Kernel used for shifting operations. Defaults to kernel.
        **kwargs: Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
    """
    from vsmlrt import ArtCNNModel, models_path

    super().__init__(
        (SPath(models_path) / "ArtCNN" / f"{ArtCNNModel(self._model).name}.onnx").to_str(),
        backend,
        tiles,
        tilesize,
        overlap,
        1,
        max_instances,
        kernel=kernel,
        scaler=scaler,
        shifter=shifter,
        **kwargs,
    )

backend instance-attribute

backend = autoselect_backend(**(default_args | kwargs))

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(resolve())

multiple instance-attribute

multiple = multiple

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

pretty_string property

pretty_string: str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]

Reimplementation of the vsmlrt.calc_tilesize helper function.

Source code in vsscale/onnx.py
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    """
    Reimplementation of the vsmlrt.calc_tilesize helper function.
    """

    from vsmlrt import calc_tilesize

    kwargs = {
        "tiles": self.tiles,
        "tilesize": self.tilesize,
        "width": clip.width,
        "height": clip.height,
        "multiple": self.multiple,
        "overlap_w": self.overlap_w,
        "overlap_h": self.overlap_h,
    } | kwargs

    return calc_tilesize(**kwargs)

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code in vskernels/abstract/base.py
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code in vskernels/abstract/base.py
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width
    (int | None, default: None ) –

    Target width.

  • height
    (int | None, default: None ) –

    Target height.

  • **kwargs
    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.

Source code in vskernels/abstract/base.py
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    Args:
        clip: The source clip.
        shift: Subpixel shift (top, left).
        width: Target width.
        height: Target height.
        **kwargs: Extra parameters to merge.

    Returns:
        Final dictionary of keyword arguments for the scale function.
    """
    return {"width": width, "height": height, "src_top": shift[0], "src_left": shift[1]} | self.kwargs | kwargs

implemented_funcs classmethod

implemented_funcs() -> frozenset[str]

Returns a set of function names that are implemented in the current class and the parent classes.

These functions determine which keyword arguments will be extracted from the __init__ method.

Returns:

  • frozenset[str]

    Frozen set of function names.

Source code in vskernels/abstract/base.py
@classproperty.cached
@classmethod
def implemented_funcs(cls) -> frozenset[str]:
    """
    Returns a set of function names that are implemented in the current class and the parent classes.

    These functions determine which keyword arguments will be extracted from the __init__ method.

    Returns:
        Frozen set of function names.
    """
    return frozenset(func for klass in cls.mro() for func in getattr(klass, "_implemented_funcs", ()))

inference

inference(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
def inference(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    from vsmlrt import flexible_inference

    tilesize, overlaps = self.calc_tilesize(clip)

    debug(f"{self}: Passing clip to inference: {clip.format!r}")
    debug(f"{self}: Passing model: {self.model}")
    debug(f"{self}: Passing tiles size: {tilesize}")
    debug(f"{self}: Passing overlaps: {overlaps}")
    debug(f"{self}: Passing extra kwargs: {kwargs}")

    u, v = flexible_inference(clip, self.model, overlaps, tilesize, self.backend, **kwargs)

    debug(f"{self}: Inferenced clip: {u.format!r}")
    debug(f"{self}: Inferenced clip: {v.format!r}")

    return core.std.ShufflePlanes([clip, u, v], [0, 0, 0], vs.YUV, clip)

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Returns:

  • int

    Kernel radius.

Source code in vskernels/abstract/base.py
@BaseScalerMeta.cachedproperty
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    Raises:
        CustomNotImplementedError: If no kernel radius is defined.

    Returns:
        Kernel radius.
    """
    ...

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode
Source code in vsscale/onnx.py
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
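    # Shift the chroma planes back down by 0.5, undoing the offset added in preprocess_clip.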
    clip = norm_expr(clip, "x 0.5 -", [1, 2], func=self.__class__)
    return super().postprocess_clip(clip, input_clip, **kwargs)

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    assert clip.format.color_family == vs.YUV

    if clip.format.subsampling_h != 0 or clip.format.subsampling_w != 0:
        chroma_scaler = Kernel.ensure_obj(kwargs.pop("chroma_scaler", Bilinear))

        format = clip.format.replace(
            subsampling_h=0,
            subsampling_w=0,
            sample_type=vs.FLOAT,
            bits_per_sample=self._pick_precision(16, 32),
        )
        dither_type = DitherType.ORDERED if DitherType.should_dither(clip.format, format) else DitherType.NONE

        debug(f"{self}.pre: Before pp; Clip format is {clip.format!r}")

        clip = limiter(
            chroma_scaler.resample(clip, **dict[str, Any](format=format, dither_type=dither_type) | kwargs),
            func=self.__class__,
        )

        debug(f"{self}.pre: After pp; Clip format is {clip.format!r}")

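        # Recenter the chroma planes by adding 0.5 (from a [-0.5, 0.5] range to [0, 1]) before inference.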
        return norm_expr(clip, "x 0.5 +", [1, 2], func=self.__class__)

    return norm_expr(super().preprocess_clip(clip, **kwargs), "x 0.5 +", [1, 2], func=self.__class__)

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> VideoNode

Scale the given clip using the ONNX model.

Parameters:

  • clip
    (VideoNode) –

    The input clip to be scaled.

  • width
    (int | None, default: None ) –

    The target width for scaling. If None, the width of the input clip will be used.

  • height
    (int | None, default: None ) –

    The target height for scaling. If None, the height of the input clip will be used.

  • shift
    (tuple[float, float], default: (0, 0) ) –

    A tuple representing the shift values for the x and y axes.

  • **kwargs
    (Any, default: {} ) –

Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _finish_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method. Use the prefix inference_ to pass an argument to the inference method.

Returns:

  • VideoNode

    The scaled clip.

Source code in vsscale/onnx.py
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any,
) -> vs.VideoNode:
    """
    Scale the given clip using the ONNX model.

    Args:
        clip: The input clip to be scaled.
        width: The target width for scaling. If None, the width of the input clip will be used.
        height: The target height for scaling. If None, the height of the input clip will be used.
        shift: A tuple representing the shift values for the x and y axes.
        **kwargs: Additional arguments to be passed to the `preprocess_clip`, `postprocess_clip`, `inference`, and
            `_finish_scale` methods. Use the prefix `preprocess_` or `postprocess_` to pass an argument to the
            respective method. Use the prefix `inference_` to pass an argument to the inference method.

    Returns:
        The scaled clip.
    """
    from vsmlrt import Backend

    width, height = self._wh_norm(clip, width, height)

    preprocess_kwargs = dict[str, Any]()
    postprocess_kwargs = dict[str, Any]()
    inference_kwargs = dict[str, Any]()

    for k in kwargs.copy():
        for prefix, ckwargs in zip(
            ("preprocess_", "postprocess_", "inference_"), (preprocess_kwargs, postprocess_kwargs, inference_kwargs)
        ):
            if k.startswith(prefix):
                ckwargs[k.removeprefix(prefix)] = kwargs.pop(k)
                break

    debug(f"{self}: Preprocess kwargs: {preprocess_kwargs}")
    debug(f"{self}: Postprocess kwargs: {postprocess_kwargs}")
    debug(f"{self}: Inference kwargs: {inference_kwargs}")

    wclip = self.preprocess_clip(clip, **preprocess_kwargs)

    if 0 not in {clip.width, clip.height}:
        scaled = self.inference(wclip, **inference_kwargs)
    else:
        debug(f"{self}: Variable resolution clip detected!")

        if not isinstance(self.backend, (Backend.TRT, Backend.TRT_RTX)):
            raise CustomValueError(
                "Variable resolution clips can only be processed with TRT Backend!", self.__class__, self.backend
            )

        warning(f"{self.__class__.__name__}: Variable resolution clip detected!")

        if self.backend.static_shape:
            warning("static_shape is True, setting it to False...")
            self.backend.static_shape = False

        if not self.backend.max_shapes:
            warning("max_shapes is None, setting it to (1936, 1088). You may want to adjust it...")
            self.backend.max_shapes = (1936, 1088)

        if not self.backend.opt_shapes:
            warning("opt_shapes is None, setting it to (64, 64). You may want to adjust it...")
            self.backend.opt_shapes = (64, 64)

        scaled = ProcessVariableResClip.from_func(
            wclip, lambda c: self.inference(c, **inference_kwargs), False, wclip.format, self.max_instances
        )

    scaled = self.postprocess_clip(scaled, clip, **postprocess_kwargs)

    return self._finish_scale(scaled, clip, width, height, shift, **kwargs)

supersample

supersample(
    clip: VideoNode,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNode

Supersample a clip by a given scaling factor.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • rfactor
    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Returns:

  • VideoNode

    The supersampled clip.

Source code in vskernels/abstract/base.py
def supersample(
    self, clip: vs.VideoNode, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> vs.VideoNode:
    """
    Supersample a clip by a given scaling factor.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        rfactor: Scaling factor for supersampling.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Raises:
        CustomValueError: If resulting resolution is non-positive.

    Returns:
        The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)

C16F64_DS

C16F64_DS(
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: BaseArtCNNLuma

The same as C16F64 but intended to also denoise and sharpen.

Example usage:

from vsscale import ArtCNN

doubled = ArtCNN.C16F64_DS().scale(clip, clip.width * 2, clip.height * 2)

Initializes the scaler with the specified parameters.

Parameters:

  • backend

    (BackendLike | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

The number of tiles to split the image into. This can help reduce VRAM usage, but note that the model's behavior may vary when tiles are used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • max_instances

    (int, default: 2 ) –

    Maximum instances to spawn when scaling a variable resolution clip.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.

Methods:

  • calc_tilesize

Reimplementation of the vsmlrt.calc_tilesize helper function.

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args

    Generate the keyword arguments used for scaling.

  • implemented_funcs

    Returns a set of function names that are implemented in the current class and the parent classes.

  • inference

    Runs inference on the given video clip using the configured model and backend.

  • kernel_radius

    Return the effective kernel radius for the scaler.

  • postprocess_clip

    Handles postprocessing of the model's output after inference.

  • preprocess_clip
  • scale

    Scale the given clip using the ONNX model.

  • supersample

    Supersample a clip by a given scaling factor.

Source code in vsscale/onnx.py
def __init__(
    self,
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any,
) -> None:
    """
    Initializes the scaler with the specified parameters.

    Args:
        backend: The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will
            be automatically selected, prioritizing fp16 support.
        tiles: The number of tiles to split the image into. This can help reduce VRAM usage, but note that the
            model's behavior may vary when tiles are used.
        tilesize: The size of each tile when splitting the image (if tiles are enabled).
        overlap: The size of overlap between tiles.
        max_instances: Maximum instances to spawn when scaling a variable resolution clip.
        kernel: Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
        scaler: Scaler used for scaling operations. Defaults to kernel.
        shifter: Kernel used for shifting operations. Defaults to kernel.
        **kwargs: Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
    """
    from vsmlrt import ArtCNNModel, models_path

    super().__init__(
        (SPath(models_path) / "ArtCNN" / f"{ArtCNNModel(self._model).name}.onnx").to_str(),
        backend,
        tiles,
        tilesize,
        overlap,
        1,
        max_instances,
        kernel=kernel,
        scaler=scaler,
        shifter=shifter,
        **kwargs,
    )

backend instance-attribute

backend = autoselect_backend(**(default_args | kwargs))

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(resolve())

multiple instance-attribute

multiple = multiple

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

pretty_string property

pretty_string: str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]

Reimplementation of the vsmlrt.calc_tilesize helper function.

Source code in vsscale/onnx.py
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    """
    Reimplementation of the vsmlrt.calc_tilesize helper function.
    """

    from vsmlrt import calc_tilesize

    kwargs = {
        "tiles": self.tiles,
        "tilesize": self.tilesize,
        "width": clip.width,
        "height": clip.height,
        "multiple": self.multiple,
        "overlap_w": self.overlap_w,
        "overlap_h": self.overlap_h,
    } | kwargs

    return calc_tilesize(**kwargs)
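
As a quick sketch of how the tiling parameters resolve, the computed tile size and overlaps can be inspected directly (a hypothetical 2x2 tiling; the actual values come from vsmlrt.calc_tilesize):

# First element is the (width, height) of each tile, second the overlaps.
(tile_w, tile_h), (overlap_w, overlap_h) = ArtCNN(tiles=2).calc_tilesize(clip)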

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExcept | None, default: None ) –

    Function to use for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code in vskernels/abstract/base.py
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function to use for custom error handling.

    Returns:
        Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExcept | None, default: None ) –

    Function to use for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code in vskernels/abstract/base.py
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function to use for custom error handling.

    Returns:
        Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)
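
A short sketch of the difference between the two resolvers, assuming (as the signatures suggest) that None falls back to the class itself or its default instance:

# from_param resolves to a type; ensure_obj resolves to an instance.
scaler_cls = ArtCNN.from_param(None)  # -> type[ArtCNN]
scaler = ArtCNN.ensure_obj(None)      # -> ArtCNN()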

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width
    (int | None, default: None ) –

    Target width.

  • height
    (int | None, default: None ) –

    Target height.

  • **kwargs
    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.

Source code in vskernels/abstract/base.py
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    Args:
        clip: The source clip.
        shift: Subpixel shift (top, left).
        width: Target width.
        height: Target height.
        **kwargs: Extra parameters to merge.

    Returns:
        Final dictionary of keyword arguments for the scale function.
    """
    return {"width": width, "height": height, "src_top": shift[0], "src_left": shift[1]} | self.kwargs | kwargs

implemented_funcs classmethod

implemented_funcs() -> frozenset[str]

Returns a set of function names that are implemented in the current class and the parent classes.

These functions determine which keyword arguments will be extracted from the __init__ method.

Returns:

  • frozenset[str]

    Frozen set of function names.

Source code in vskernels/abstract/base.py
@classproperty.cached
@classmethod
def implemented_funcs(cls) -> frozenset[str]:
    """
    Returns a set of function names that are implemented in the current class and the parent classes.

    These functions determine which keyword arguments will be extracted from the __init__ method.

    Returns:
        Frozen set of function names.
    """
    return frozenset(func for klass in cls.mro() for func in getattr(klass, "_implemented_funcs", ()))
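
A self-contained sketch of the MRO walk above, with hypothetical classes:

class A:
    _implemented_funcs = ("scale",)

class B(A):
    _implemented_funcs = ("supersample",)

# Walking B.mro() collects entries from every class in the hierarchy.
funcs = frozenset(f for k in B.mro() for f in getattr(k, "_implemented_funcs", ()))
assert funcs == {"scale", "supersample"}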

inference

inference(clip: VideoNode, **kwargs: Any) -> VideoNode

Runs inference on the given video clip using the configured model and backend.

Source code in vsscale/onnx.py
def inference(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    """
    Runs inference on the given video clip using the configured model and backend.
    """

    from vsmlrt import inference

    tilesize, overlaps = self.calc_tilesize(clip)

    debug(f"{self}: Passing clip to inference: {clip.format!r}")
    debug(f"{self}: Passing model: {self.model}")
    debug(f"{self}: Passing tiles size: {tilesize}")
    debug(f"{self}: Passing overlaps: {overlaps}")
    debug(f"{self}: Passing extra kwargs: {kwargs}")

    return inference(clip, self.model, overlaps, tilesize, self.backend, **kwargs)
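
The debug() calls above appear to route through Python's standard logging module; if so, enabling DEBUG output will surface the model, tile size, and overlap values being passed to vsmlrt (this is an assumption about vsscale's logging setup):

import logging

# Assumes vsscale logs via the stdlib logging module.
logging.basicConfig(level=logging.DEBUG)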

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Returns:

  • int

    Kernel radius.

Source code in vskernels/abstract/base.py
@BaseScalerMeta.cachedproperty
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    Raises:
        CustomNotImplementedError: If no kernel radius is defined.

    Returns:
        Kernel radius.
    """
    ...

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode

Handles postprocessing of the model's output after inference.

Source code in vsscale/onnx.py
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    """
    Handles postprocessing of the model's output after inference.
    """
    debug(f"{self}.post: Before pp; Clip format is {clip.format!r}")

    clip = depth(
        clip,
        input_clip,
        dither_type=DitherType.ORDERED if 0 in {clip.width, clip.height} else DitherType.AUTO,
        **kwargs,
    )

    debug(f"{self}.post: After pp; Clip format is {clip.format!r}")

    return clip

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode

Extracts the luma plane from the clip before applying the base preprocessing.

Source code in vsscale/onnx.py
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    return super().preprocess_clip(get_y(clip), **kwargs)

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> VideoNode

Scale the given clip using the ONNX model.

Parameters:

  • clip
    (VideoNode) –

    The input clip to be scaled.

  • width
    (int | None, default: None ) –

    The target width for scaling. If None, the width of the input clip will be used.

  • height
    (int | None, default: None ) –

    The target height for scaling. If None, the height of the input clip will be used.

  • shift
    (tuple[float, float], default: (0, 0) ) –

    A tuple representing the shift values for the x and y axes.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _finish_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method. Use the prefix inference_ to pass an argument to the inference method.

Returns:

  • VideoNode

    The scaled clip.

Source code in vsscale/onnx.py
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any,
) -> vs.VideoNode:
    """
    Scale the given clip using the ONNX model.

    Args:
        clip: The input clip to be scaled.
        width: The target width for scaling. If None, the width of the input clip will be used.
        height: The target height for scaling. If None, the height of the input clip will be used.
        shift: A tuple representing the shift values for the x and y axes.
        **kwargs: Additional arguments to be passed to the `preprocess_clip`, `postprocess_clip`, `inference`, and
            `_finish_scale` methods. Use the prefix `preprocess_` or `postprocess_` to pass an argument to the
            respective method. Use the prefix `inference_` to pass an argument to the inference method.

    Returns:
        The scaled clip.
    """
    from vsmlrt import Backend

    width, height = self._wh_norm(clip, width, height)

    preprocess_kwargs = dict[str, Any]()
    postprocess_kwargs = dict[str, Any]()
    inference_kwargs = dict[str, Any]()

    for k in kwargs.copy():
        for prefix, ckwargs in zip(
            ("preprocess_", "postprocess_", "inference_"), (preprocess_kwargs, postprocess_kwargs, inference_kwargs)
        ):
            if k.startswith(prefix):
                ckwargs[k.removeprefix(prefix)] = kwargs.pop(k)
                break

    debug(f"{self}: Preprocess kwargs: {preprocess_kwargs}")
    debug(f"{self}: Postprocess kwargs: {postprocess_kwargs}")
    debug(f"{self}: Inference kwargs: {inference_kwargs}")

    wclip = self.preprocess_clip(clip, **preprocess_kwargs)

    if 0 not in {clip.width, clip.height}:
        scaled = self.inference(wclip, **inference_kwargs)
    else:
        debug(f"{self}: Variable resolution clip detected!")

        if not isinstance(self.backend, (Backend.TRT, Backend.TRT_RTX)):
            raise CustomValueError(
                "Variable resolution clips can only be processed with TRT Backend!", self.__class__, self.backend
            )

        warning(f"{self.__class__.__name__}: Variable resolution clip detected!")

        if self.backend.static_shape:
            warning("static_shape is True, setting it to False...")
            self.backend.static_shape = False

        if not self.backend.max_shapes:
            warning("max_shapes is None, setting it to (1936, 1088). You may want to adjust it...")
            self.backend.max_shapes = (1936, 1088)

        if not self.backend.opt_shapes:
            warning("opt_shapes is None, setting it to (64, 64). You may want to adjust it...")
            self.backend.opt_shapes = (64, 64)

        scaled = ProcessVariableResClip.from_func(
            wclip, lambda c: self.inference(c, **inference_kwargs), False, wclip.format, self.max_instances
        )

    scaled = self.postprocess_clip(scaled, clip, **postprocess_kwargs)

    return self._finish_scale(scaled, clip, width, height, shift, **kwargs)
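
The prefix routing above can be illustrated standalone; the argument names here are hypothetical, and which keys each stage actually accepts depends on preprocess_clip, postprocess_clip, and vsmlrt's inference:

# Standalone sketch of the prefix routing performed by scale():
kwargs = {"inference_batch": 1, "preprocess_matrix": 1, "strength": 0.5}
buckets = {"preprocess_": {}, "postprocess_": {}, "inference_": {}}
for k in list(kwargs):
    for prefix, bucket in buckets.items():
        if k.startswith(prefix):
            bucket[k.removeprefix(prefix)] = kwargs.pop(k)
            break
# Unprefixed keys remain in kwargs and flow to the final scaling step.
assert kwargs == {"strength": 0.5}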

supersample

supersample(
    clip: VideoNode,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNode

Supersample a clip by a given scaling factor.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • rfactor
    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Returns:

  • VideoNode

    The supersampled clip.

Source code in vskernels/abstract/base.py
def supersample(
    self, clip: vs.VideoNode, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> vs.VideoNode:
    """
    Supersample a clip by a given scaling factor.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        rfactor: Scaling factor for supersampling.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Raises:
        CustomValueError: If resulting resolution is non-positive.

    Returns:
        The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)
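
In other words, supersampling is a convenience wrapper over scale(); for an integer factor the two calls below are equivalent (fractional factors are ceil-rounded):

doubled = ArtCNN().supersample(clip, 2.0)
# equivalent to:
doubled = ArtCNN().scale(clip, clip.width * 2, clip.height * 2)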

C4F16

C4F16(
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: BaseArtCNNLuma

This has 4 internal convolution layers with 16 filters each.

Currently the fastest variant. Not really recommended for any filtering; it should strictly be used for real-time applications, and even then the other non-R variants should be fast enough...

Example usage:

from vsscale import ArtCNN

doubled = ArtCNN.C4F16().scale(clip, clip.width * 2, clip.height * 2)

Initializes the scaler with the specified parameters.

Parameters:

  • backend

    (BackendLike | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

    Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • max_instances

    (int, default: 2 ) –

    Maximum instances to spawn when scaling a variable resolution clip.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.

Methods:

  • calc_tilesize

    Reimplementation of the vsmlrt.calc_tilesize helper function.

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args

    Generate the keyword arguments used for scaling.

  • implemented_funcs

    Returns a set of function names that are implemented in the current class and the parent classes.

  • inference

    Runs inference on the given video clip using the configured model and backend.

  • kernel_radius

    Return the effective kernel radius for the scaler.

  • postprocess_clip

    Handles postprocessing of the model's output after inference.

  • preprocess_clip
  • scale

    Scale the given clip using the ONNX model.

  • supersample

    Supersample a clip by a given scaling factor.

Attributes:

Source code in vsscale/onnx.py
def __init__(
    self,
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any,
) -> None:
    """
    Initializes the scaler with the specified parameters.

    Args:
        backend: The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will
            be automatically selected, prioritizing fp16 support.
        tiles: Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the
            model's behavior may vary when they are used.
        tilesize: The size of each tile when splitting the image (if tiles are enabled).
        overlap: The size of overlap between tiles.
        max_instances: Maximum instances to spawn when scaling a variable resolution clip.
        kernel: Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
        scaler: Scaler used for scaling operations. Defaults to kernel.
        shifter: Kernel used for shifting operations. Defaults to kernel.
        **kwargs: Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
    """
    from vsmlrt import ArtCNNModel, models_path

    super().__init__(
        (SPath(models_path) / "ArtCNN" / f"{ArtCNNModel(self._model).name}.onnx").to_str(),
        backend,
        tiles,
        tilesize,
        overlap,
        1,
        max_instances,
        kernel=kernel,
        scaler=scaler,
        shifter=shifter,
        **kwargs,
    )

backend instance-attribute

backend = autoselect_backend(**(default_args | kwargs))

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(resolve())

multiple instance-attribute

multiple = multiple

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

pretty_string property

pretty_string: str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]

Reimplementation of the vsmlrt.calc_tilesize helper function.

Source code in vsscale/onnx.py
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    """
    Reimplementation of the vsmlrt.calc_tilesize helper function.
    """

    from vsmlrt import calc_tilesize

    kwargs = {
        "tiles": self.tiles,
        "tilesize": self.tilesize,
        "width": clip.width,
        "height": clip.height,
        "multiple": self.multiple,
        "overlap_w": self.overlap_w,
        "overlap_h": self.overlap_h,
    } | kwargs

    return calc_tilesize(**kwargs)

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExcept | None, default: None ) –

    Function to use for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code in vskernels/abstract/base.py
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function to use for custom error handling.

    Returns:
        Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExcept | None, default: None ) –

    Function to use for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code in vskernels/abstract/base.py
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function to use for custom error handling.

    Returns:
        Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width
    (int | None, default: None ) –

    Target width.

  • height
    (int | None, default: None ) –

    Target height.

  • **kwargs
    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.

Source code in vskernels/abstract/base.py
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    Args:
        clip: The source clip.
        shift: Subpixel shift (top, left).
        width: Target width.
        height: Target height.
        **kwargs: Extra parameters to merge.

    Returns:
        Final dictionary of keyword arguments for the scale function.
    """
    return {"width": width, "height": height, "src_top": shift[0], "src_left": shift[1]} | self.kwargs | kwargs

implemented_funcs classmethod

implemented_funcs() -> frozenset[str]

Returns a set of function names that are implemented in the current class and the parent classes.

These functions determine which keyword arguments will be extracted from the __init__ method.

Returns:

  • frozenset[str]

    Frozen set of function names.

Source code in vskernels/abstract/base.py
@classproperty.cached
@classmethod
def implemented_funcs(cls) -> frozenset[str]:
    """
    Returns a set of function names that are implemented in the current class and the parent classes.

    These functions determine which keyword arguments will be extracted from the __init__ method.

    Returns:
        Frozen set of function names.
    """
    return frozenset(func for klass in cls.mro() for func in getattr(klass, "_implemented_funcs", ()))

inference

inference(clip: VideoNode, **kwargs: Any) -> VideoNode

Runs inference on the given video clip using the configured model and backend.

Source code in vsscale/onnx.py
def inference(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    """
    Runs inference on the given video clip using the configured model and backend.
    """

    from vsmlrt import inference

    tilesize, overlaps = self.calc_tilesize(clip)

    debug(f"{self}: Passing clip to inference: {clip.format!r}")
    debug(f"{self}: Passing model: {self.model}")
    debug(f"{self}: Passing tiles size: {tilesize}")
    debug(f"{self}: Passing overlaps: {overlaps}")
    debug(f"{self}: Passing extra kwargs: {kwargs}")

    return inference(clip, self.model, overlaps, tilesize, self.backend, **kwargs)

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Returns:

  • int

    Kernel radius.

Source code in vskernels/abstract/base.py
@BaseScalerMeta.cachedproperty
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    Raises:
        CustomNotImplementedError: If no kernel radius is defined.

    Returns:
        Kernel radius.
    """
    ...

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode

Handles postprocessing of the model's output after inference.

Source code in vsscale/onnx.py
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    """
    Handles postprocessing of the model's output after inference.
    """
    debug(f"{self}.post: Before pp; Clip format is {clip.format!r}")

    clip = depth(
        clip,
        input_clip,
        dither_type=DitherType.ORDERED if 0 in {clip.width, clip.height} else DitherType.AUTO,
        **kwargs,
    )

    debug(f"{self}.post: After pp; Clip format is {clip.format!r}")

    return clip

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode

Extracts the luma plane from the clip before applying the base preprocessing.

Source code in vsscale/onnx.py
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    return super().preprocess_clip(get_y(clip), **kwargs)

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> VideoNode

Scale the given clip using the ONNX model.

Parameters:

  • clip
    (VideoNode) –

    The input clip to be scaled.

  • width
    (int | None, default: None ) –

    The target width for scaling. If None, the width of the input clip will be used.

  • height
    (int | None, default: None ) –

    The target height for scaling. If None, the height of the input clip will be used.

  • shift
    (tuple[float, float], default: (0, 0) ) –

    A tuple representing the shift values for the x and y axes.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _finish_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method. Use the prefix inference_ to pass an argument to the inference method.

Returns:

  • VideoNode

    The scaled clip.

Source code in vsscale/onnx.py
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any,
) -> vs.VideoNode:
    """
    Scale the given clip using the ONNX model.

    Args:
        clip: The input clip to be scaled.
        width: The target width for scaling. If None, the width of the input clip will be used.
        height: The target height for scaling. If None, the height of the input clip will be used.
        shift: A tuple representing the shift values for the x and y axes.
        **kwargs: Additional arguments to be passed to the `preprocess_clip`, `postprocess_clip`, `inference`, and
            `_finish_scale` methods. Use the prefix `preprocess_` or `postprocess_` to pass an argument to the
            respective method. Use the prefix `inference_` to pass an argument to the inference method.

    Returns:
        The scaled clip.
    """
    from vsmlrt import Backend

    width, height = self._wh_norm(clip, width, height)

    preprocess_kwargs = dict[str, Any]()
    postprocess_kwargs = dict[str, Any]()
    inference_kwargs = dict[str, Any]()

    for k in kwargs.copy():
        for prefix, ckwargs in zip(
            ("preprocess_", "postprocess_", "inference_"), (preprocess_kwargs, postprocess_kwargs, inference_kwargs)
        ):
            if k.startswith(prefix):
                ckwargs[k.removeprefix(prefix)] = kwargs.pop(k)
                break

    debug(f"{self}: Preprocess kwargs: {preprocess_kwargs}")
    debug(f"{self}: Postprocess kwargs: {postprocess_kwargs}")
    debug(f"{self}: Inference kwargs: {inference_kwargs}")

    wclip = self.preprocess_clip(clip, **preprocess_kwargs)

    if 0 not in {clip.width, clip.height}:
        scaled = self.inference(wclip, **inference_kwargs)
    else:
        debug(f"{self}: Variable resolution clip detected!")

        if not isinstance(self.backend, (Backend.TRT, Backend.TRT_RTX)):
            raise CustomValueError(
                "Variable resolution clips can only be processed with TRT Backend!", self.__class__, self.backend
            )

        warning(f"{self.__class__.__name__}: Variable resolution clip detected!")

        if self.backend.static_shape:
            warning("static_shape is True, setting it to False...")
            self.backend.static_shape = False

        if not self.backend.max_shapes:
            warning("max_shapes is None, setting it to (1936, 1088). You may want to adjust it...")
            self.backend.max_shapes = (1936, 1088)

        if not self.backend.opt_shapes:
            warning("opt_shapes is None, setting it to (64, 64). You may want to adjust it...")
            self.backend.opt_shapes = (64, 64)

        scaled = ProcessVariableResClip.from_func(
            wclip, lambda c: self.inference(c, **inference_kwargs), False, wclip.format, self.max_instances
        )

    scaled = self.postprocess_clip(scaled, clip, **postprocess_kwargs)

    return self._finish_scale(scaled, clip, width, height, shift, **kwargs)
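
For variable-resolution clips, the required TRT settings can also be configured up front instead of relying on the runtime fallbacks above. A sketch, assuming the Backend.TRT fields accessed in the code; vres_clip is a hypothetical variable-resolution clip:

from vsmlrt import Backend

backend = Backend.TRT(static_shape=False, max_shapes=(1936, 1088), opt_shapes=(64, 64))
out = ArtCNN.C4F16(backend).scale(vres_clip, 1920, 1080)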

supersample

supersample(
    clip: VideoNode,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNode

Supersample a clip by a given scaling factor.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • rfactor
    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Returns:

  • VideoNode

    The supersampled clip.

Source code in vskernels/abstract/base.py
def supersample(
    self, clip: vs.VideoNode, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> vs.VideoNode:
    """
    Supersample a clip by a given scaling factor.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        rfactor: Scaling factor for supersampling.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Raises:
        CustomValueError: If resulting resolution is non-positive.

    Returns:
        The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)

C4F16_DN

C4F16_DN(
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: BaseArtCNNLuma

The same as C4F16 but intended to also denoise. Works well on noisy sources when you don't want any sharpening.

Example usage:

from vsscale import ArtCNN

doubled = ArtCNN.C4F16_DN().scale(clip, clip.width * 2, clip.height * 2)

Initializes the scaler with the specified parameters.

Parameters:

  • backend

    (BackendLike | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

    Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • max_instances

    (int, default: 2 ) –

    Maximum instances to spawn when scaling a variable resolution clip.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.

Methods:

  • calc_tilesize

    Reimplementation of the vsmlrt.calc_tilesize helper function.

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args

    Generate the keyword arguments used for scaling.

  • implemented_funcs

    Returns a set of function names that are implemented in the current class and the parent classes.

  • inference

    Runs inference on the given video clip using the configured model and backend.

  • kernel_radius

    Return the effective kernel radius for the scaler.

  • postprocess_clip

    Handles postprocessing of the model's output after inference.

  • preprocess_clip
  • scale

    Scale the given clip using the ONNX model.

  • supersample

    Supersample a clip by a given scaling factor.

Attributes:

Source code in vsscale/onnx.py
def __init__(
    self,
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any,
) -> None:
    """
    Initializes the scaler with the specified parameters.

    Args:
        backend: The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will
            be automatically selected, prioritizing fp16 support.
        tiles: Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the
            model's behavior may vary when they are used.
        tilesize: The size of each tile when splitting the image (if tiles are enabled).
        overlap: The size of overlap between tiles.
        max_instances: Maximum instances to spawn when scaling a variable resolution clip.
        kernel: Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
        scaler: Scaler used for scaling operations. Defaults to kernel.
        shifter: Kernel used for shifting operations. Defaults to kernel.
        **kwargs: Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
    """
    from vsmlrt import ArtCNNModel, models_path

    super().__init__(
        (SPath(models_path) / "ArtCNN" / f"{ArtCNNModel(self._model).name}.onnx").to_str(),
        backend,
        tiles,
        tilesize,
        overlap,
        1,
        max_instances,
        kernel=kernel,
        scaler=scaler,
        shifter=shifter,
        **kwargs,
    )

backend instance-attribute

backend = autoselect_backend(**(default_args | kwargs))

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(resolve())

multiple instance-attribute

multiple = multiple

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

pretty_string property

pretty_string: str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]

Reimplementation of the vsmlrt.calc_tilesize helper function.

Source code in vsscale/onnx.py
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    """
    Reimplementation of the vsmlrt.calc_tilesize helper function.
    """

    from vsmlrt import calc_tilesize

    kwargs = {
        "tiles": self.tiles,
        "tilesize": self.tilesize,
        "width": clip.width,
        "height": clip.height,
        "multiple": self.multiple,
        "overlap_w": self.overlap_w,
        "overlap_h": self.overlap_h,
    } | kwargs

    return calc_tilesize(**kwargs)

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExcept | None, default: None ) –

    Function to use for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code in vskernels/abstract/base.py
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function to use for custom error handling.

    Returns:
        Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExcept | None, default: None ) –

    Function to use for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code in vskernels/abstract/base.py
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function to use for custom error handling.

    Returns:
        Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width
    (int | None, default: None ) –

    Target width.

  • height
    (int | None, default: None ) –

    Target height.

  • **kwargs
    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.

Source code in vskernels/abstract/base.py
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    Args:
        clip: The source clip.
        shift: Subpixel shift (top, left).
        width: Target width.
        height: Target height.
        **kwargs: Extra parameters to merge.

    Returns:
        Final dictionary of keyword arguments for the scale function.
    """
    return {"width": width, "height": height, "src_top": shift[0], "src_left": shift[1]} | self.kwargs | kwargs

implemented_funcs classmethod

implemented_funcs() -> frozenset[str]

Returns a set of function names that are implemented in the current class and the parent classes.

These functions determine which keyword arguments will be extracted from the __init__ method.

Returns:

  • frozenset[str]

    Frozen set of function names.

Source code in vskernels/abstract/base.py
@classproperty.cached
@classmethod
def implemented_funcs(cls) -> frozenset[str]:
    """
    Returns a set of function names that are implemented in the current class and the parent classes.

    These functions determine which keyword arguments will be extracted from the __init__ method.

    Returns:
        Frozen set of function names.
    """
    return frozenset(func for klass in cls.mro() for func in getattr(klass, "_implemented_funcs", ()))

inference

inference(clip: VideoNode, **kwargs: Any) -> VideoNode

Runs inference on the given video clip using the configured model and backend.

Source code in vsscale/onnx.py
def inference(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    """
    Runs inference on the given video clip using the configured model and backend.
    """

    from vsmlrt import inference

    tilesize, overlaps = self.calc_tilesize(clip)

    debug(f"{self}: Passing clip to inference: {clip.format!r}")
    debug(f"{self}: Passing model: {self.model}")
    debug(f"{self}: Passing tiles size: {tilesize}")
    debug(f"{self}: Passing overlaps: {overlaps}")
    debug(f"{self}: Passing extra kwargs: {kwargs}")

    return inference(clip, self.model, overlaps, tilesize, self.backend, **kwargs)

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Returns:

  • int

    Kernel radius.

Source code in vskernels/abstract/base.py
@BaseScalerMeta.cachedproperty
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    Raises:
        CustomNotImplementedError: If no kernel radius is defined.

    Returns:
        Kernel radius.
    """
    ...

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode

Handles postprocessing of the model's output after inference.

Source code in vsscale/onnx.py
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    """
    Handles postprocessing of the model's output after inference.
    """
    debug(f"{self}.post: Before pp; Clip format is {clip.format!r}")

    clip = depth(
        clip,
        input_clip,
        dither_type=DitherType.ORDERED if 0 in {clip.width, clip.height} else DitherType.AUTO,
        **kwargs,
    )

    debug(f"{self}.post: After pp; Clip format is {clip.format!r}")

    return clip

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode

Extracts the luma plane from the clip before applying the base preprocessing.

Source code in vsscale/onnx.py
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    return super().preprocess_clip(get_y(clip), **kwargs)

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> VideoNode

Scale the given clip using the ONNX model.

Parameters:

  • clip
    (VideoNode) –

    The input clip to be scaled.

  • width
    (int | None, default: None ) –

    The target width for scaling. If None, the width of the input clip will be used.

  • height
    (int | None, default: None ) –

    The target height for scaling. If None, the height of the input clip will be used.

  • shift
    (tuple[float, float], default: (0, 0) ) –

    A tuple representing the shift values for the x and y axes.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _finish_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method. Use the prefix inference_ to pass an argument to the inference method.

Returns:

  • VideoNode

    The scaled clip.

Source code in vsscale/onnx.py
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any,
) -> vs.VideoNode:
    """
    Scale the given clip using the ONNX model.

    Args:
        clip: The input clip to be scaled.
        width: The target width for scaling. If None, the width of the input clip will be used.
        height: The target height for scaling. If None, the height of the input clip will be used.
        shift: A tuple representing the shift values for the x and y axes.
        **kwargs: Additional arguments to be passed to the `preprocess_clip`, `postprocess_clip`, `inference`, and
            `_finish_scale` methods. Use the prefix `preprocess_` or `postprocess_` to pass an argument to the
            respective method. Use the prefix `inference_` to pass an argument to the inference method.

    Returns:
        The scaled clip.
    """
    from vsmlrt import Backend

    width, height = self._wh_norm(clip, width, height)

    preprocess_kwargs = dict[str, Any]()
    postprocess_kwargs = dict[str, Any]()
    inference_kwargs = dict[str, Any]()

    for k in kwargs.copy():
        for prefix, ckwargs in zip(
            ("preprocess_", "postprocess_", "inference_"), (preprocess_kwargs, postprocess_kwargs, inference_kwargs)
        ):
            if k.startswith(prefix):
                ckwargs[k.removeprefix(prefix)] = kwargs.pop(k)
                break

    debug(f"{self}: Preprocess kwargs: {preprocess_kwargs}")
    debug(f"{self}: Postprocess kwargs: {postprocess_kwargs}")
    debug(f"{self}: Inference kwargs: {inference_kwargs}")

    wclip = self.preprocess_clip(clip, **preprocess_kwargs)

    if 0 not in {clip.width, clip.height}:
        scaled = self.inference(wclip, **inference_kwargs)
    else:
        debug(f"{self}: Variable resolution clip detected!")

        if not isinstance(self.backend, (Backend.TRT, Backend.TRT_RTX)):
            raise CustomValueError(
                "Variable resolution clips can only be processed with TRT Backend!", self.__class__, self.backend
            )

        warning(f"{self.__class__.__name__}: Variable resolution clip detected!")

        if self.backend.static_shape:
            warning("static_shape is True, setting it to False...")
            self.backend.static_shape = False

        if not self.backend.max_shapes:
            warning("max_shapes is None, setting it to (1936, 1088). You may want to adjust it...")
            self.backend.max_shapes = (1936, 1088)

        if not self.backend.opt_shapes:
            warning("opt_shapes is None, setting it to (64, 64). You may want to adjust it...")
            self.backend.opt_shapes = (64, 64)

        scaled = ProcessVariableResClip.from_func(
            wclip, lambda c: self.inference(c, **inference_kwargs), False, wclip.format, self.max_instances
        )

    scaled = self.postprocess_clip(scaled, clip, **postprocess_kwargs)

    return self._finish_scale(scaled, clip, width, height, shift, **kwargs)

supersample

supersample(
    clip: VideoNode,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNode

Supersample a clip by a given scaling factor.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • rfactor
    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Returns:

  • VideoNode

    The supersampled clip.

Source code in vskernels/abstract/base.py
def supersample(
    self, clip: vs.VideoNode, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> vs.VideoNode:
    """
    Supersample a clip by a given scaling factor.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        rfactor: Scaling factor for supersampling.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Raises:
        CustomValueError: If resulting resolution is non-positive.

    Returns:
        The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)

C4F16_DS

C4F16_DS(
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: BaseArtCNNLuma

The same as C4F16 but intended to also denoise and sharpen.

Example usage:

from vsscale import ArtCNN

doubled = ArtCNN.C4F16_DS().scale(clip, clip.width * 2, clip.height * 2)

Initializes the scaler with the specified parameters.

Parameters:

  • backend

    (BackendLike | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

    Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • max_instances

    (int, default: 2 ) –

    Maximum instances to spawn when scaling a variable resolution clip.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.

Methods:

  • calc_tilesize

    Reimplementation of vsmlrt.calc_tilesize helper function

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args

    Generate the keyword arguments used for scaling.

  • implemented_funcs

    Returns a set of function names that are implemented in the current class and the parent classes.

  • inference

    Runs inference on the given video clip using the configured model and backend.

  • kernel_radius

    Return the effective kernel radius for the scaler.

  • postprocess_clip

    Handles postprocessing of the model's output after inference.

  • preprocess_clip
  • scale

    Scale the given clip using the ONNX model.

  • supersample

    Supersample a clip by a given scaling factor.

Attributes:

Source code in vsscale/onnx.py
def __init__(
    self,
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any,
) -> None:
    """
    Initializes the scaler with the specified parameters.

    Args:
        backend: The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will
            be automatically selected, prioritizing fp16 support.
        tiles: Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the
            model's behavior may vary when they are used.
        tilesize: The size of each tile when splitting the image (if tiles are enabled).
        overlap: The size of overlap between tiles.
        max_instances: Maximum instances to spawn when scaling a variable resolution clip.
        kernel: Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
        scaler: Scaler used for scaling operations. Defaults to kernel.
        shifter: Kernel used for shifting operations. Defaults to kernel.
        **kwargs: Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
    """
    from vsmlrt import ArtCNNModel, models_path

    super().__init__(
        (SPath(models_path) / "ArtCNN" / f"{ArtCNNModel(self._model).name}.onnx").to_str(),
        backend,
        tiles,
        tilesize,
        overlap,
        1,
        max_instances,
        kernel=kernel,
        scaler=scaler,
        shifter=shifter,
        **kwargs,
    )

backend instance-attribute

backend = autoselect_backend(**(default_args | kwargs))

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(resolve())

multiple instance-attribute

multiple = multiple

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

pretty_string property

pretty_string: str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]

Reimplementation of vsmlrt.calc_tilesize helper function

Source code in vsscale/onnx.py
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    """
    Reimplementation of vsmlrt.calc_tilesize helper function
    """

    from vsmlrt import calc_tilesize

    kwargs = {
        "tiles": self.tiles,
        "tilesize": self.tilesize,
        "width": clip.width,
        "height": clip.height,
        "multiple": self.multiple,
        "overlap_w": self.overlap_w,
        "overlap_h": self.overlap_h,
    } | kwargs

    return calc_tilesize(**kwargs)
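
A sketch of how the tiling settings feed this helper (clip is assumed to be an existing VideoNode; the values are illustrative):

from vsscale import ArtCNN

scaler = ArtCNN.C4F16_DS(tiles=2, overlap=16)

# Returns ((tile_w, tile_h), (overlap_w, overlap_h)), derived from the
# instance's tiles/tilesize/overlap settings and the clip dimensions.
tilesize, overlaps = scaler.calc_tilesize(clip)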

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code in vskernels/abstract/base.py
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code in vskernels/abstract/base.py
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)
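
A sketch contrasting the two resolvers, assuming the default None falls back to the class itself as described above:

from vsscale import ArtCNN

scaler_t = ArtCNN.C4F16_DS.from_param(None)  # resolves to a type
scaler = ArtCNN.C4F16_DS.ensure_obj(None)    # resolves to an instance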

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width
    (int | None, default: None ) –

    Target width.

  • height
    (int | None, default: None ) –

    Target height.

  • **kwargs
    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.

Source code in vskernels/abstract/base.py
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    Args:
        clip: The source clip.
        shift: Subpixel shift (top, left).
        width: Target width.
        height: Target height.
        **kwargs: Extra parameters to merge.

    Returns:
        Final dictionary of keyword arguments for the scale function.
    """
    return {"width": width, "height": height, "src_top": shift[0], "src_left": shift[1]} | self.kwargs | kwargs

implemented_funcs classmethod

implemented_funcs() -> frozenset[str]

Returns a set of function names that are implemented in the current class and the parent classes.

These functions determine which keyword arguments will be extracted from the __init__ method.

Returns:

  • frozenset[str]

    Frozen set of function names.

Source code in vskernels/abstract/base.py
@classproperty.cached
@classmethod
def implemented_funcs(cls) -> frozenset[str]:
    """
    Returns a set of function names that are implemented in the current class and the parent classes.

    These functions determine which keyword arguments will be extracted from the __init__ method.

    Returns:
        Frozen set of function names.
    """
    return frozenset(func for klass in cls.mro() for func in getattr(klass, "_implemented_funcs", ()))

inference

inference(clip: VideoNode, **kwargs: Any) -> VideoNode

Runs inference on the given video clip using the configured model and backend.

Source code in vsscale/onnx.py
def inference(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    """
    Runs inference on the given video clip using the configured model and backend.
    """

    from vsmlrt import inference

    tilesize, overlaps = self.calc_tilesize(clip)

    debug(f"{self}: Passing clip to inference: {clip.format!r}")
    debug(f"{self}: Passing model: {self.model}")
    debug(f"{self}: Passing tiles size: {tilesize}")
    debug(f"{self}: Passing overlaps: {overlaps}")
    debug(f"{self}: Passing extra kwargs: {kwargs}")

    return inference(clip, self.model, overlaps, tilesize, self.backend, **kwargs)
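
The debug() calls above appear to go through Python's logging machinery, so the exact model, tile size, and overlaps handed to vsmlrt can be inspected by enabling DEBUG output (a sketch; the logging setup is an assumption, not part of this API):

import logging

from vsscale import ArtCNN

logging.basicConfig(level=logging.DEBUG)  # surface the "Passing ..." messages

doubled = ArtCNN.C4F16_DS().scale(clip, clip.width * 2, clip.height * 2)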

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Returns:

  • int

    Kernel radius.

Source code in vskernels/abstract/base.py
@BaseScalerMeta.cachedproperty
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    Raises:
        CustomNotImplementedError: If no kernel radius is defined.

    Returns:
        Kernel radius.
    """
    ...

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode

Handles postprocessing of the model's output after inference.

Source code in vsscale/onnx.py
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    """
    Handles postprocessing of the model's output after inference.
    """
    debug(f"{self}.post: Before pp; Clip format is {clip.format!r}")

    clip = depth(
        clip,
        input_clip,
        dither_type=DitherType.ORDERED if 0 in {clip.width, clip.height} else DitherType.AUTO,
        **kwargs,
    )

    debug(f"{self}.post: After pp; Clip format is {clip.format!r}")

    return clip

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    return super().preprocess_clip(get_y(clip), **kwargs)

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> VideoNode

Scale the given clip using the ONNX model.

Parameters:

  • clip
    (VideoNode) –

    The input clip to be scaled.

  • width
    (int | None, default: None ) –

    The target width for scaling. If None, the width of the input clip will be used.

  • height
    (int | None, default: None ) –

    The target height for scaling. If None, the height of the input clip will be used.

  • shift
    (tuple[float, float], default: (0, 0) ) –

    A tuple representing the shift values for the x and y axes.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments passed to the preprocess_clip, postprocess_clip, inference, and _finish_scale methods. Use the prefix preprocess_, postprocess_, or inference_ to route an argument to the corresponding method.

Returns:

  • VideoNode

    The scaled clip.

Source code in vsscale/onnx.py
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any,
) -> vs.VideoNode:
    """
    Scale the given clip using the ONNX model.

    Args:
        clip: The input clip to be scaled.
        width: The target width for scaling. If None, the width of the input clip will be used.
        height: The target height for scaling. If None, the height of the input clip will be used.
        shift: A tuple representing the shift values for the x and y axes.
        **kwargs: Additional arguments to be passed to the `preprocess_clip`, `postprocess_clip`, `inference`, and
            `_finish_scale` methods. Use the prefix `preprocess_` or `postprocess_` to pass an argument to the
            respective method. Use the prefix `inference_` to pass an argument to the inference method.

    Returns:
        The scaled clip.
    """
    from vsmlrt import Backend

    width, height = self._wh_norm(clip, width, height)

    preprocess_kwargs = dict[str, Any]()
    postprocess_kwargs = dict[str, Any]()
    inference_kwargs = dict[str, Any]()

    for k in kwargs.copy():
        for prefix, ckwargs in zip(
            ("preprocess_", "postprocess_", "inference_"), (preprocess_kwargs, postprocess_kwargs, inference_kwargs)
        ):
            if k.startswith(prefix):
                ckwargs[k.removeprefix(prefix)] = kwargs.pop(k)
                break

    debug(f"{self}: Preprocess kwargs: {preprocess_kwargs}")
    debug(f"{self}: Postprocess kwargs: {postprocess_kwargs}")
    debug(f"{self}: Inference kwargs: {inference_kwargs}")

    wclip = self.preprocess_clip(clip, **preprocess_kwargs)

    if 0 not in {clip.width, clip.height}:
        scaled = self.inference(wclip, **inference_kwargs)
    else:
        debug(f"{self}: Variable resolution clip detected!")

        if not isinstance(self.backend, (Backend.TRT, Backend.TRT_RTX)):
            raise CustomValueError(
                "Variable resolution clips can only be processed with TRT Backend!", self.__class__, self.backend
            )

        warning(f"{self.__class__.__name__}: Variable resolution clip detected!")

        if self.backend.static_shape:
            warning("static_shape is True, setting it to False...")
            self.backend.static_shape = False

        if not self.backend.max_shapes:
            warning("max_shapes is None, setting it to (1936, 1088). You may want to adjust it...")
            self.backend.max_shapes = (1936, 1088)

        if not self.backend.opt_shapes:
            warning("opt_shapes is None, setting it to (64, 64). You may want to adjust it...")
            self.backend.opt_shapes = (64, 64)

        scaled = ProcessVariableResClip.from_func(
            wclip, lambda c: self.inference(c, **inference_kwargs), False, wclip.format, self.max_instances
        )

    scaled = self.postprocess_clip(scaled, clip, **postprocess_kwargs)

    return self._finish_scale(scaled, clip, width, height, shift, **kwargs)
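
As the branch above shows, variable resolution clips are only accepted with a TRT backend, and pre-configuring sensible shapes avoids the fallback warnings. A sketch (Backend.TRT and its fields come from vsmlrt; the shape values are illustrative):

from vsmlrt import Backend
from vsscale import ArtCNN

backend = Backend.TRT(static_shape=False, max_shapes=(1936, 1088), opt_shapes=(64, 64))

doubled = ArtCNN.C4F16_DS(backend).scale(clip, clip.width * 2, clip.height * 2)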

supersample

supersample(
    clip: VideoNode,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNode

Supersample a clip by a given scaling factor.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • rfactor
    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Returns:

  • VideoNode

    The supersampled clip.

Source code in vskernels/abstract/base.py
def supersample(
    self, clip: vs.VideoNode, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> vs.VideoNode:
    """
    Supersample a clip by a given scaling factor.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        rfactor: Scaling factor for supersampling.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Raises:
        CustomValueError: If resulting resolution is non-positive.

    Returns:
        The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)

C4F32

C4F32(
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: BaseArtCNNLuma

This has 4 internal convolution layers with 32 filters each.

Use this if you need an even faster model.

Example usage:

from vsscale import ArtCNN

doubled = ArtCNN.C4F32().scale(clip, clip.width * 2, clip.height * 2)

Initializes the scaler with the specified parameters.

Parameters:

  • backend

    (BackendLike | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

    Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • max_instances

    (int, default: 2 ) –

    Maximum instances to spawn when scaling a variable resolution clip.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.

Methods:

  • calc_tilesize

    Reimplementation of vsmlrt.calc_tilesize helper function

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args

    Generate the keyword arguments used for scaling.

  • implemented_funcs

    Returns a set of function names that are implemented in the current class and the parent classes.

  • inference

    Runs inference on the given video clip using the configured model and backend.

  • kernel_radius

    Return the effective kernel radius for the scaler.

  • postprocess_clip

    Handles postprocessing of the model's output after inference.

  • preprocess_clip
  • scale

    Scale the given clip using the ONNX model.

  • supersample

    Supersample a clip by a given scaling factor.

Attributes:

Source code in vsscale/onnx.py
def __init__(
    self,
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any,
) -> None:
    """
    Initializes the scaler with the specified parameters.

    Args:
        backend: The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will
            be automatically selected, prioritizing fp16 support.
        tiles: Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the
            model's behavior may vary when they are used.
        tilesize: The size of each tile when splitting the image (if tiles are enabled).
        overlap: The size of overlap between tiles.
        max_instances: Maximum instances to spawn when scaling a variable resolution clip.
        kernel: Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
        scaler: Scaler used for scaling operations. Defaults to kernel.
        shifter: Kernel used for shifting operations. Defaults to kernel.
        **kwargs: Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
    """
    from vsmlrt import ArtCNNModel, models_path

    super().__init__(
        (SPath(models_path) / "ArtCNN" / f"{ArtCNNModel(self._model).name}.onnx").to_str(),
        backend,
        tiles,
        tilesize,
        overlap,
        1,
        max_instances,
        kernel=kernel,
        scaler=scaler,
        shifter=shifter,
        **kwargs,
    )

backend instance-attribute

backend = autoselect_backend(**(default_args | kwargs))

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(resolve())

multiple instance-attribute

multiple = multiple

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

pretty_string property

pretty_string: str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]

Reimplementation of vsmlrt.calc_tilesize helper function

Source code in vsscale/onnx.py
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    """
    Reimplementation of vsmlrt.calc_tilesize helper function
    """

    from vsmlrt import calc_tilesize

    kwargs = {
        "tiles": self.tiles,
        "tilesize": self.tilesize,
        "width": clip.width,
        "height": clip.height,
        "multiple": self.multiple,
        "overlap_w": self.overlap_w,
        "overlap_h": self.overlap_h,
    } | kwargs

    return calc_tilesize(**kwargs)

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code in vskernels/abstract/base.py
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code in vskernels/abstract/base.py
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width
    (int | None, default: None ) –

    Target width.

  • height
    (int | None, default: None ) –

    Target height.

  • **kwargs
    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.

Source code in vskernels/abstract/base.py
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    Args:
        clip: The source clip.
        shift: Subpixel shift (top, left).
        width: Target width.
        height: Target height.
        **kwargs: Extra parameters to merge.

    Returns:
        Final dictionary of keyword arguments for the scale function.
    """
    return {"width": width, "height": height, "src_top": shift[0], "src_left": shift[1]} | self.kwargs | kwargs

implemented_funcs classmethod

implemented_funcs() -> frozenset[str]

Returns a set of function names that are implemented in the current class and the parent classes.

These functions determine which keyword arguments will be extracted from the __init__ method.

Returns:

  • frozenset[str]

    Frozen set of function names.

Source code in vskernels/abstract/base.py
@classproperty.cached
@classmethod
def implemented_funcs(cls) -> frozenset[str]:
    """
    Returns a set of function names that are implemented in the current class and the parent classes.

    These functions determine which keyword arguments will be extracted from the __init__ method.

    Returns:
        Frozen set of function names.
    """
    return frozenset(func for klass in cls.mro() for func in getattr(klass, "_implemented_funcs", ()))

inference

inference(clip: VideoNode, **kwargs: Any) -> VideoNode

Runs inference on the given video clip using the configured model and backend.

Source code in vsscale/onnx.py
def inference(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    """
    Runs inference on the given video clip using the configured model and backend.
    """

    from vsmlrt import inference

    tilesize, overlaps = self.calc_tilesize(clip)

    debug(f"{self}: Passing clip to inference: {clip.format!r}")
    debug(f"{self}: Passing model: {self.model}")
    debug(f"{self}: Passing tiles size: {tilesize}")
    debug(f"{self}: Passing overlaps: {overlaps}")
    debug(f"{self}: Passing extra kwargs: {kwargs}")

    return inference(clip, self.model, overlaps, tilesize, self.backend, **kwargs)

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Returns:

  • int

    Kernel radius.

Source code in vskernels/abstract/base.py
@BaseScalerMeta.cachedproperty
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    Raises:
        CustomNotImplementedError: If no kernel radius is defined.

    Returns:
        Kernel radius.
    """
    ...

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode

Handles postprocessing of the model's output after inference.

Source code in vsscale/onnx.py
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    """
    Handles postprocessing of the model's output after inference.
    """
    debug(f"{self}.post: Before pp; Clip format is {clip.format!r}")

    clip = depth(
        clip,
        input_clip,
        dither_type=DitherType.ORDERED if 0 in {clip.width, clip.height} else DitherType.AUTO,
        **kwargs,
    )

    debug(f"{self}.post: After pp; Clip format is {clip.format!r}")

    return clip

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    return super().preprocess_clip(get_y(clip), **kwargs)

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> VideoNode

Scale the given clip using the ONNX model.

Parameters:

  • clip
    (VideoNode) –

    The input clip to be scaled.

  • width
    (int | None, default: None ) –

    The target width for scaling. If None, the width of the input clip will be used.

  • height
    (int | None, default: None ) –

    The target height for scaling. If None, the height of the input clip will be used.

  • shift
    (tuple[float, float], default: (0, 0) ) –

    A tuple representing the shift values for the x and y axes.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments passed to the preprocess_clip, postprocess_clip, inference, and _finish_scale methods. Use the prefix preprocess_, postprocess_, or inference_ to route an argument to the corresponding method.

Returns:

  • VideoNode

    The scaled clip.

Source code in vsscale/onnx.py
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any,
) -> vs.VideoNode:
    """
    Scale the given clip using the ONNX model.

    Args:
        clip: The input clip to be scaled.
        width: The target width for scaling. If None, the width of the input clip will be used.
        height: The target height for scaling. If None, the height of the input clip will be used.
        shift: A tuple representing the shift values for the x and y axes.
        **kwargs: Additional arguments to be passed to the `preprocess_clip`, `postprocess_clip`, `inference`, and
            `_finish_scale` methods. Use the prefix `preprocess_` or `postprocess_` to pass an argument to the
            respective method. Use the prefix `inference_` to pass an argument to the inference method.

    Returns:
        The scaled clip.
    """
    from vsmlrt import Backend

    width, height = self._wh_norm(clip, width, height)

    preprocess_kwargs = dict[str, Any]()
    postprocess_kwargs = dict[str, Any]()
    inference_kwargs = dict[str, Any]()

    for k in kwargs.copy():
        for prefix, ckwargs in zip(
            ("preprocess_", "postprocess_", "inference_"), (preprocess_kwargs, postprocess_kwargs, inference_kwargs)
        ):
            if k.startswith(prefix):
                ckwargs[k.removeprefix(prefix)] = kwargs.pop(k)
                break

    debug(f"{self}: Preprocess kwargs: {preprocess_kwargs}")
    debug(f"{self}: Postprocess kwargs: {postprocess_kwargs}")
    debug(f"{self}: Inference kwargs: {inference_kwargs}")

    wclip = self.preprocess_clip(clip, **preprocess_kwargs)

    if 0 not in {clip.width, clip.height}:
        scaled = self.inference(wclip, **inference_kwargs)
    else:
        debug(f"{self}: Variable resolution clip detected!")

        if not isinstance(self.backend, (Backend.TRT, Backend.TRT_RTX)):
            raise CustomValueError(
                "Variable resolution clips can only be processed with TRT Backend!", self.__class__, self.backend
            )

        warning(f"{self.__class__.__name__}: Variable resolution clip detected!")

        if self.backend.static_shape:
            warning("static_shape is True, setting it to False...")
            self.backend.static_shape = False

        if not self.backend.max_shapes:
            warning("max_shapes is None, setting it to (1936, 1088). You may want to adjust it...")
            self.backend.max_shapes = (1936, 1088)

        if not self.backend.opt_shapes:
            warning("opt_shapes is None, setting it to (64, 64). You may want to adjust it...")
            self.backend.opt_shapes = (64, 64)

        scaled = ProcessVariableResClip.from_func(
            wclip, lambda c: self.inference(c, **inference_kwargs), False, wclip.format, self.max_instances
        )

    scaled = self.postprocess_clip(scaled, clip, **postprocess_kwargs)

    return self._finish_scale(scaled, clip, width, height, shift, **kwargs)

supersample

supersample(
    clip: VideoNode,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNode

Supersample a clip by a given scaling factor.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • rfactor
    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Returns:

  • VideoNode

    The supersampled clip.

Source code in vskernels/abstract/base.py
def supersample(
    self, clip: vs.VideoNode, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> vs.VideoNode:
    """
    Supersample a clip by a given scaling factor.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        rfactor: Scaling factor for supersampling.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Raises:
        CustomValueError: If resulting resolution is non-positive.

    Returns:
        The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)

C4F32_Chroma

C4F32_Chroma(
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: BaseArtCNNChroma

The smaller of the chroma models.

These models don't double the input clip; instead, they enhance the chroma using luma information.

Example usage:

from vsscale import ArtCNN

chroma_upscaled = ArtCNN.C4F32_Chroma().scale(clip)

Initializes the scaler with the specified parameters.

Parameters:

  • backend

    (BackendLike | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

    Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • max_instances

    (int, default: 2 ) –

    Maximum instances to spawn when scaling a variable resolution clip.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.

Methods:

  • calc_tilesize

    Reimplementation of vsmlrt.calc_tilesize helper function

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args

    Generate the keyword arguments used for scaling.

  • implemented_funcs

    Returns a set of function names that are implemented in the current class and the parent classes.

  • inference
  • kernel_radius

    Return the effective kernel radius for the scaler.

  • postprocess_clip
  • preprocess_clip
  • scale

    Scale the given clip using the ONNX model.

  • supersample

    Supersample a clip by a given scaling factor.

Attributes:

Source code in vsscale/onnx.py
def __init__(
    self,
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any,
) -> None:
    """
    Initializes the scaler with the specified parameters.

    Args:
        backend: The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will
            be automatically selected, prioritizing fp16 support.
        tiles: Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the
            model's behavior may vary when they are used.
        tilesize: The size of each tile when splitting the image (if tiles are enabled).
        overlap: The size of overlap between tiles.
        max_instances: Maximum instances to spawn when scaling a variable resolution clip.
        kernel: Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
        scaler: Scaler used for scaling operations. Defaults to kernel.
        shifter: Kernel used for shifting operations. Defaults to kernel.
        **kwargs: Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
    """
    from vsmlrt import ArtCNNModel, models_path

    super().__init__(
        (SPath(models_path) / "ArtCNN" / f"{ArtCNNModel(self._model).name}.onnx").to_str(),
        backend,
        tiles,
        tilesize,
        overlap,
        1,
        max_instances,
        kernel=kernel,
        scaler=scaler,
        shifter=shifter,
        **kwargs,
    )

backend instance-attribute

backend = autoselect_backend(**(default_args | kwargs))

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(resolve())

multiple instance-attribute

multiple = multiple

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

pretty_string property

pretty_string: str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]

Reimplementation of vsmlrt.calc_tilesize helper function

Source code in vsscale/onnx.py
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    """
    Reimplementation of vsmlrt.calc_tilesize helper function
    """

    from vsmlrt import calc_tilesize

    kwargs = {
        "tiles": self.tiles,
        "tilesize": self.tilesize,
        "width": clip.width,
        "height": clip.height,
        "multiple": self.multiple,
        "overlap_w": self.overlap_w,
        "overlap_h": self.overlap_h,
    } | kwargs

    return calc_tilesize(**kwargs)

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code in vskernels/abstract/base.py
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code in vskernels/abstract/base.py
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width
    (int | None, default: None ) –

    Target width.

  • height
    (int | None, default: None ) –

    Target height.

  • **kwargs
    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.

Source code in vskernels/abstract/base.py
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    Args:
        clip: The source clip.
        shift: Subpixel shift (top, left).
        width: Target width.
        height: Target height.
        **kwargs: Extra parameters to merge.

    Returns:
        Final dictionary of keyword arguments for the scale function.
    """
    return {"width": width, "height": height, "src_top": shift[0], "src_left": shift[1]} | self.kwargs | kwargs

implemented_funcs classmethod

implemented_funcs() -> frozenset[str]

Returns a set of function names that are implemented in the current class and the parent classes.

These functions determine which keyword arguments will be extracted from the __init__ method.

Returns:

  • frozenset[str]

    Frozen set of function names.

Source code in vskernels/abstract/base.py
@classproperty.cached
@classmethod
def implemented_funcs(cls) -> frozenset[str]:
    """
    Returns a set of function names that are implemented in the current class and the parent classes.

    These functions determine which keyword arguments will be extracted from the __init__ method.

    Returns:
        Frozen set of function names.
    """
    return frozenset(func for klass in cls.mro() for func in getattr(klass, "_implemented_funcs", ()))

inference

inference(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
def inference(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    from vsmlrt import flexible_inference

    tilesize, overlaps = self.calc_tilesize(clip)

    debug(f"{self}: Passing clip to inference: {clip.format!r}")
    debug(f"{self}: Passing model: {self.model}")
    debug(f"{self}: Passing tiles size: {tilesize}")
    debug(f"{self}: Passing overlaps: {overlaps}")
    debug(f"{self}: Passing extra kwargs: {kwargs}")

    u, v = flexible_inference(clip, self.model, overlaps, tilesize, self.backend, **kwargs)

    debug(f"{self}: Inferenced clip: {u.format!r}")
    debug(f"{self}: Inferenced clip: {v.format!r}")

    return core.std.ShufflePlanes([clip, u, v], [0, 0, 0], vs.YUV, clip)

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Returns:

  • int

    Kernel radius.

Source code in vskernels/abstract/base.py
@BaseScalerMeta.cachedproperty
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    Raises:
        CustomNotImplementedError: If no kernel radius is defined.

    Returns:
        Kernel radius.
    """
    ...

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode
Source code in vsscale/onnx.py
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    clip = norm_expr(clip, "x 0.5 -", [1, 2], func=self.__class__)
    return super().postprocess_clip(clip, input_clip, **kwargs)

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    assert clip.format.color_family == vs.YUV

    if clip.format.subsampling_h != 0 or clip.format.subsampling_w != 0:
        chroma_scaler = Kernel.ensure_obj(kwargs.pop("chroma_scaler", Bilinear))

        format = clip.format.replace(
            subsampling_h=0,
            subsampling_w=0,
            sample_type=vs.FLOAT,
            bits_per_sample=self._pick_precision(16, 32),
        )
        dither_type = DitherType.ORDERED if DitherType.should_dither(clip.format, format) else DitherType.NONE

        debug(f"{self}.pre: Before pp; Clip format is {clip.format!r}")

        clip = limiter(
            chroma_scaler.resample(clip, **dict[str, Any](format=format, dither_type=dither_type) | kwargs),
            func=self.__class__,
        )

        debug(f"{self}.pre: Before pp; Clip format is {clip.format!r}")

        return norm_expr(clip, "x 0.5 +", [1, 2], func=self.__class__)

    return norm_expr(super().preprocess_clip(clip, **kwargs), "x 0.5 +", [1, 2], func=self.__class__)
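
Note the "x 0.5 +" expression: chroma is shifted from its centered float range into [0, 1] for the model, and postprocess_clip above subtracts the offset again. The kernel used for the initial 4:4:4 resample is the popped chroma_scaler argument, which can be routed through scale() with the preprocess_ prefix; a sketch (Lanczos is an arbitrary choice):

from vskernels import Lanczos
from vsscale import ArtCNN

chroma_enhanced = ArtCNN.C4F32_Chroma().scale(clip, preprocess_chroma_scaler=Lanczos)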

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> VideoNode

Scale the given clip using the ONNX model.

Parameters:

  • clip
    (VideoNode) –

    The input clip to be scaled.

  • width
    (int | None, default: None ) –

    The target width for scaling. If None, the width of the input clip will be used.

  • height
    (int | None, default: None ) –

    The target height for scaling. If None, the height of the input clip will be used.

  • shift
    (tuple[float, float], default: (0, 0) ) –

    A tuple representing the shift values for the x and y axes.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments passed to the preprocess_clip, postprocess_clip, inference, and _finish_scale methods. Use the prefix preprocess_, postprocess_, or inference_ to route an argument to the corresponding method.

Returns:

  • VideoNode

    The scaled clip.

Source code in vsscale/onnx.py
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any,
) -> vs.VideoNode:
    """
    Scale the given clip using the ONNX model.

    Args:
        clip: The input clip to be scaled.
        width: The target width for scaling. If None, the width of the input clip will be used.
        height: The target height for scaling. If None, the height of the input clip will be used.
        shift: A tuple representing the shift values for the x and y axes.
        **kwargs: Additional arguments to be passed to the `preprocess_clip`, `postprocess_clip`, `inference`, and
            `_finish_scale` methods. Use the prefix `preprocess_` or `postprocess_` to pass an argument to the
            respective method. Use the prefix `inference_` to pass an argument to the inference method.

    Returns:
        The scaled clip.
    """
    from vsmlrt import Backend

    width, height = self._wh_norm(clip, width, height)

    preprocess_kwargs = dict[str, Any]()
    postprocess_kwargs = dict[str, Any]()
    inference_kwargs = dict[str, Any]()

    for k in kwargs.copy():
        for prefix, ckwargs in zip(
            ("preprocess_", "postprocess_", "inference_"), (preprocess_kwargs, postprocess_kwargs, inference_kwargs)
        ):
            if k.startswith(prefix):
                ckwargs[k.removeprefix(prefix)] = kwargs.pop(k)
                break

    debug(f"{self}: Preprocess kwargs: {preprocess_kwargs}")
    debug(f"{self}: Postprocess kwargs: {postprocess_kwargs}")
    debug(f"{self}: Inference kwargs: {inference_kwargs}")

    wclip = self.preprocess_clip(clip, **preprocess_kwargs)

    if 0 not in {clip.width, clip.height}:
        scaled = self.inference(wclip, **inference_kwargs)
    else:
        debug(f"{self}: Variable resolution clip detected!")

        if not isinstance(self.backend, (Backend.TRT, Backend.TRT_RTX)):
            raise CustomValueError(
                "Variable resolution clips can only be processed with TRT Backend!", self.__class__, self.backend
            )

        warning(f"{self.__class__.__name__}: Variable resolution clip detected!")

        if self.backend.static_shape:
            warning("static_shape is True, setting it to False...")
            self.backend.static_shape = False

        if not self.backend.max_shapes:
            warning("max_shapes is None, setting it to (1936, 1088). You may want to adjust it...")
            self.backend.max_shapes = (1936, 1088)

        if not self.backend.opt_shapes:
            warning("opt_shapes is None, setting it to (64, 64). You may want to adjust it...")
            self.backend.opt_shapes = (64, 64)

        scaled = ProcessVariableResClip.from_func(
            wclip, lambda c: self.inference(c, **inference_kwargs), False, wclip.format, self.max_instances
        )

    scaled = self.postprocess_clip(scaled, clip, **postprocess_kwargs)

    return self._finish_scale(scaled, clip, width, height, shift, **kwargs)
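
When feeding variable resolution clips, the shape warnings above can be avoided by configuring the TRT backend up front. A sketch; the shape values are illustrative and should be tuned to the source:

from vsmlrt import Backend
from vsscale import ArtCNN

scaler = ArtCNN(
    Backend.TRT(
        static_shape=False,       # required for variable resolution input
        max_shapes=(1936, 1088),  # largest (width, height) expected
        opt_shapes=(1280, 720),   # shape the engine is optimised for
    )
)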

supersample

supersample(
    clip: VideoNode,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNode

Supersample a clip by a given scaling factor.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • rfactor
    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Returns:

  • VideoNode

    The supersampled clip.

Source code in vskernels/abstract/base.py
def supersample(
    self, clip: vs.VideoNode, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> vs.VideoNode:
    """
    Supersample a clip by a given scaling factor.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        rfactor: Scaling factor for supersampling.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Raises:
        CustomValueError: If resulting resolution is non-positive.

    Returns:
        The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)
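
Because the destination dimensions are rounded up with ceil, fractional factors on odd-sized clips land on the next integer. A quick example, assuming scaler is any instance of these classes:

# 1920x1080 at rfactor=1.5 -> 2880x1620; 853x480 -> 1280x720 (1279.5 rounded up)
ss = scaler.supersample(clip, 1.5)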

C4F32_DN

C4F32_DN(
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: BaseArtCNNLuma

The same as C4F32 but intended to also denoise. Works well on noisy sources when you don't want any sharpening.

Example usage:

from vsscale import ArtCNN

doubled = ArtCNN.C4F32_DN().scale(clip, clip.width * 2, clip.height * 2)

Initializes the scaler with the specified parameters.

Parameters:

  • backend

    (BackendLike | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

    Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • max_instances

    (int, default: 2 ) –

    Maximum instances to spawn when scaling a variable resolution clip.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.

Methods:

  • calc_tilesize

    Reimplementation of vsmlrt.calc_tilesize helper function

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args

    Generate the keyword arguments used for scaling.

  • implemented_funcs

    Returns a set of function names that are implemented in the current class and the parent classes.

  • inference

    Runs inference on the given video clip using the configured model and backend.

  • kernel_radius

    Return the effective kernel radius for the scaler.

  • postprocess_clip

    Handles postprocessing of the model's output after inference.

  • preprocess_clip
  • scale

    Scale the given clip using the ONNX model.

  • supersample

    Supersample a clip by a given scaling factor.

Attributes:

Source code in vsscale/onnx.py
def __init__(
    self,
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any,
) -> None:
    """
    Initializes the scaler with the specified parameters.

    Args:
        backend: The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will
            be automatically selected, prioritizing fp16 support.
        tiles: Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the
            model's behavior may vary when they are used.
        tilesize: The size of each tile when splitting the image (if tiles are enabled).
        overlap: The size of overlap between tiles.
        max_instances: Maximum instances to spawn when scaling a variable resolution clip.
        kernel: Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
        scaler: Scaler used for scaling operations. Defaults to kernel.
        shifter: Kernel used for shifting operations. Defaults to kernel.
        **kwargs: Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
    """
    from vsmlrt import ArtCNNModel, models_path

    super().__init__(
        (SPath(models_path) / "ArtCNN" / f"{ArtCNNModel(self._model).name}.onnx").to_str(),
        backend,
        tiles,
        tilesize,
        overlap,
        1,
        max_instances,
        kernel=kernel,
        scaler=scaler,
        shifter=shifter,
        **kwargs,
    )

backend instance-attribute

backend = autoselect_backend(**(default_args | kwargs))

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(resolve())

multiple instance-attribute

multiple = multiple

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

pretty_string property

pretty_string: str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]

Reimplementation of vsmlrt.calc_tilesize helper function

Source code in vsscale/onnx.py
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    """
    Reimplementation of vsmlrt.calc_tilesize helper function
    """

    from vsmlrt import calc_tilesize

    kwargs = {
        "tiles": self.tiles,
        "tilesize": self.tilesize,
        "width": clip.width,
        "height": clip.height,
        "multiple": self.multiple,
        "overlap_w": self.overlap_w,
        "overlap_h": self.overlap_h,
    } | kwargs

    return calc_tilesize(**kwargs)
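
Tiling is configured on the scaler itself and picked up here. A sketch that trades speed for VRAM by splitting the frame into a 2x2 grid of tiles with a 16-pixel overlap (values are illustrative):

from vsscale import ArtCNN

scaler = ArtCNN.C4F32_DN(tiles=(2, 2), overlap=16)
doubled = scaler.scale(clip, clip.width * 2, clip.height * 2)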

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code in vskernels/abstract/base.py
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)
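
This lets an API accept a class, an instance, or nothing at all. A sketch of the typical behaviour (existing is assumed to be a previously constructed scaler):

scaler = ArtCNN.C4F32_DN.ensure_obj()          # None: default-constructs the class
scaler = ArtCNN.C4F32_DN.ensure_obj(existing)  # instance: returned as-is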

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

Source code in vskernels/abstract/base.py
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width
    (int | None, default: None ) –

    Target width.

  • height
    (int | None, default: None ) –

    Target height.

  • **kwargs
    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.

Source code in vskernels/abstract/base.py
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    Args:
        clip: The source clip.
        shift: Subpixel shift (top, left).
        width: Target width.
        height: Target height.
        **kwargs: Extra parameters to merge.

    Returns:
        Final dictionary of keyword arguments for the scale function.
    """
    return {"width": width, "height": height, "src_top": shift[0], "src_left": shift[1]} | self.kwargs | kwargs

implemented_funcs classmethod

implemented_funcs() -> frozenset[str]

Returns a set of function names that are implemented in the current class and the parent classes.

These functions determine which keyword arguments will be extracted from the __init__ method.

Returns:

Source code in vskernels/abstract/base.py
@classproperty.cached
@classmethod
def implemented_funcs(cls) -> frozenset[str]:
    """
    Returns a set of function names that are implemented in the current class and the parent classes.

    These functions determine which keyword arguments will be extracted from the __init__ method.

    Returns:
        Frozen set of function names.
    """
    return frozenset(func for klass in cls.mro() for func in getattr(klass, "_implemented_funcs", ()))

inference

inference(clip: VideoNode, **kwargs: Any) -> VideoNode

Runs inference on the given video clip using the configured model and backend.

Source code in vsscale/onnx.py
def inference(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    """
    Runs inference on the given video clip using the configured model and backend.
    """

    from vsmlrt import inference

    tilesize, overlaps = self.calc_tilesize(clip)

    debug(f"{self}: Passing clip to inference: {clip.format!r}")
    debug(f"{self}: Passing model: {self.model}")
    debug(f"{self}: Passing tiles size: {tilesize}")
    debug(f"{self}: Passing overlaps: {overlaps}")
    debug(f"{self}: Passing extra kwargs: {kwargs}")

    return inference(clip, self.model, overlaps, tilesize, self.backend, **kwargs)
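
The method can also be driven by hand to get the raw model output without the final resize, assuming the clip has already been through preprocess_clip (a sketch):

wclip = scaler.preprocess_clip(clip)
raw = scaler.inference(wclip)  # model output, still in the working format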

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Returns:

  • int

    Kernel radius.

Source code in vskernels/abstract/base.py
@BaseScalerMeta.cachedproperty
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    Raises:
        CustomNotImplementedError: If no kernel radius is defined.

    Returns:
        Kernel radius.
    """
    ...

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode

Handles postprocessing of the model's output after inference.

Source code in vsscale/onnx.py
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    """
    Handles postprocessing of the model's output after inference.
    """
    debug(f"{self}.post: Before pp; Clip format is {clip.format!r}")

    clip = depth(
        clip,
        input_clip,
        dither_type=DitherType.ORDERED if 0 in {clip.width, clip.height} else DitherType.AUTO,
        **kwargs,
    )

    debug(f"{self}.post: After pp; Clip format is {clip.format!r}")

    return clip

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    return super().preprocess_clip(get_y(clip), **kwargs)

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> VideoNode

Scale the given clip using the ONNX model.

Parameters:

  • clip
    (VideoNode) –

    The input clip to be scaled.

  • width
    (int | None, default: None ) –

    The target width for scaling. If None, the width of the input clip will be used.

  • height
    (int | None, default: None ) –

    The target height for scaling. If None, the height of the input clip will be used.

  • shift
    (tuple[float, float], default: (0, 0) ) –

    A tuple representing the shift values for the x and y axes.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _finish_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method. Use the prefix inference_ to pass an argument to the inference method.

Returns:

  • VideoNode

    The scaled clip.

Source code in vsscale/onnx.py
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any,
) -> vs.VideoNode:
    """
    Scale the given clip using the ONNX model.

    Args:
        clip: The input clip to be scaled.
        width: The target width for scaling. If None, the width of the input clip will be used.
        height: The target height for scaling. If None, the height of the input clip will be used.
        shift: A tuple representing the shift values for the x and y axes.
        **kwargs: Additional arguments to be passed to the `preprocess_clip`, `postprocess_clip`, `inference`, and
            `_finish_scale` methods. Use the prefix `preprocess_` or `postprocess_` to pass an argument to the
            respective method. Use the prefix `inference_` to pass an argument to the inference method.

    Returns:
        The scaled clip.
    """
    from vsmlrt import Backend

    width, height = self._wh_norm(clip, width, height)

    preprocess_kwargs = dict[str, Any]()
    postprocess_kwargs = dict[str, Any]()
    inference_kwargs = dict[str, Any]()

    for k in kwargs.copy():
        for prefix, ckwargs in zip(
            ("preprocess_", "postprocess_", "inference_"), (preprocess_kwargs, postprocess_kwargs, inference_kwargs)
        ):
            if k.startswith(prefix):
                ckwargs[k.removeprefix(prefix)] = kwargs.pop(k)
                break

    debug(f"{self}: Preprocess kwargs: {preprocess_kwargs}")
    debug(f"{self}: Postprocess kwargs: {postprocess_kwargs}")
    debug(f"{self}: Inference kwargs: {inference_kwargs}")

    wclip = self.preprocess_clip(clip, **preprocess_kwargs)

    if 0 not in {clip.width, clip.height}:
        scaled = self.inference(wclip, **inference_kwargs)
    else:
        debug(f"{self}: Variable resolution clip detected!")

        if not isinstance(self.backend, (Backend.TRT, Backend.TRT_RTX)):
            raise CustomValueError(
                "Variable resolution clips can only be processed with TRT Backend!", self.__class__, self.backend
            )

        warning(f"{self.__class__.__name__}: Variable resolution clip detected!")

        if self.backend.static_shape:
            warning("static_shape is True, setting it to False...")
            self.backend.static_shape = False

        if not self.backend.max_shapes:
            warning("max_shapes is None, setting it to (1936, 1088). You may want to adjust it...")
            self.backend.max_shapes = (1936, 1088)

        if not self.backend.opt_shapes:
            warning("opt_shapes is None, setting it to (64, 64). You may want to adjust it...")
            self.backend.opt_shapes = (64, 64)

        scaled = ProcessVariableResClip.from_func(
            wclip, lambda c: self.inference(c, **inference_kwargs), False, wclip.format, self.max_instances
        )

    scaled = self.postprocess_clip(scaled, clip, **postprocess_kwargs)

    return self._finish_scale(scaled, clip, width, height, shift, **kwargs)

supersample

supersample(
    clip: VideoNode,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNode

Supersample a clip by a given scaling factor.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • rfactor
    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Returns:

  • VideoNode

    The supersampled clip.

Source code in vskernels/abstract/base.py
def supersample(
    self, clip: vs.VideoNode, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> vs.VideoNode:
    """
    Supersample a clip by a given scaling factor.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        rfactor: Scaling factor for supersampling.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Raises:
        CustomValueError: If resulting resolution is non-positive.

    Returns:
        The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)

C4F32_DS

C4F32_DS(
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: BaseArtCNNLuma

The same as C4F32 but intended to also denoise and sharpen.

Example usage:

from vsscale import ArtCNN

doubled = ArtCNN.C4F32_DS().scale(clip, clip.width * 2, clip.height * 2)

Initializes the scaler with the specified parameters.

Parameters:

  • backend

    (BackendLike | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

    Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • max_instances

    (int, default: 2 ) –

    Maximum instances to spawn when scaling a variable resolution clip.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.

Methods:

  • calc_tilesize

    Reimplementation of vsmlrt.calc_tilesize helper function

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args

    Generate the keyword arguments used for scaling.

  • implemented_funcs

    Returns a set of function names that are implemented in the current class and the parent classes.

  • inference

    Runs inference on the given video clip using the configured model and backend.

  • kernel_radius

    Return the effective kernel radius for the scaler.

  • postprocess_clip

    Handles postprocessing of the model's output after inference.

  • preprocess_clip
  • scale

    Scale the given clip using the ONNX model.

  • supersample

    Supersample a clip by a given scaling factor.

Attributes:

Source code in vsscale/onnx.py
def __init__(
    self,
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any,
) -> None:
    """
    Initializes the scaler with the specified parameters.

    Args:
        backend: The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will
            be automatically selected, prioritizing fp16 support.
        tiles: Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the
            model's behavior may vary when they are used.
        tilesize: The size of each tile when splitting the image (if tiles are enabled).
        overlap: The size of overlap between tiles.
        max_instances: Maximum instances to spawn when scaling a variable resolution clip.
        kernel: Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
        scaler: Scaler used for scaling operations. Defaults to kernel.
        shifter: Kernel used for shifting operations. Defaults to kernel.
        **kwargs: Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
    """
    from vsmlrt import ArtCNNModel, models_path

    super().__init__(
        (SPath(models_path) / "ArtCNN" / f"{ArtCNNModel(self._model).name}.onnx").to_str(),
        backend,
        tiles,
        tilesize,
        overlap,
        1,
        max_instances,
        kernel=kernel,
        scaler=scaler,
        shifter=shifter,
        **kwargs,
    )

backend instance-attribute

backend = autoselect_backend(**(default_args | kwargs))

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(resolve())

multiple instance-attribute

multiple = multiple

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

pretty_string property

pretty_string: str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]

Reimplementation of vsmlrt.calc_tilesize helper function

Source code in vsscale/onnx.py
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    """
    Reimplementation of vsmlrt.calc_tilesize helper function
    """

    from vsmlrt import calc_tilesize

    kwargs = {
        "tiles": self.tiles,
        "tilesize": self.tilesize,
        "width": clip.width,
        "height": clip.height,
        "multiple": self.multiple,
        "overlap_w": self.overlap_w,
        "overlap_h": self.overlap_h,
    } | kwargs

    return calc_tilesize(**kwargs)

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code in vskernels/abstract/base.py
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

Source code in vskernels/abstract/base.py
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width
    (int | None, default: None ) –

    Target width.

  • height
    (int | None, default: None ) –

    Target height.

  • **kwargs
    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.

Source code in vskernels/abstract/base.py
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    Args:
        clip: The source clip.
        shift: Subpixel shift (top, left).
        width: Target width.
        height: Target height.
        **kwargs: Extra parameters to merge.

    Returns:
        Final dictionary of keyword arguments for the scale function.
    """
    return {"width": width, "height": height, "src_top": shift[0], "src_left": shift[1]} | self.kwargs | kwargs

implemented_funcs classmethod

implemented_funcs() -> frozenset[str]

Returns a set of function names that are implemented in the current class and the parent classes.

These functions determine which keyword arguments will be extracted from the __init__ method.

Returns:

Source code in vskernels/abstract/base.py
@classproperty.cached
@classmethod
def implemented_funcs(cls) -> frozenset[str]:
    """
    Returns a set of function names that are implemented in the current class and the parent classes.

    These functions determine which keyword arguments will be extracted from the __init__ method.

    Returns:
        Frozen set of function names.
    """
    return frozenset(func for klass in cls.mro() for func in getattr(klass, "_implemented_funcs", ()))

inference

inference(clip: VideoNode, **kwargs: Any) -> VideoNode

Runs inference on the given video clip using the configured model and backend.

Source code in vsscale/onnx.py
def inference(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    """
    Runs inference on the given video clip using the configured model and backend.
    """

    from vsmlrt import inference

    tilesize, overlaps = self.calc_tilesize(clip)

    debug(f"{self}: Passing clip to inference: {clip.format!r}")
    debug(f"{self}: Passing model: {self.model}")
    debug(f"{self}: Passing tiles size: {tilesize}")
    debug(f"{self}: Passing overlaps: {overlaps}")
    debug(f"{self}: Passing extra kwargs: {kwargs}")

    return inference(clip, self.model, overlaps, tilesize, self.backend, **kwargs)

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Returns:

  • int

    Kernel radius.

Source code in vskernels/abstract/base.py
@BaseScalerMeta.cachedproperty
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    Raises:
        CustomNotImplementedError: If no kernel radius is defined.

    Returns:
        Kernel radius.
    """
    ...

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode

Handles postprocessing of the model's output after inference.

Source code in vsscale/onnx.py
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    """
    Handles postprocessing of the model's output after inference.
    """
    debug(f"{self}.post: Before pp; Clip format is {clip.format!r}")

    clip = depth(
        clip,
        input_clip,
        dither_type=DitherType.ORDERED if 0 in {clip.width, clip.height} else DitherType.AUTO,
        **kwargs,
    )

    debug(f"{self}.post: After pp; Clip format is {clip.format!r}")

    return clip

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    return super().preprocess_clip(get_y(clip), **kwargs)

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> VideoNode

Scale the given clip using the ONNX model.

Parameters:

  • clip
    (VideoNode) –

    The input clip to be scaled.

  • width
    (int | None, default: None ) –

    The target width for scaling. If None, the width of the input clip will be used.

  • height
    (int | None, default: None ) –

    The target height for scaling. If None, the height of the input clip will be used.

  • shift
    (tuple[float, float], default: (0, 0) ) –

    A tuple representing the shift values for the x and y axes.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _finish_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method. Use the prefix inference_ to pass an argument to the inference method.

Returns:

  • VideoNode

    The scaled clip.

Source code in vsscale/onnx.py
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any,
) -> vs.VideoNode:
    """
    Scale the given clip using the ONNX model.

    Args:
        clip: The input clip to be scaled.
        width: The target width for scaling. If None, the width of the input clip will be used.
        height: The target height for scaling. If None, the height of the input clip will be used.
        shift: A tuple representing the shift values for the x and y axes.
        **kwargs: Additional arguments to be passed to the `preprocess_clip`, `postprocess_clip`, `inference`, and
            `_finish_scale` methods. Use the prefix `preprocess_` or `postprocess_` to pass an argument to the
            respective method. Use the prefix `inference_` to pass an argument to the inference method.

    Returns:
        The scaled clip.
    """
    from vsmlrt import Backend

    width, height = self._wh_norm(clip, width, height)

    preprocess_kwargs = dict[str, Any]()
    postprocess_kwargs = dict[str, Any]()
    inference_kwargs = dict[str, Any]()

    for k in kwargs.copy():
        for prefix, ckwargs in zip(
            ("preprocess_", "postprocess_", "inference_"), (preprocess_kwargs, postprocess_kwargs, inference_kwargs)
        ):
            if k.startswith(prefix):
                ckwargs[k.removeprefix(prefix)] = kwargs.pop(k)
                break

    debug(f"{self}: Preprocess kwargs: {preprocess_kwargs}")
    debug(f"{self}: Postprocess kwargs: {postprocess_kwargs}")
    debug(f"{self}: Inference kwargs: {inference_kwargs}")

    wclip = self.preprocess_clip(clip, **preprocess_kwargs)

    if 0 not in {clip.width, clip.height}:
        scaled = self.inference(wclip, **inference_kwargs)
    else:
        debug(f"{self}: Variable resolution clip detected!")

        if not isinstance(self.backend, (Backend.TRT, Backend.TRT_RTX)):
            raise CustomValueError(
                "Variable resolution clips can only be processed with TRT Backend!", self.__class__, self.backend
            )

        warning(f"{self.__class__.__name__}: Variable resolution clip detected!")

        if self.backend.static_shape:
            warning("static_shape is True, setting it to False...")
            self.backend.static_shape = False

        if not self.backend.max_shapes:
            warning("max_shapes is None, setting it to (1936, 1088). You may want to adjust it...")
            self.backend.max_shapes = (1936, 1088)

        if not self.backend.opt_shapes:
            warning("opt_shapes is None, setting it to (64, 64). You may want to adjust it...")
            self.backend.opt_shapes = (64, 64)

        scaled = ProcessVariableResClip.from_func(
            wclip, lambda c: self.inference(c, **inference_kwargs), False, wclip.format, self.max_instances
        )

    scaled = self.postprocess_clip(scaled, clip, **postprocess_kwargs)

    return self._finish_scale(scaled, clip, width, height, shift, **kwargs)

supersample

supersample(
    clip: VideoNode,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNode

Supersample a clip by a given scaling factor.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • rfactor
    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Returns:

  • VideoNode

    The supersampled clip.

Source code in vskernels/abstract/base.py
def supersample(
    self, clip: vs.VideoNode, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> vs.VideoNode:
    """
    Supersample a clip by a given scaling factor.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        rfactor: Scaling factor for supersampling.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Raises:
        CustomValueError: If resulting resolution is non-positive.

    Returns:
        The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)

R16F96

R16F96(
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: BaseArtCNNLuma

The biggest model. Can compete with or outperform Waifu2x Cunet.

Also quite a bit slower, but less heavy on VRAM.

Example usage:

from vsscale import ArtCNN

doubled = ArtCNN.R16F96().scale(clip, clip.width * 2, clip.height * 2)

Initializes the scaler with the specified parameters.

Parameters:

  • backend

    (BackendLike | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

    Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • max_instances

    (int, default: 2 ) –

    Maximum instances to spawn when scaling a variable resolution clip.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.

Methods:

  • calc_tilesize

    Reimplementation of vsmlrt.calc_tilesize helper function

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args

    Generate the keyword arguments used for scaling.

  • implemented_funcs

    Returns a set of function names that are implemented in the current class and the parent classes.

  • inference

    Runs inference on the given video clip using the configured model and backend.

  • kernel_radius

    Return the effective kernel radius for the scaler.

  • postprocess_clip

    Handles postprocessing of the model's output after inference.

  • preprocess_clip
  • scale

    Scale the given clip using the ONNX model.

  • supersample

    Supersample a clip by a given scaling factor.

Attributes:

Source code in vsscale/onnx.py
def __init__(
    self,
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any,
) -> None:
    """
    Initializes the scaler with the specified parameters.

    Args:
        backend: The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will
            be automatically selected, prioritizing fp16 support.
        tiles: Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the
            model's behavior may vary when they are used.
        tilesize: The size of each tile when splitting the image (if tiles are enabled).
        overlap: The size of overlap between tiles.
        max_instances: Maximum instances to spawn when scaling a variable resolution clip.
        kernel: Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
        scaler: Scaler used for scaling operations. Defaults to kernel.
        shifter: Kernel used for shifting operations. Defaults to kernel.
        **kwargs: Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
    """
    from vsmlrt import ArtCNNModel, models_path

    super().__init__(
        (SPath(models_path) / "ArtCNN" / f"{ArtCNNModel(self._model).name}.onnx").to_str(),
        backend,
        tiles,
        tilesize,
        overlap,
        1,
        max_instances,
        kernel=kernel,
        scaler=scaler,
        shifter=shifter,
        **kwargs,
    )

backend instance-attribute

backend = autoselect_backend(**(default_args | kwargs))

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(resolve())

multiple instance-attribute

multiple = multiple

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

pretty_string property

pretty_string: str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]

Reimplementation of vsmlrt.calc_tilesize helper function

Source code in vsscale/onnx.py
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    """
    Reimplementation of vsmlrt.calc_tilesize helper function
    """

    from vsmlrt import calc_tilesize

    kwargs = {
        "tiles": self.tiles,
        "tilesize": self.tilesize,
        "width": clip.width,
        "height": clip.height,
        "multiple": self.multiple,
        "overlap_w": self.overlap_w,
        "overlap_h": self.overlap_h,
    } | kwargs

    return calc_tilesize(**kwargs)

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code in vskernels/abstract/base.py
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

Source code in vskernels/abstract/base.py
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width
    (int | None, default: None ) –

    Target width.

  • height
    (int | None, default: None ) –

    Target height.

  • **kwargs
    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.

Source code in vskernels/abstract/base.py
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    Args:
        clip: The source clip.
        shift: Subpixel shift (top, left).
        width: Target width.
        height: Target height.
        **kwargs: Extra parameters to merge.

    Returns:
        Final dictionary of keyword arguments for the scale function.
    """
    return {"width": width, "height": height, "src_top": shift[0], "src_left": shift[1]} | self.kwargs | kwargs

implemented_funcs classmethod

implemented_funcs() -> frozenset[str]

Returns a set of function names that are implemented in the current class and the parent classes.

These functions determine which keyword arguments will be extracted from the __init__ method.

Returns:

Source code in vskernels/abstract/base.py
@classproperty.cached
@classmethod
def implemented_funcs(cls) -> frozenset[str]:
    """
    Returns a set of function names that are implemented in the current class and the parent classes.

    These functions determine which keyword arguments will be extracted from the __init__ method.

    Returns:
        Frozen set of function names.
    """
    return frozenset(func for klass in cls.mro() for func in getattr(klass, "_implemented_funcs", ()))

inference

inference(clip: VideoNode, **kwargs: Any) -> VideoNode

Runs inference on the given video clip using the configured model and backend.

Source code in vsscale/onnx.py
def inference(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    """
    Runs inference on the given video clip using the configured model and backend.
    """

    from vsmlrt import inference

    tilesize, overlaps = self.calc_tilesize(clip)

    debug(f"{self}: Passing clip to inference: {clip.format!r}")
    debug(f"{self}: Passing model: {self.model}")
    debug(f"{self}: Passing tiles size: {tilesize}")
    debug(f"{self}: Passing overlaps: {overlaps}")
    debug(f"{self}: Passing extra kwargs: {kwargs}")

    return inference(clip, self.model, overlaps, tilesize, self.backend, **kwargs)

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Returns:

  • int

    Kernel radius.

Source code in vskernels/abstract/base.py
@BaseScalerMeta.cachedproperty
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    Raises:
        CustomNotImplementedError: If no kernel radius is defined.

    Returns:
        Kernel radius.
    """
    ...

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode

Handles postprocessing of the model's output after inference.

Source code in vsscale/onnx.py
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    """
    Handles postprocessing of the model's output after inference.
    """
    debug(f"{self}.post: Before pp; Clip format is {clip.format!r}")

    clip = depth(
        clip,
        input_clip,
        dither_type=DitherType.ORDERED if 0 in {clip.width, clip.height} else DitherType.AUTO,
        **kwargs,
    )

    debug(f"{self}.post: After pp; Clip format is {clip.format!r}")

    return clip

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    return super().preprocess_clip(get_y(clip), **kwargs)

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> VideoNode

Scale the given clip using the ONNX model.

Parameters:

  • clip
    (VideoNode) –

    The input clip to be scaled.

  • width
    (int | None, default: None ) –

    The target width for scaling. If None, the width of the input clip will be used.

  • height
    (int | None, default: None ) –

    The target height for scaling. If None, the height of the input clip will be used.

  • shift
    (tuple[float, float], default: (0, 0) ) –

    A tuple representing the shift values for the x and y axes.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments passed to the preprocess_clip, postprocess_clip, inference, and _finish_scale methods. Use the prefix preprocess_, postprocess_, or inference_ to route an argument to the corresponding method.

Returns:

  • VideoNode

    The scaled clip.

Source code in vsscale/onnx.py
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any,
) -> vs.VideoNode:
    """
    Scale the given clip using the ONNX model.

    Args:
        clip: The input clip to be scaled.
        width: The target width for scaling. If None, the width of the input clip will be used.
        height: The target height for scaling. If None, the height of the input clip will be used.
        shift: A tuple representing the shift values for the x and y axes.
        **kwargs: Additional arguments to be passed to the `preprocess_clip`, `postprocess_clip`, `inference`, and
            `_finish_scale` methods. Use the prefix `preprocess_` or `postprocess_` to pass an argument to the
            respective method. Use the prefix `inference_` to pass an argument to the inference method.

    Returns:
        The scaled clip.
    """
    from vsmlrt import Backend

    width, height = self._wh_norm(clip, width, height)

    preprocess_kwargs = dict[str, Any]()
    postprocess_kwargs = dict[str, Any]()
    inference_kwargs = dict[str, Any]()

    for k in kwargs.copy():
        for prefix, ckwargs in zip(
            ("preprocess_", "postprocess_", "inference_"), (preprocess_kwargs, postprocess_kwargs, inference_kwargs)
        ):
            if k.startswith(prefix):
                ckwargs[k.removeprefix(prefix)] = kwargs.pop(k)
                break

    debug(f"{self}: Preprocess kwargs: {preprocess_kwargs}")
    debug(f"{self}: Postprocess kwargs: {postprocess_kwargs}")
    debug(f"{self}: Inference kwargs: {inference_kwargs}")

    wclip = self.preprocess_clip(clip, **preprocess_kwargs)

    if 0 not in {clip.width, clip.height}:
        scaled = self.inference(wclip, **inference_kwargs)
    else:
        debug(f"{self}: Variable resolution clip detected!")

        if not isinstance(self.backend, (Backend.TRT, Backend.TRT_RTX)):
            raise CustomValueError(
                "Variable resolution clips can only be processed with TRT Backend!", self.__class__, self.backend
            )

        warning(f"{self.__class__.__name__}: Variable resolution clip detected!")

        if self.backend.static_shape:
            warning("static_shape is True, setting it to False...")
            self.backend.static_shape = False

        if not self.backend.max_shapes:
            warning("max_shapes is None, setting it to (1936, 1088). You may want to adjust it...")
            self.backend.max_shapes = (1936, 1088)

        if not self.backend.opt_shapes:
            warning("opt_shapes is None, setting it to (64, 64). You may want to adjust it...")
            self.backend.opt_shapes = (64, 64)

        scaled = ProcessVariableResClip.from_func(
            wclip, lambda c: self.inference(c, **inference_kwargs), False, wclip.format, self.max_instances
        )

    scaled = self.postprocess_clip(scaled, clip, **postprocess_kwargs)

    return self._finish_scale(scaled, clip, width, height, shift, **kwargs)
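
A small sketch of what the prefix routing does to the keyword arguments (argument names are made up):

kwargs = {"preprocess_foo": 1, "inference_bar": 2, "linear": True}
# After the routing loop:
#   preprocess_kwargs == {"foo": 1}
#   inference_kwargs  == {"bar": 2}
#   kwargs            == {"linear": True}  (left over for _finish_scale)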

supersample

supersample(
    clip: VideoNode,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNode

Supersample a clip by a given scaling factor.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • rfactor
    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Returns:

  • VideoNode

    The supersampled clip.

Source code in vskernels/abstract/base.py
501
502
503
504
505
506
507
508
509
510
511
512
513
514
515
516
517
518
519
520
521
522
523
524
525
526
527
528
529
530
531
532
533
534
def supersample(
    self, clip: vs.VideoNode, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> vs.VideoNode:
    """
    Supersample a clip by a given scaling factor.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        rfactor: Scaling factor for supersampling.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Raises:
        CustomValueError: If resulting resolution is non-positive.

    Returns:
        The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)
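
A quick usage sketch: with rfactor=1.5 on a 1920x1080 clip, the target resolution becomes ceil(1920 * 1.5) x ceil(1080 * 1.5) = 2880x1620.

from vsscale import ArtCNN

supersampled = ArtCNN().supersample(clip, 1.5)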

R16F96_Chroma

R16F96_Chroma(
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: BaseArtCNNChroma

The biggest and fanciest chroma model. Shows almost biblical results on the right sources.

These models don't double the input clip; they instead try to enhance the chroma using information from the luma.

Example usage:

from vsscale import ArtCNN

chroma_upscaled = ArtCNN.R16F96_Chroma().scale(clip)

Initializes the scaler with the specified parameters.

Parameters:

  • backend

    (BackendLike | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

    Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • max_instances

    (int, default: 2 ) –

    Maximum instances to spawn when scaling a variable resolution clip.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.

Methods:

  • calc_tilesize

    Reimplementation of vsmlrt.calc_tilesize helper function

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args

    Generate the keyword arguments used for scaling.

  • implemented_funcs

    Returns a set of function names that are implemented in the current class and the parent classes.

  • inference
  • kernel_radius

    Return the effective kernel radius for the scaler.

  • postprocess_clip
  • preprocess_clip
  • scale

    Scale the given clip using the ONNX model.

  • supersample

    Supersample a clip by a given scaling factor.

Attributes:

Source code in vsscale/onnx.py
def __init__(
    self,
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any,
) -> None:
    """
    Initializes the scaler with the specified parameters.

    Args:
        backend: The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will
            be automatically selected, prioritizing fp16 support.
        tiles: Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the
            model's behavior may vary when they are used.
        tilesize: The size of each tile when splitting the image (if tiles are enabled).
        overlap: The size of overlap between tiles.
        max_instances: Maximum instances to spawn when scaling a variable resolution clip.
        kernel: Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
        scaler: Scaler used for scaling operations. Defaults to kernel.
        shifter: Kernel used for shifting operations. Defaults to kernel.
        **kwargs: Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
    """
    from vsmlrt import ArtCNNModel, models_path

    super().__init__(
        (SPath(models_path) / "ArtCNN" / f"{ArtCNNModel(self._model).name}.onnx").to_str(),
        backend,
        tiles,
        tilesize,
        overlap,
        1,
        max_instances,
        kernel=kernel,
        scaler=scaler,
        shifter=shifter,
        **kwargs,
    )
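
A backend can also be supplied explicitly, for example (a sketch; assumes a TensorRT-capable vs-mlrt setup):

from vsmlrt import Backend
from vsscale import ArtCNN

chroma_upscaled = ArtCNN.R16F96_Chroma(backend=Backend.TRT(fp16=True)).scale(clip)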

backend instance-attribute

backend = autoselect_backend(**(default_args | kwargs))

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(resolve())

multiple instance-attribute

multiple = multiple

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

pretty_string property

pretty_string: str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]

Reimplementation of vsmlrt.calc_tilesize helper function

Source code in vsscale/onnx.py
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    """
    Reimplementation of vsmlrt.calc_tilesize helper function
    """

    from vsmlrt import calc_tilesize

    kwargs = {
        "tiles": self.tiles,
        "tilesize": self.tilesize,
        "width": clip.width,
        "height": clip.height,
        "multiple": self.multiple,
        "overlap_w": self.overlap_w,
        "overlap_h": self.overlap_h,
    } | kwargs

    return calc_tilesize(**kwargs)
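
Because the explicit keyword arguments are merged last, individual defaults can be overridden per call (illustrative; scaler is any instance of this class):

tilesize, overlaps = scaler.calc_tilesize(clip, overlap_w=16, overlap_h=16)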

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code in vskernels/abstract/base.py
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code in vskernels/abstract/base.py
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width
    (int | None, default: None ) –

    Target width.

  • height
    (int | None, default: None ) –

    Target height.

  • **kwargs
    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.

Source code in vskernels/abstract/base.py
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    Args:
        clip: The source clip.
        shift: Subpixel shift (top, left).
        width: Target width.
        height: Target height.
        **kwargs: Extra parameters to merge.

    Returns:
        Final dictionary of keyword arguments for the scale function.
    """
    return {"width": width, "height": height, "src_top": shift[0], "src_left": shift[1]} | self.kwargs | kwargs

implemented_funcs classmethod

implemented_funcs() -> frozenset[str]

Returns a set of function names that are implemented in the current class and the parent classes.

These functions determine which keyword arguments will be extracted from the __init__ method.

Returns:

  • frozenset[str]

    Frozen set of function names.

Source code in vskernels/abstract/base.py
@classproperty.cached
@classmethod
def implemented_funcs(cls) -> frozenset[str]:
    """
    Returns a set of function names that are implemented in the current class and the parent classes.

    These functions determine which keyword arguments will be extracted from the __init__ method.

    Returns:
        Frozen set of function names.
    """
    return frozenset(func for klass in cls.mro() for func in getattr(klass, "_implemented_funcs", ()))

inference

inference(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
def inference(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    from vsmlrt import flexible_inference

    tilesize, overlaps = self.calc_tilesize(clip)

    debug(f"{self}: Passing clip to inference: {clip.format!r}")
    debug(f"{self}: Passing model: {self.model}")
    debug(f"{self}: Passing tile size: {tilesize}")
    debug(f"{self}: Passing overlaps: {overlaps}")
    debug(f"{self}: Passing extra kwargs: {kwargs}")

    # The chroma models output the two enhanced chroma planes separately.
    u, v = flexible_inference(clip, self.model, overlaps, tilesize, self.backend, **kwargs)

    debug(f"{self}: Inferenced clip: {u.format!r}")
    debug(f"{self}: Inferenced clip: {v.format!r}")

    # Keep the original luma (plane 0 of clip) and merge in the inferred chroma planes.
    return core.std.ShufflePlanes([clip, u, v], [0, 0, 0], vs.YUV, clip)
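
In effect, only the two chroma planes are replaced by the model's output; the luma plane of the input clip passes through ShufflePlanes untouched.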

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Returns:

  • int

    Kernel radius.

Source code in vskernels/abstract/base.py
@BaseScalerMeta.cachedproperty
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    Raises:
        CustomNotImplementedError: If no kernel radius is defined.

    Returns:
        Kernel radius.
    """
    ...

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode
Source code in vsscale/onnx.py
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    # Undo the +0.5 chroma offset applied in preprocess_clip before the usual postprocessing.
    clip = norm_expr(clip, "x 0.5 -", [1, 2], func=self.__class__)
    return super().postprocess_clip(clip, input_clip, **kwargs)

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    assert clip.format.color_family == vs.YUV

    if clip.format.subsampling_h != 0 or clip.format.subsampling_w != 0:
        # Subsampled input: upsample the chroma to 4:4:4 float first.
        chroma_scaler = Kernel.ensure_obj(kwargs.pop("chroma_scaler", Bilinear))

        format = clip.format.replace(
            subsampling_h=0,
            subsampling_w=0,
            sample_type=vs.FLOAT,
            bits_per_sample=self._pick_precision(16, 32),
        )
        dither_type = DitherType.ORDERED if DitherType.should_dither(clip.format, format) else DitherType.NONE

        debug(f"{self}.pre: Before pp; Clip format is {clip.format!r}")

        clip = limiter(
            chroma_scaler.resample(clip, **dict[str, Any](format=format, dither_type=dither_type) | kwargs),
            func=self.__class__,
        )

        debug(f"{self}.pre: After pp; Clip format is {clip.format!r}")

        # Shift the chroma planes from [-0.5, 0.5] to [0, 1] for the model;
        # postprocess_clip subtracts the offset again.
        return norm_expr(clip, "x 0.5 +", [1, 2], func=self.__class__)

    return norm_expr(super().preprocess_clip(clip, **kwargs), "x 0.5 +", [1, 2], func=self.__class__)

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> VideoNode

Scale the given clip using the ONNX model.

Parameters:

  • clip
    (VideoNode) –

    The input clip to be scaled.

  • width
    (int | None, default: None ) –

    The target width for scaling. If None, the width of the input clip will be used.

  • height
    (int | None, default: None ) –

    The target height for scaling. If None, the height of the input clip will be used.

  • shift
    (tuple[float, float], default: (0, 0) ) –

    A tuple representing the shift values for the x and y axes.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments passed to the preprocess_clip, postprocess_clip, inference, and _finish_scale methods. Use the prefix preprocess_, postprocess_, or inference_ to route an argument to the corresponding method.

Returns:

  • VideoNode

    The scaled clip.

Source code in vsscale/onnx.py
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any,
) -> vs.VideoNode:
    """
    Scale the given clip using the ONNX model.

    Args:
        clip: The input clip to be scaled.
        width: The target width for scaling. If None, the width of the input clip will be used.
        height: The target height for scaling. If None, the height of the input clip will be used.
        shift: A tuple representing the shift values for the x and y axes.
        **kwargs: Additional arguments to be passed to the `preprocess_clip`, `postprocess_clip`, `inference`, and
            `_finish_scale` methods. Use the prefix `preprocess_` or `postprocess_` to pass an argument to the
            respective method. Use the prefix `inference_` to pass an argument to the inference method.

    Returns:
        The scaled clip.
    """
    from vsmlrt import Backend

    width, height = self._wh_norm(clip, width, height)

    preprocess_kwargs = dict[str, Any]()
    postprocess_kwargs = dict[str, Any]()
    inference_kwargs = dict[str, Any]()

    for k in kwargs.copy():
        for prefix, ckwargs in zip(
            ("preprocess_", "postprocess_", "inference_"), (preprocess_kwargs, postprocess_kwargs, inference_kwargs)
        ):
            if k.startswith(prefix):
                ckwargs[k.removeprefix(prefix)] = kwargs.pop(k)
                break

    debug(f"{self}: Preprocess kwargs: {preprocess_kwargs}")
    debug(f"{self}: Postprocess kwargs: {postprocess_kwargs}")
    debug(f"{self}: Inference kwargs: {inference_kwargs}")

    wclip = self.preprocess_clip(clip, **preprocess_kwargs)

    if 0 not in {clip.width, clip.height}:
        scaled = self.inference(wclip, **inference_kwargs)
    else:
        debug(f"{self}: Variable resolution clip detected!")

        if not isinstance(self.backend, (Backend.TRT, Backend.TRT_RTX)):
            raise CustomValueError(
                "Variable resolution clips can only be processed with TRT Backend!", self.__class__, self.backend
            )

        warning(f"{self.__class__.__name__}: Variable resolution clip detected!")

        if self.backend.static_shape:
            warning("static_shape is True, setting it to False...")
            self.backend.static_shape = False

        if not self.backend.max_shapes:
            warning("max_shapes is None, setting it to (1936, 1088). You may want to adjust it...")
            self.backend.max_shapes = (1936, 1088)

        if not self.backend.opt_shapes:
            warning("opt_shapes is None, setting it to (64, 64). You may want to adjust it...")
            self.backend.opt_shapes = (64, 64)

        scaled = ProcessVariableResClip.from_func(
            wclip, lambda c: self.inference(c, **inference_kwargs), False, wclip.format, self.max_instances
        )

    scaled = self.postprocess_clip(scaled, clip, **postprocess_kwargs)

    return self._finish_scale(scaled, clip, width, height, shift, **kwargs)

supersample

supersample(
    clip: VideoNode,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNode

Supersample a clip by a given scaling factor.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • rfactor
    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Returns:

  • VideoNode

    The supersampled clip.

Source code in vskernels/abstract/base.py
def supersample(
    self, clip: vs.VideoNode, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> vs.VideoNode:
    """
    Supersample a clip by a given scaling factor.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        rfactor: Scaling factor for supersampling.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Raises:
        CustomValueError: If resulting resolution is non-positive.

    Returns:
        The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)

R8F64

R8F64(
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: BaseArtCNNLuma

A smaller and faster version of R16F96, but still very competitive.

Example usage:

from vsscale import ArtCNN

doubled = ArtCNN.R8F64().scale(clip, clip.width * 2, clip.height * 2)

Initializes the scaler with the specified parameters.

Parameters:

  • backend

    (BackendLike | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

    Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • max_instances

    (int, default: 2 ) –

    Maximum instances to spawn when scaling a variable resolution clip.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.

Methods:

  • calc_tilesize

    Reimplementation of vsmlrt.calc_tilesize helper function

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args

    Generate the keyword arguments used for scaling.

  • implemented_funcs

    Returns a set of function names that are implemented in the current class and the parent classes.

  • inference

    Runs inference on the given video clip using the configured model and backend.

  • kernel_radius

    Return the effective kernel radius for the scaler.

  • postprocess_clip

    Handles postprocessing of the model's output after inference.

  • preprocess_clip
  • scale

    Scale the given clip using the ONNX model.

  • supersample

    Supersample a clip by a given scaling factor.

Attributes:

Source code in vsscale/onnx.py
def __init__(
    self,
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any,
) -> None:
    """
    Initializes the scaler with the specified parameters.

    Args:
        backend: The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will
            be automatically selected, prioritizing fp16 support.
        tiles: Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the
            model's behavior may vary when they are used.
        tilesize: The size of each tile when splitting the image (if tiles are enabled).
        overlap: The size of overlap between tiles.
        max_instances: Maximum instances to spawn when scaling a variable resolution clip.
        kernel: Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
        scaler: Scaler used for scaling operations. Defaults to kernel.
        shifter: Kernel used for shifting operations. Defaults to kernel.
        **kwargs: Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
    """
    from vsmlrt import ArtCNNModel, models_path

    super().__init__(
        (SPath(models_path) / "ArtCNN" / f"{ArtCNNModel(self._model).name}.onnx").to_str(),
        backend,
        tiles,
        tilesize,
        overlap,
        1,
        max_instances,
        kernel=kernel,
        scaler=scaler,
        shifter=shifter,
        **kwargs,
    )

backend instance-attribute

backend = autoselect_backend(**(default_args | kwargs))

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(resolve())

multiple instance-attribute

multiple = multiple

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

pretty_string property

pretty_string: str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]

Reimplementation of vsmlrt.calc_tilesize helper function

Source code in vsscale/onnx.py
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    """
    Reimplementation of vsmlrt.calc_tilesize helper function
    """

    from vsmlrt import calc_tilesize

    kwargs = {
        "tiles": self.tiles,
        "tilesize": self.tilesize,
        "width": clip.width,
        "height": clip.height,
        "multiple": self.multiple,
        "overlap_w": self.overlap_w,
        "overlap_h": self.overlap_h,
    } | kwargs

    return calc_tilesize(**kwargs)

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code in vskernels/abstract/base.py
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code in vskernels/abstract/base.py
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width
    (int | None, default: None ) –

    Target width.

  • height
    (int | None, default: None ) –

    Target height.

  • **kwargs
    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.

Source code in vskernels/abstract/base.py
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    Args:
        clip: The source clip.
        shift: Subpixel shift (top, left).
        width: Target width.
        height: Target height.
        **kwargs: Extra parameters to merge.

    Returns:
        Final dictionary of keyword arguments for the scale function.
    """
    return {"width": width, "height": height, "src_top": shift[0], "src_left": shift[1]} | self.kwargs | kwargs

implemented_funcs classmethod

implemented_funcs() -> frozenset[str]

Returns a set of function names that are implemented in the current class and the parent classes.

These functions determine which keyword arguments will be extracted from the __init__ method.

Returns:

  • frozenset[str]

    Frozen set of function names.

Source code in vskernels/abstract/base.py
@classproperty.cached
@classmethod
def implemented_funcs(cls) -> frozenset[str]:
    """
    Returns a set of function names that are implemented in the current class and the parent classes.

    These functions determine which keyword arguments will be extracted from the __init__ method.

    Returns:
        Frozen set of function names.
    """
    return frozenset(func for klass in cls.mro() for func in getattr(klass, "_implemented_funcs", ()))

inference

inference(clip: VideoNode, **kwargs: Any) -> VideoNode

Runs inference on the given video clip using the configured model and backend.

Source code in vsscale/onnx.py
def inference(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    """
    Runs inference on the given video clip using the configured model and backend.
    """

    from vsmlrt import inference

    tilesize, overlaps = self.calc_tilesize(clip)

    debug(f"{self}: Passing clip to inference: {clip.format!r}")
    debug(f"{self}: Passing model: {self.model}")
    debug(f"{self}: Passing tiles size: {tilesize}")
    debug(f"{self}: Passing overlaps: {overlaps}")
    debug(f"{self}: Passing extra kwargs: {kwargs}")

    return inference(clip, self.model, overlaps, tilesize, self.backend, **kwargs)

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Returns:

  • int

    Kernel radius.

Source code in vskernels/abstract/base.py
@BaseScalerMeta.cachedproperty
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    Raises:
        CustomNotImplementedError: If no kernel radius is defined.

    Returns:
        Kernel radius.
    """
    ...

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode

Handles postprocessing of the model's output after inference.

Source code in vsscale/onnx.py
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    """
    Handles postprocessing of the model's output after inference.
    """
    debug(f"{self}.post: Before pp; Clip format is {clip.format!r}")

    clip = depth(
        clip,
        input_clip,
        dither_type=DitherType.ORDERED if 0 in {clip.width, clip.height} else DitherType.AUTO,
        **kwargs,
    )

    debug(f"{self}.post: After pp; Clip format is {clip.format!r}")

    return clip

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    return super().preprocess_clip(get_y(clip), **kwargs)

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> VideoNode

Scale the given clip using the ONNX model.

Parameters:

  • clip
    (VideoNode) –

    The input clip to be scaled.

  • width
    (int | None, default: None ) –

    The target width for scaling. If None, the width of the input clip will be used.

  • height
    (int | None, default: None ) –

    The target height for scaling. If None, the height of the input clip will be used.

  • shift
    (tuple[float, float], default: (0, 0) ) –

    A tuple representing the shift values for the x and y axes.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments passed to the preprocess_clip, postprocess_clip, inference, and _finish_scale methods. Use the prefix preprocess_, postprocess_, or inference_ to route an argument to the corresponding method.

Returns:

  • VideoNode

    The scaled clip.

Source code in vsscale/onnx.py
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any,
) -> vs.VideoNode:
    """
    Scale the given clip using the ONNX model.

    Args:
        clip: The input clip to be scaled.
        width: The target width for scaling. If None, the width of the input clip will be used.
        height: The target height for scaling. If None, the height of the input clip will be used.
        shift: A tuple representing the shift values for the x and y axes.
        **kwargs: Additional arguments to be passed to the `preprocess_clip`, `postprocess_clip`, `inference`, and
            `_finish_scale` methods. Use the prefix `preprocess_` or `postprocess_` to pass an argument to the
            respective method. Use the prefix `inference_` to pass an argument to the inference method.

    Returns:
        The scaled clip.
    """
    from vsmlrt import Backend

    width, height = self._wh_norm(clip, width, height)

    preprocess_kwargs = dict[str, Any]()
    postprocess_kwargs = dict[str, Any]()
    inference_kwargs = dict[str, Any]()

    for k in kwargs.copy():
        for prefix, ckwargs in zip(
            ("preprocess_", "postprocess_", "inference_"), (preprocess_kwargs, postprocess_kwargs, inference_kwargs)
        ):
            if k.startswith(prefix):
                ckwargs[k.removeprefix(prefix)] = kwargs.pop(k)
                break

    debug(f"{self}: Preprocess kwargs: {preprocess_kwargs}")
    debug(f"{self}: Postprocess kwargs: {postprocess_kwargs}")
    debug(f"{self}: Inference kwargs: {inference_kwargs}")

    wclip = self.preprocess_clip(clip, **preprocess_kwargs)

    if 0 not in {clip.width, clip.height}:
        scaled = self.inference(wclip, **inference_kwargs)
    else:
        debug(f"{self}: Variable resolution clip detected!")

        if not isinstance(self.backend, (Backend.TRT, Backend.TRT_RTX)):
            raise CustomValueError(
                "Variable resolution clips can only be processed with TRT Backend!", self.__class__, self.backend
            )

        warning(f"{self.__class__.__name__}: Variable resolution clip detected!")

        if self.backend.static_shape:
            warning("static_shape is True, setting it to False...")
            self.backend.static_shape = False

        if not self.backend.max_shapes:
            warning("max_shapes is None, setting it to (1936, 1088). You may want to adjust it...")
            self.backend.max_shapes = (1936, 1088)

        if not self.backend.opt_shapes:
            warning("opt_shapes is None, setting it to (64, 64). You may want to adjust it...")
            self.backend.opt_shapes = (64, 64)

        scaled = ProcessVariableResClip.from_func(
            wclip, lambda c: self.inference(c, **inference_kwargs), False, wclip.format, self.max_instances
        )

    scaled = self.postprocess_clip(scaled, clip, **postprocess_kwargs)

    return self._finish_scale(scaled, clip, width, height, shift, **kwargs)

supersample

supersample(
    clip: VideoNode,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNode

Supersample a clip by a given scaling factor.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • rfactor
    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Returns:

  • VideoNode

    The supersampled clip.

Source code in vskernels/abstract/base.py
def supersample(
    self, clip: vs.VideoNode, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> vs.VideoNode:
    """
    Supersample a clip by a given scaling factor.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        rfactor: Scaling factor for supersampling.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Raises:
        CustomValueError: If resulting resolution is non-positive.

    Returns:
        The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)

R8F64_Chroma

R8F64_Chroma(
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: BaseArtCNNChroma

The new, big, and fancy chroma model.

These models don't double the input clip; they instead try to enhance the chroma using information from the luma.

Example usage:

from vsscale import ArtCNN

chroma_upscaled = ArtCNN.R8F64_Chroma().scale(clip)

Initializes the scaler with the specified parameters.

Parameters:

  • backend

    (BackendLike | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

    Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • max_instances

    (int, default: 2 ) –

    Maximum instances to spawn when scaling a variable resolution clip.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.

Methods:

  • calc_tilesize

    Reimplementation of vsmlrt.calc_tilesize helper function

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args

    Generate the keyword arguments used for scaling.

  • implemented_funcs

    Returns a set of function names that are implemented in the current class and the parent classes.

  • inference
  • kernel_radius

    Return the effective kernel radius for the scaler.

  • postprocess_clip
  • preprocess_clip
  • scale

    Scale the given clip using the ONNX model.

  • supersample

    Supersample a clip by a given scaling factor.

Attributes:

Source code in vsscale/onnx.py
def __init__(
    self,
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any,
) -> None:
    """
    Initializes the scaler with the specified parameters.

    Args:
        backend: The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will
            be automatically selected, prioritizing fp16 support.
        tiles: Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the
            model's behavior may vary when they are used.
        tilesize: The size of each tile when splitting the image (if tiles are enabled).
        overlap: The size of overlap between tiles.
        max_instances: Maximum instances to spawn when scaling a variable resolution clip.
        kernel: Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
        scaler: Scaler used for scaling operations. Defaults to kernel.
        shifter: Kernel used for shifting operations. Defaults to kernel.
        **kwargs: Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
    """
    from vsmlrt import ArtCNNModel, models_path

    super().__init__(
        (SPath(models_path) / "ArtCNN" / f"{ArtCNNModel(self._model).name}.onnx").to_str(),
        backend,
        tiles,
        tilesize,
        overlap,
        1,
        max_instances,
        kernel=kernel,
        scaler=scaler,
        shifter=shifter,
        **kwargs,
    )

backend instance-attribute

backend = autoselect_backend(**(default_args | kwargs))

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(resolve())

multiple instance-attribute

multiple = multiple

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

pretty_string property

pretty_string: str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]

Reimplementation of vsmlrt.calc_tilesize helper function

Source code in vsscale/onnx.py
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    """
    Reimplementation of vsmlrt.calc_tilesize helper function
    """

    from vsmlrt import calc_tilesize

    kwargs = {
        "tiles": self.tiles,
        "tilesize": self.tilesize,
        "width": clip.width,
        "height": clip.height,
        "multiple": self.multiple,
        "overlap_w": self.overlap_w,
        "overlap_h": self.overlap_h,
    } | kwargs

    return calc_tilesize(**kwargs)
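
The tiling options given at construction time flow straight into this helper. As a minimal sketch (assuming a clip variable is already loaded, as in the examples above), splitting each frame into a 2x2 grid while keeping the default 8-pixel overlaps looks like this:

from vsscale import ArtCNN

# tiles=(2, 2) splits each frame into a 2x2 grid before inference; overlap
# defaults to 8 px per axis (overlap_w/overlap_h above), so tiles are
# inferenced slightly oversized and blended back together.
scaler = ArtCNN(tiles=(2, 2))
doubled = scaler.scale(clip, clip.width * 2, clip.height * 2)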

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code in vskernels/abstract/base.py
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code in vskernels/abstract/base.py
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)
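
Both resolvers accept the same inputs; a small sketch of the accepted forms (the string form is assumed to match the scaler's class name, as elsewhere in vskernels):

from vsscale import ArtCNN

scaler = ArtCNN.ensure_obj(ArtCNN)  # class -> new instance
scaler = ArtCNN.ensure_obj(scaler)  # instance passes through unchanged
kind = ArtCNN.from_param("ArtCNN")  # string -> resolved type
scaler = kind()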

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width
    (int | None, default: None ) –

    Target width.

  • height
    (int | None, default: None ) –

    Target height.

  • **kwargs
    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.

Source code in vskernels/abstract/base.py
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    Args:
        clip: The source clip.
        shift: Subpixel shift (top, left).
        width: Target width.
        height: Target height.
        **kwargs: Extra parameters to merge.

    Returns:
        Final dictionary of keyword arguments for the scale function.
    """
    return {"width": width, "height": height, "src_top": shift[0], "src_left": shift[1]} | self.kwargs | kwargs

implemented_funcs classmethod

implemented_funcs() -> frozenset[str]

Returns a set of function names that are implemented in the current class and the parent classes.

These functions determine which keyword arguments will be extracted from the __init__ method.

Returns:

  • frozenset[str]

    Frozen set of function names.

Source code in vskernels/abstract/base.py
@classproperty.cached
@classmethod
def implemented_funcs(cls) -> frozenset[str]:
    """
    Returns a set of function names that are implemented in the current class and the parent classes.

    These functions determine which keyword arguments will be extracted from the __init__ method.

    Returns:
        Frozen set of function names.
    """
    return frozenset(func for klass in cls.mro() for func in getattr(klass, "_implemented_funcs", ()))

inference

inference(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
def inference(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    from vsmlrt import flexible_inference

    tilesize, overlaps = self.calc_tilesize(clip)

    debug(f"{self}: Passing clip to inference: {clip.format!r}")
    debug(f"{self}: Passing model: {self.model}")
    debug(f"{self}: Passing tiles size: {tilesize}")
    debug(f"{self}: Passing overlaps: {overlaps}")
    debug(f"{self}: Passing extra kwargs: {kwargs}")

    u, v = flexible_inference(clip, self.model, overlaps, tilesize, self.backend, **kwargs)

    debug(f"{self}: Inferenced clip: {u.format!r}")
    debug(f"{self}: Inferenced clip: {v.format!r}")

    return core.std.ShufflePlanes([clip, u, v], [0, 0, 0], vs.YUV, clip)
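
Unlike the luma models, this override uses vsmlrt's flexible_inference to receive the two chroma planes as separate clips, then merges them with the untouched source luma via ShufflePlanes. Assuming this section documents one of the chroma variants listed above (e.g. C16F64_Chroma), typical usage keeps the source resolution:

from vsscale import ArtCNN

# The source luma is passed through unchanged; only full-resolution U/V
# planes are inferred by the model.
chroma_restored = ArtCNN.C16F64_Chroma().scale(clip)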

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Returns:

  • int

    Kernel radius.

Source code in vskernels/abstract/base.py
@BaseScalerMeta.cachedproperty
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    Raises:
        CustomNotImplementedError: If no kernel radius is defined.

    Returns:
        Kernel radius.
    """
    ...

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode
Source code in vsscale/onnx.py
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    clip = norm_expr(clip, "x 0.5 -", [1, 2], func=self.__class__)
    return super().postprocess_clip(clip, input_clip, **kwargs)

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    assert clip.format.color_family == vs.YUV

    if clip.format.subsampling_h != 0 or clip.format.subsampling_w != 0:
        chroma_scaler = Kernel.ensure_obj(kwargs.pop("chroma_scaler", Bilinear))

        format = clip.format.replace(
            subsampling_h=0,
            subsampling_w=0,
            sample_type=vs.FLOAT,
            bits_per_sample=self._pick_precision(16, 32),
        )
        dither_type = DitherType.ORDERED if DitherType.should_dither(clip.format, format) else DitherType.NONE

        debug(f"{self}.pre: Before pp; Clip format is {clip.format!r}")

        clip = limiter(
            chroma_scaler.resample(clip, **dict[str, Any](format=format, dither_type=dither_type) | kwargs),
            func=self.__class__,
        )

        debug(f"{self}.pre: Before pp; Clip format is {clip.format!r}")

        return norm_expr(clip, "x 0.5 +", [1, 2], func=self.__class__)

    return norm_expr(super().preprocess_clip(clip, **kwargs), "x 0.5 +", [1, 2], func=self.__class__)
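
The "x 0.5 +" expression here and the matching "x 0.5 -" in postprocess_clip shift float chroma from its nominal [-0.5, 0.5] range into [0, 1] for inference and back again afterwards, presumably matching the value range the chroma models were trained on.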

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> VideoNode

Scale the given clip using the ONNX model.

Parameters:

  • clip
    (VideoNode) –

    The input clip to be scaled.

  • width
    (int | None, default: None ) –

    The target width for scaling. If None, the width of the input clip will be used.

  • height
    (int | None, default: None ) –

    The target height for scaling. If None, the height of the input clip will be used.

  • shift
    (tuple[float, float], default: (0, 0) ) –

    A tuple representing the shift values for the x and y axes.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _finish_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method. Use the prefix inference_ to pass an argument to the inference method.

Returns:

  • VideoNode

    The scaled clip.

Source code in vsscale/onnx.py
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any,
) -> vs.VideoNode:
    """
    Scale the given clip using the ONNX model.

    Args:
        clip: The input clip to be scaled.
        width: The target width for scaling. If None, the width of the input clip will be used.
        height: The target height for scaling. If None, the height of the input clip will be used.
        shift: A tuple representing the shift values for the x and y axes.
        **kwargs: Additional arguments to be passed to the `preprocess_clip`, `postprocess_clip`, `inference`, and
            `_finish_scale` methods. Use the prefix `preprocess_` or `postprocess_` to pass an argument to the
            respective method. Use the prefix `inference_` to pass an argument to the inference method.

    Returns:
        The scaled clip.
    """
    from vsmlrt import Backend

    width, height = self._wh_norm(clip, width, height)

    preprocess_kwargs = dict[str, Any]()
    postprocess_kwargs = dict[str, Any]()
    inference_kwargs = dict[str, Any]()

    for k in kwargs.copy():
        for prefix, ckwargs in zip(
            ("preprocess_", "postprocess_", "inference_"), (preprocess_kwargs, postprocess_kwargs, inference_kwargs)
        ):
            if k.startswith(prefix):
                ckwargs[k.removeprefix(prefix)] = kwargs.pop(k)
                break

    debug(f"{self}: Preprocess kwargs: {preprocess_kwargs}")
    debug(f"{self}: Postprocess kwargs: {postprocess_kwargs}")
    debug(f"{self}: Inference kwargs: {inference_kwargs}")

    wclip = self.preprocess_clip(clip, **preprocess_kwargs)

    if 0 not in {clip.width, clip.height}:
        scaled = self.inference(wclip, **inference_kwargs)
    else:
        debug(f"{self}: Variable resolution clip detected!")

        if not isinstance(self.backend, (Backend.TRT, Backend.TRT_RTX)):
            raise CustomValueError(
                "Variable resolution clips can only be processed with TRT Backend!", self.__class__, self.backend
            )

        warning(f"{self.__class__.__name__}: Variable resolution clip detected!")

        if self.backend.static_shape:
            warning("static_shape is True, setting it to False...")
            self.backend.static_shape = False

        if not self.backend.max_shapes:
            warning("max_shapes is None, setting it to (1936, 1088). You may want to adjust it...")
            self.backend.max_shapes = (1936, 1088)

        if not self.backend.opt_shapes:
            warning("opt_shapes is None, setting it to (64, 64). You may want to adjust it...")
            self.backend.opt_shapes = (64, 64)

        scaled = ProcessVariableResClip.from_func(
            wclip, lambda c: self.inference(c, **inference_kwargs), False, wclip.format, self.max_instances
        )

    scaled = self.postprocess_clip(scaled, clip, **postprocess_kwargs)

    return self._finish_scale(scaled, clip, width, height, shift, **kwargs)
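
A sketch of the prefix routing in practice (clip assumed; chroma_scaler is the keyword popped by this class's preprocess_clip, as shown above):

from vskernels import Bicubic
from vsscale import ArtCNN

# "preprocess_chroma_scaler" has its prefix stripped and is forwarded to
# preprocess_clip as chroma_scaler; unprefixed kwargs reach the final
# scaling step instead.
out = ArtCNN.C16F64_Chroma().scale(clip, preprocess_chroma_scaler=Bicubic)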

supersample

supersample(
    clip: VideoNode,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNode

Supersample a clip by a given scaling factor.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • rfactor
    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Returns:

  • VideoNode

    The supersampled clip.

Source code in vskernels/abstract/base.py
def supersample(
    self, clip: vs.VideoNode, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> vs.VideoNode:
    """
    Supersample a clip by a given scaling factor.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        rfactor: Scaling factor for supersampling.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Raises:
        CustomValueError: If resulting resolution is non-positive.

    Returns:
        The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)
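
In practice supersample is a thin convenience over scale; for a fixed-size clip the following two calls are equivalent (clip assumed, as in the examples above):

ss = ArtCNN().supersample(clip, rfactor=2)
ss = ArtCNN().scale(clip, clip.width * 2, clip.height * 2)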

R8F64_DS

R8F64_DS(
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: BaseArtCNNLuma

The same as R8F64 but intended to also denoise and sharpen.

Example usage:

from vsscale import ArtCNN

doubled = ArtCNN.R8F64_DS().scale(clip, clip.width * 2, clip.height * 2)

Initializes the scaler with the specified parameters.

Parameters:

  • backend

    (BackendLike | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

    Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • max_instances

    (int, default: 2 ) –

    Maximum instances to spawn when scaling a variable resolution clip.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.

Methods:

  • calc_tilesize

    Reimplementation of vsmlrt.calc_tilesize helper function

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args

    Generate the keyword arguments used for scaling.

  • implemented_funcs

    Returns a set of function names that are implemented in the current class and the parent classes.

  • inference

    Runs inference on the given video clip using the configured model and backend.

  • kernel_radius

    Return the effective kernel radius for the scaler.

  • postprocess_clip

    Handles postprocessing of the model's output after inference.

  • preprocess_clip
  • scale

    Scale the given clip using the ONNX model.

  • supersample

    Supersample a clip by a given scaling factor.

Attributes:

Source code in vsscale/onnx.py
def __init__(
    self,
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any,
) -> None:
    """
    Initializes the scaler with the specified parameters.

    Args:
        backend: The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will
            be automatically selected, prioritizing fp16 support.
        tiles: Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the
            model's behavior may vary when they are used.
        tilesize: The size of each tile when splitting the image (if tiles are enabled).
        overlap: The size of overlap between tiles.
        max_instances: Maximum instances to spawn when scaling a variable resolution clip.
        kernel: Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
        scaler: Scaler used for scaling operations. Defaults to kernel.
        shifter: Kernel used for shifting operations. Defaults to kernel.
        **kwargs: Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
    """
    from vsmlrt import ArtCNNModel, models_path

    super().__init__(
        (SPath(models_path) / "ArtCNN" / f"{ArtCNNModel(self._model).name}.onnx").to_str(),
        backend,
        tiles,
        tilesize,
        overlap,
        1,
        max_instances,
        kernel=kernel,
        scaler=scaler,
        shifter=shifter,
        **kwargs,
    )

backend instance-attribute

backend = autoselect_backend(**(default_args | kwargs))

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(resolve())

multiple instance-attribute

multiple = multiple

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

pretty_string property

pretty_string: str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]

Reimplementation of vsmlrt.calc_tilesize helper function

Source code in vsscale/onnx.py
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    """
    Reimplementation of vsmlrt.calc_tilesize helper function
    """

    from vsmlrt import calc_tilesize

    kwargs = {
        "tiles": self.tiles,
        "tilesize": self.tilesize,
        "width": clip.width,
        "height": clip.height,
        "multiple": self.multiple,
        "overlap_w": self.overlap_w,
        "overlap_h": self.overlap_h,
    } | kwargs

    return calc_tilesize(**kwargs)

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code in vskernels/abstract/base.py
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code in vskernels/abstract/base.py
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width
    (int | None, default: None ) –

    Target width.

  • height
    (int | None, default: None ) –

    Target height.

  • **kwargs
    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.

Source code in vskernels/abstract/base.py
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    Args:
        clip: The source clip.
        shift: Subpixel shift (top, left).
        width: Target width.
        height: Target height.
        **kwargs: Extra parameters to merge.

    Returns:
        Final dictionary of keyword arguments for the scale function.
    """
    return {"width": width, "height": height, "src_top": shift[0], "src_left": shift[1]} | self.kwargs | kwargs

implemented_funcs classmethod

implemented_funcs() -> frozenset[str]

Returns a set of function names that are implemented in the current class and the parent classes.

These functions determine which keyword arguments will be extracted from the __init__ method.

Returns:

  • frozenset[str]

    Frozen set of function names.

Source code in vskernels/abstract/base.py
@classproperty.cached
@classmethod
def implemented_funcs(cls) -> frozenset[str]:
    """
    Returns a set of function names that are implemented in the current class and the parent classes.

    These functions determine which keyword arguments will be extracted from the __init__ method.

    Returns:
        Frozen set of function names.
    """
    return frozenset(func for klass in cls.mro() for func in getattr(klass, "_implemented_funcs", ()))

inference

inference(clip: VideoNode, **kwargs: Any) -> VideoNode

Runs inference on the given video clip using the configured model and backend.

Source code in vsscale/onnx.py
def inference(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    """
    Runs inference on the given video clip using the configured model and backend.
    """

    from vsmlrt import inference

    tilesize, overlaps = self.calc_tilesize(clip)

    debug(f"{self}: Passing clip to inference: {clip.format!r}")
    debug(f"{self}: Passing model: {self.model}")
    debug(f"{self}: Passing tiles size: {tilesize}")
    debug(f"{self}: Passing overlaps: {overlaps}")
    debug(f"{self}: Passing extra kwargs: {kwargs}")

    return inference(clip, self.model, overlaps, tilesize, self.backend, **kwargs)

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Returns:

  • int

    Kernel radius.

Source code in vskernels/abstract/base.py
@BaseScalerMeta.cachedproperty
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    Raises:
        CustomNotImplementedError: If no kernel radius is defined.

    Returns:
        Kernel radius.
    """
    ...

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode

Handles postprocessing of the model's output after inference.

Source code in vsscale/onnx.py
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    """
    Handles postprocessing of the model's output after inference.
    """
    debug(f"{self}.post: Before pp; Clip format is {clip.format!r}")

    clip = depth(
        clip,
        input_clip,
        dither_type=DitherType.ORDERED if 0 in {clip.width, clip.height} else DitherType.AUTO,
        **kwargs,
    )

    debug(f"{self}.post: After pp; Clip format is {clip.format!r}")

    return clip
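
Note the dither selection: variable resolution intermediates (width or height of 0) are forced to ORDERED dithering, while fixed-size clips fall back to the usual AUTO heuristics.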

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    return super().preprocess_clip(get_y(clip), **kwargs)

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> VideoNode

Scale the given clip using the ONNX model.

Parameters:

  • clip
    (VideoNode) –

    The input clip to be scaled.

  • width
    (int | None, default: None ) –

    The target width for scaling. If None, the width of the input clip will be used.

  • height
    (int | None, default: None ) –

    The target height for scaling. If None, the height of the input clip will be used.

  • shift
    (tuple[float, float], default: (0, 0) ) –

    A tuple representing the shift values for the x and y axes.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _finish_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method. Use the prefix inference_ to pass an argument to the inference method.

Returns:

  • VideoNode

    The scaled clip.

Source code in vsscale/onnx.py
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any,
) -> vs.VideoNode:
    """
    Scale the given clip using the ONNX model.

    Args:
        clip: The input clip to be scaled.
        width: The target width for scaling. If None, the width of the input clip will be used.
        height: The target height for scaling. If None, the height of the input clip will be used.
        shift: A tuple representing the shift values for the x and y axes.
        **kwargs: Additional arguments to be passed to the `preprocess_clip`, `postprocess_clip`, `inference`, and
            `_finish_scale` methods. Use the prefix `preprocess_` or `postprocess_` to pass an argument to the
            respective method. Use the prefix `inference_` to pass an argument to the inference method.

    Returns:
        The scaled clip.
    """
    from vsmlrt import Backend

    width, height = self._wh_norm(clip, width, height)

    preprocess_kwargs = dict[str, Any]()
    postprocess_kwargs = dict[str, Any]()
    inference_kwargs = dict[str, Any]()

    for k in kwargs.copy():
        for prefix, ckwargs in zip(
            ("preprocess_", "postprocess_", "inference_"), (preprocess_kwargs, postprocess_kwargs, inference_kwargs)
        ):
            if k.startswith(prefix):
                ckwargs[k.removeprefix(prefix)] = kwargs.pop(k)
                break

    debug(f"{self}: Preprocess kwargs: {preprocess_kwargs}")
    debug(f"{self}: Postprocess kwargs: {postprocess_kwargs}")
    debug(f"{self}: Inference kwargs: {inference_kwargs}")

    wclip = self.preprocess_clip(clip, **preprocess_kwargs)

    if 0 not in {clip.width, clip.height}:
        scaled = self.inference(wclip, **inference_kwargs)
    else:
        debug(f"{self}: Variable resolution clip detected!")

        if not isinstance(self.backend, (Backend.TRT, Backend.TRT_RTX)):
            raise CustomValueError(
                "Variable resolution clips can only be processed with TRT Backend!", self.__class__, self.backend
            )

        warning(f"{self.__class__.__name__}: Variable resolution clip detected!")

        if self.backend.static_shape:
            warning("static_shape is True, setting it to False...")
            self.backend.static_shape = False

        if not self.backend.max_shapes:
            warning("max_shapes is None, setting it to (1936, 1088). You may want to adjust it...")
            self.backend.max_shapes = (1936, 1088)

        if not self.backend.opt_shapes:
            warning("opt_shapes is None, setting it to (64, 64). You may want to adjust it...")
            self.backend.opt_shapes = (64, 64)

        scaled = ProcessVariableResClip.from_func(
            wclip, lambda c: self.inference(c, **inference_kwargs), False, wclip.format, self.max_instances
        )

    scaled = self.postprocess_clip(scaled, clip, **postprocess_kwargs)

    return self._finish_scale(scaled, clip, width, height, shift, **kwargs)
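
Rather than relying on the runtime warnings above to patch the backend, a variable resolution workflow can configure a dynamic-shape TRT backend up front. A sketch (the shape values are illustrative, and vres_clip stands in for an actual variable resolution source):

from vsmlrt import Backend
from vsscale import ArtCNN

# static_shape=False plus explicit max/opt shapes avoids the fallback
# defaults of (1936, 1088) and (64, 64) applied inside scale() above.
backend = Backend.TRT(static_shape=False, max_shapes=(1936, 1088), opt_shapes=(1920, 1080))
doubled = ArtCNN.R8F64_DS(backend=backend).scale(vres_clip, 1920, 1080)
# Target dimensions are given explicitly here since the input clip has none.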

supersample

supersample(
    clip: VideoNode,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNode

Supersample a clip by a given scaling factor.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • rfactor
    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Returns:

  • VideoNode

    The supersampled clip.

Source code in vskernels/abstract/base.py
def supersample(
    self, clip: vs.VideoNode, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> vs.VideoNode:
    """
    Supersample a clip by a given scaling factor.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        rfactor: Scaling factor for supersampling.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Raises:
        CustomValueError: If resulting resolution is non-positive.

    Returns:
        The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)

R8F64_JPEG420

R8F64_JPEG420(
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: BaseArtCNN, BaseOnnxScalerRGB

1x RGB model meant to clean JPEG artifacts and to fix chroma subsampling.

Example usage:

from vsscale import ArtCNN

cleaned = ArtCNN.R8F64_JPEG420().scale(clip)

Initializes the scaler with the specified parameters.

Parameters:

  • backend

    (BackendLike | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

    Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • max_instances

    (int, default: 2 ) –

    Maximum instances to spawn when scaling a variable resolution clip.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.

Methods:

  • calc_tilesize

    Reimplementation of vsmlrt.calc_tilesize helper function

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args

    Generate the keyword arguments used for scaling.

  • implemented_funcs

    Returns a set of function names that are implemented in the current class and the parent classes.

  • inference

    Runs inference on the given video clip using the configured model and backend.

  • kernel_radius

    Return the effective kernel radius for the scaler.

  • postprocess_clip
  • preprocess_clip
  • scale

    Scale the given clip using the ONNX model.

  • supersample

    Supersample a clip by a given scaling factor.

Attributes:

Source code in vsscale/onnx.py
def __init__(
    self,
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any,
) -> None:
    """
    Initializes the scaler with the specified parameters.

    Args:
        backend: The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will
            be automatically selected, prioritizing fp16 support.
        tiles: Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the
            model's behavior may vary when they are used.
        tilesize: The size of each tile when splitting the image (if tiles are enabled).
        overlap: The size of overlap between tiles.
        max_instances: Maximum instances to spawn when scaling a variable resolution clip.
        kernel: Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
        scaler: Scaler used for scaling operations. Defaults to kernel.
        shifter: Kernel used for shifting operations. Defaults to kernel.
        **kwargs: Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
    """
    from vsmlrt import ArtCNNModel, models_path

    super().__init__(
        (SPath(models_path) / "ArtCNN" / f"{ArtCNNModel(self._model).name}.onnx").to_str(),
        backend,
        tiles,
        tilesize,
        overlap,
        1,
        max_instances,
        kernel=kernel,
        scaler=scaler,
        shifter=shifter,
        **kwargs,
    )

backend instance-attribute

backend = autoselect_backend(**(default_args | kwargs))

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(resolve())

multiple instance-attribute

multiple = multiple

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

pretty_string property

pretty_string: str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]

Reimplementation of vsmlrt.calc_tilesize helper function

Source code in vsscale/onnx.py
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    """
    Reimplementation of vsmlrt.calc_tilesize helper function
    """

    from vsmlrt import calc_tilesize

    kwargs = {
        "tiles": self.tiles,
        "tilesize": self.tilesize,
        "width": clip.width,
        "height": clip.height,
        "multiple": self.multiple,
        "overlap_w": self.overlap_w,
        "overlap_h": self.overlap_h,
    } | kwargs

    return calc_tilesize(**kwargs)

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code in vskernels/abstract/base.py
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code in vskernels/abstract/base.py
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width
    (int | None, default: None ) –

    Target width.

  • height
    (int | None, default: None ) –

    Target height.

  • **kwargs
    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.

Source code in vskernels/abstract/base.py
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    Args:
        clip: The source clip.
        shift: Subpixel shift (top, left).
        width: Target width.
        height: Target height.
        **kwargs: Extra parameters to merge.

    Returns:
        Final dictionary of keyword arguments for the scale function.
    """
    return {"width": width, "height": height, "src_top": shift[0], "src_left": shift[1]} | self.kwargs | kwargs

implemented_funcs classmethod

implemented_funcs() -> frozenset[str]

Returns a set of function names that are implemented in the current class and the parent classes.

These functions determine which keyword arguments will be extracted from the __init__ method.

Returns:

  • frozenset[str]

    Frozen set of function names.

Source code in vskernels/abstract/base.py
@classproperty.cached
@classmethod
def implemented_funcs(cls) -> frozenset[str]:
    """
    Returns a set of function names that are implemented in the current class and the parent classes.

    These functions determine which keyword arguments will be extracted from the __init__ method.

    Returns:
        Frozen set of function names.
    """
    return frozenset(func for klass in cls.mro() for func in getattr(klass, "_implemented_funcs", ()))

inference

inference(clip: VideoNode, **kwargs: Any) -> VideoNode

Runs inference on the given video clip using the configured model and backend.

Source code in vsscale/onnx.py
def inference(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    """
    Runs inference on the given video clip using the configured model and backend.
    """

    from vsmlrt import inference

    tilesize, overlaps = self.calc_tilesize(clip)

    debug(f"{self}: Passing clip to inference: {clip.format!r}")
    debug(f"{self}: Passing model: {self.model}")
    debug(f"{self}: Passing tiles size: {tilesize}")
    debug(f"{self}: Passing overlaps: {overlaps}")
    debug(f"{self}: Passing extra kwargs: {kwargs}")

    return inference(clip, self.model, overlaps, tilesize, self.backend, **kwargs)

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Returns:

  • int

    Kernel radius.

Source code in vskernels/abstract/base.py
@BaseScalerMeta.cachedproperty
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    Raises:
        CustomNotImplementedError: If no kernel radius is defined.

    Returns:
        Kernel radius.
    """
    ...

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode
Source code in vsscale/onnx.py
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    if get_video_format(clip) != get_video_format(input_clip):
        kwargs = (
            dict[str, Any](
                format=input_clip,
                matrix=Matrix.from_video(input_clip, func=self.__class__),
                range=ColorRange.from_video(input_clip, func=self.__class__),
                dither_type=DitherType.ORDERED,
            )
            | kwargs
        )
        clip = self.kernel.resample(clip, **kwargs)

    return clip

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    clip = self.kernel.resample(clip, self._pick_precision(vs.RGBH, vs.RGBS), Matrix.RGB, **kwargs)
    return limiter(clip, func=self.__class__)
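
Together with postprocess_clip above, this makes the RGB models effectively format-agnostic: any input is resampled to fp16 or fp32 RGB (RGBH when the backend supports half precision) for inference, then converted back to the source format, matrix, and range afterwards.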

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> VideoNode

Scale the given clip using the ONNX model.

Parameters:

  • clip
    (VideoNode) –

    The input clip to be scaled.

  • width
    (int | None, default: None ) –

    The target width for scaling. If None, the width of the input clip will be used.

  • height
    (int | None, default: None ) –

    The target height for scaling. If None, the height of the input clip will be used.

  • shift
    (tuple[float, float], default: (0, 0) ) –

    A tuple representing the shift values for the x and y axes.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _finish_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method. Use the prefix inference_ to pass an argument to the inference method.

Returns:

  • VideoNode

    The scaled clip.

Source code in vsscale/onnx.py
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any,
) -> vs.VideoNode:
    """
    Scale the given clip using the ONNX model.

    Args:
        clip: The input clip to be scaled.
        width: The target width for scaling. If None, the width of the input clip will be used.
        height: The target height for scaling. If None, the height of the input clip will be used.
        shift: A tuple representing the shift values for the x and y axes.
        **kwargs: Additional arguments to be passed to the `preprocess_clip`, `postprocess_clip`, `inference`, and
            `_finish_scale` methods. Use the prefix `preprocess_` or `postprocess_` to pass an argument to the
            respective method. Use the prefix `inference_` to pass an argument to the inference method.

    Returns:
        The scaled clip.
    """
    from vsmlrt import Backend

    width, height = self._wh_norm(clip, width, height)

    preprocess_kwargs = dict[str, Any]()
    postprocess_kwargs = dict[str, Any]()
    inference_kwargs = dict[str, Any]()

    for k in kwargs.copy():
        for prefix, ckwargs in zip(
            ("preprocess_", "postprocess_", "inference_"), (preprocess_kwargs, postprocess_kwargs, inference_kwargs)
        ):
            if k.startswith(prefix):
                ckwargs[k.removeprefix(prefix)] = kwargs.pop(k)
                break

    debug(f"{self}: Preprocess kwargs: {preprocess_kwargs}")
    debug(f"{self}: Postprocess kwargs: {postprocess_kwargs}")
    debug(f"{self}: Inference kwargs: {inference_kwargs}")

    wclip = self.preprocess_clip(clip, **preprocess_kwargs)

    if 0 not in {clip.width, clip.height}:
        scaled = self.inference(wclip, **inference_kwargs)
    else:
        debug(f"{self}: Variable resolution clip detected!")

        if not isinstance(self.backend, (Backend.TRT, Backend.TRT_RTX)):
            raise CustomValueError(
                "Variable resolution clips can only be processed with TRT Backend!", self.__class__, self.backend
            )

        warning(f"{self.__class__.__name__}: Variable resolution clip detected!")

        if self.backend.static_shape:
            warning("static_shape is True, setting it to False...")
            self.backend.static_shape = False

        if not self.backend.max_shapes:
            warning("max_shapes is None, setting it to (1936, 1088). You may want to adjust it...")
            self.backend.max_shapes = (1936, 1088)

        if not self.backend.opt_shapes:
            warning("opt_shapes is None, setting it to (64, 64). You may want to adjust it...")
            self.backend.opt_shapes = (64, 64)

        scaled = ProcessVariableResClip.from_func(
            wclip, lambda c: self.inference(c, **inference_kwargs), False, wclip.format, self.max_instances
        )

    scaled = self.postprocess_clip(scaled, clip, **postprocess_kwargs)

    return self._finish_scale(scaled, clip, width, height, shift, **kwargs)

supersample

supersample(
    clip: VideoNode,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNode

Supersample a clip by a given scaling factor.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • rfactor
    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Returns:

  • VideoNode

    The supersampled clip.

Source code in vskernels/abstract/base.py
def supersample(
    self, clip: vs.VideoNode, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> vs.VideoNode:
    """
    Supersample a clip by a given scaling factor.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        rfactor: Scaling factor for supersampling.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Raises:
        CustomValueError: If resulting resolution is non-positive.

    Returns:
        The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)

R8F64_JPEG444

R8F64_JPEG444(
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: BaseArtCNN, BaseOnnxScalerRGB

1x RGB model meant to clean JPEG artifacts.

Example usage:

from vsscale import ArtCNN

cleaned = ArtCNN.R8F64_JPEG444().scale(clip)

Initializes the scaler with the specified parameters.

Parameters:

  • backend

    (BackendLike | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

    Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • max_instances

    (int, default: 2 ) –

    Maximum instances to spawn when scaling a variable resolution clip.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.

Methods:

  • calc_tilesize

    Reimplementation of vsmlrt.calc_tilesize helper function

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args

    Generate the keyword arguments used for scaling.

  • implemented_funcs

    Returns a set of function names that are implemented in the current class and the parent classes.

  • inference

    Runs inference on the given video clip using the configured model and backend.

  • kernel_radius

    Return the effective kernel radius for the scaler.

  • postprocess_clip
  • preprocess_clip
  • scale

    Scale the given clip using the ONNX model.

  • supersample

    Supersample a clip by a given scaling factor.

Attributes:

Source code in vsscale/onnx.py (lines 522-565)
def __init__(
    self,
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any,
) -> None:
    """
    Initializes the scaler with the specified parameters.

    Args:
        backend: The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will
            be automatically selected, prioritizing fp16 support.
        tiles: Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the
            model's behavior may vary when they are used.
        tilesize: The size of each tile when splitting the image (if tiles are enabled).
        overlap: The size of overlap between tiles.
        max_instances: Maximum instances to spawn when scaling a variable resolution clip.
        kernel: Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
        scaler: Scaler used for scaling operations. Defaults to kernel.
        shifter: Kernel used for shifting operations. Defaults to kernel.
        **kwargs: Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
    """
    from vsmlrt import ArtCNNModel, models_path

    super().__init__(
        (SPath(models_path) / "ArtCNN" / f"{ArtCNNModel(self._model).name}.onnx").to_str(),
        backend,
        tiles,
        tilesize,
        overlap,
        1,
        max_instances,
        kernel=kernel,
        scaler=scaler,
        shifter=shifter,
        **kwargs,
    )

backend instance-attribute

backend = autoselect_backend(**(default_args | kwargs))

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(resolve())

multiple instance-attribute

multiple = multiple

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

pretty_string property

pretty_string: str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]

Reimplementation of vsmlrt.calc_tilesize helper function

Source code in vsscale/onnx.py (lines 386-403)
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    """
    Reimplementation of vsmlrt.calc_tilesize helper function
    """

    from vsmlrt import calc_tilesize

    kwargs = {
        "tiles": self.tiles,
        "tilesize": self.tilesize,
        "width": clip.width,
        "height": clip.height,
        "multiple": self.multiple,
        "overlap_w": self.overlap_w,
        "overlap_h": self.overlap_h,
    } | kwargs

    return calc_tilesize(**kwargs)
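
The instance attributes above (tiles, tilesize, multiple, overlap_w, overlap_h) act as defaults, with call-time kwargs taking precedence before everything is handed to vsmlrt. A short sketch of inspecting the result (the concrete values are computed by vsmlrt.calc_tilesize):

scaler = ArtCNN.R8F64_JPEG444(tiles=2, overlap=8)
tilesize, overlaps = scaler.calc_tilesize(clip)
# tilesize is the (width, height) of each tile; overlaps is (overlap_w, overlap_h)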

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code in vskernels/abstract/base.py (lines 385-402)
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

Source code in vskernels/abstract/base.py (lines 366-383)
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)
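
Both helpers accept the same inputs; ensure_obj always yields an instance while from_param yields the resolved class. An illustrative sketch (the exact resolution rules live in the private _base_ensure_obj and _base_from_param helpers):

scaler_inst = ArtCNN.R8F64_JPEG444.ensure_obj(None)  # no input: instantiate this class
scaler_type = ArtCNN.R8F64_JPEG444.from_param(None)  # no input: return this class as-is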

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width
    (int | None, default: None ) –

    Target width.

  • height
    (int | None, default: None ) –

    Target height.

  • **kwargs
    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.

Source code in vskernels/abstract/base.py (lines 536-557)
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    Args:
        clip: The source clip.
        shift: Subpixel shift (top, left).
        width: Target width.
        height: Target height.
        **kwargs: Extra parameters to merge.

    Returns:
        Final dictionary of keyword arguments for the scale function.
    """
    return {"width": width, "height": height, "src_top": shift[0], "src_left": shift[1]} | self.kwargs | kwargs

implemented_funcs classmethod

implemented_funcs() -> frozenset[str]

Returns a set of function names that are implemented in the current class and the parent classes.

These functions determine which keyword arguments will be extracted from the __init__ method.

Returns:

Source code in vskernels/abstract/base.py (lines 443-454)
@classproperty.cached
@classmethod
def implemented_funcs(cls) -> frozenset[str]:
    """
    Returns a set of function names that are implemented in the current class and the parent classes.

    These functions determine which keyword arguments will be extracted from the __init__ method.

    Returns:
        Frozen set of function names.
    """
    return frozenset(func for klass in cls.mro() for func in getattr(klass, "_implemented_funcs", ()))
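
A quick sketch of what this exposes (the concrete names depend on each class's _implemented_funcs tuples and are illustrative here):

ArtCNN.R8F64_JPEG444.implemented_funcs
# e.g. frozenset({"scale", "supersample", ...})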

inference

inference(clip: VideoNode, **kwargs: Any) -> VideoNode

Runs inference on the given video clip using the configured model and backend.

Source code in vsscale/onnx.py (lines 434-449)
def inference(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    """
    Runs inference on the given video clip using the configured model and backend.
    """

    from vsmlrt import inference

    tilesize, overlaps = self.calc_tilesize(clip)

    debug(f"{self}: Passing clip to inference: {clip.format!r}")
    debug(f"{self}: Passing model: {self.model}")
    debug(f"{self}: Passing tiles size: {tilesize}")
    debug(f"{self}: Passing overlaps: {overlaps}")
    debug(f"{self}: Passing extra kwargs: {kwargs}")

    return inference(clip, self.model, overlaps, tilesize, self.backend, **kwargs)
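
The debug() calls above trace exactly what is forwarded to vsmlrt. Assuming they are backed by the standard logging module, the traces can be surfaced with:

import logging

logging.basicConfig(level=logging.DEBUG)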

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Returns:

  • int

    Kernel radius.

Source code in vskernels/abstract/base.py (lines 406-417)
@BaseScalerMeta.cachedproperty
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    Raises:
        CustomNotImplementedError: If no kernel radius is defined.

    Returns:
        Kernel radius.
    """
    ...

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode
Source code in vsscale/onnx.py (lines 486-499)
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    if get_video_format(clip) != get_video_format(input_clip):
        kwargs = (
            dict[str, Any](
                format=input_clip,
                matrix=Matrix.from_video(input_clip, func=self.__class__),
                range=ColorRange.from_video(input_clip, func=self.__class__),
                dither_type=DitherType.ORDERED,
            )
            | kwargs
        )
        clip = self.kernel.resample(clip, **kwargs)

    return clip

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py (lines 482-484)
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    clip = self.kernel.resample(clip, self._pick_precision(vs.RGBH, vs.RGBS), Matrix.RGB, **kwargs)
    return limiter(clip, func=self.__class__)
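
Together these two hooks round-trip the clip for the RGB model: preprocess_clip resamples to half- or single-precision RGB and clamps to legal range, while postprocess_clip above resamples back to the input's format, matrix, and range. A rough sketch of the equivalent manual conversion (assuming vskernels' Catrom and vstools' Matrix; the real hooks also handle ColorRange and dithering):

import vapoursynth as vs
from vskernels import Catrom
from vstools import Matrix

rgb = Catrom().resample(clip, vs.RGBS, Matrix.RGB)  # ~ preprocess_clip
back = Catrom().resample(rgb, clip.format, Matrix.from_video(clip))  # ~ postprocess_clip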

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> VideoNode

Scale the given clip using the ONNX model.

Parameters:

  • clip
    (VideoNode) –

    The input clip to be scaled.

  • width
    (int | None, default: None ) –

    The target width for scaling. If None, the width of the input clip will be used.

  • height
    (int | None, default: None ) –

    The target height for scaling. If None, the height of the input clip will be used.

  • shift
    (tuple[float, float], default: (0, 0) ) –

    A tuple representing the subpixel shift (top, left) applied during scaling.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _finish_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method. Use the prefix inference_ to pass an argument to the inference method.

Returns:

  • VideoNode

    The scaled clip.

Source code in vsscale/onnx.py (lines 309-384)
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any,
) -> vs.VideoNode:
    """
    Scale the given clip using the ONNX model.

    Args:
        clip: The input clip to be scaled.
        width: The target width for scaling. If None, the width of the input clip will be used.
        height: The target height for scaling. If None, the height of the input clip will be used.
        shift: A tuple representing the subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments to be passed to the `preprocess_clip`, `postprocess_clip`, `inference`, and
            `_finish_scale` methods. Use the prefix `preprocess_` or `postprocess_` to pass an argument to the
            respective method. Use the prefix `inference_` to pass an argument to the inference method.

    Returns:
        The scaled clip.
    """
    from vsmlrt import Backend

    width, height = self._wh_norm(clip, width, height)

    preprocess_kwargs = dict[str, Any]()
    postprocess_kwargs = dict[str, Any]()
    inference_kwargs = dict[str, Any]()

    for k in kwargs.copy():
        for prefix, ckwargs in zip(
            ("preprocess_", "postprocess_", "inference_"), (preprocess_kwargs, postprocess_kwargs, inference_kwargs)
        ):
            if k.startswith(prefix):
                ckwargs[k.removeprefix(prefix)] = kwargs.pop(k)
                break

    debug(f"{self}: Preprocess kwargs: {preprocess_kwargs}")
    debug(f"{self}: Postprocess kwargs: {postprocess_kwargs}")
    debug(f"{self}: Inference kwargs: {inference_kwargs}")

    wclip = self.preprocess_clip(clip, **preprocess_kwargs)

    if 0 not in {clip.width, clip.height}:
        scaled = self.inference(wclip, **inference_kwargs)
    else:
        debug(f"{self}: Variable resolution clip detected!")

        if not isinstance(self.backend, (Backend.TRT, Backend.TRT_RTX)):
            raise CustomValueError(
                "Variable resolution clips can only be processed with TRT Backend!", self.__class__, self.backend
            )

        warning(f"{self.__class__.__name__}: Variable resolution clip detected!")

        if self.backend.static_shape:
            warning("static_shape is True, setting it to False...")
            self.backend.static_shape = False

        if not self.backend.max_shapes:
            warning("max_shapes is None, setting it to (1936, 1088). You may want to adjust it...")
            self.backend.max_shapes = (1936, 1088)

        if not self.backend.opt_shapes:
            warning("opt_shapes is None, setting it to (64, 64). You may want to adjust it...")
            self.backend.opt_shapes = (64, 64)

        scaled = ProcessVariableResClip.from_func(
            wclip, lambda c: self.inference(c, **inference_kwargs), False, wclip.format, self.max_instances
        )

    scaled = self.postprocess_clip(scaled, clip, **postprocess_kwargs)

    return self._finish_scale(scaled, clip, width, height, shift, **kwargs)
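
A hedged sketch of the prefix routing described above (dither_type reaches postprocess_clip's resample call, matching the defaults shown earlier, while any unprefixed kwargs fall through to _finish_scale):

from vstools import DitherType

out = ArtCNN.R8F64_JPEG444().scale(
    clip,
    postprocess_dither_type=DitherType.ERROR_DIFFUSION,
)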

supersample

supersample(
    clip: VideoNode,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNode

Supersample a clip by a given scaling factor.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • rfactor
    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Returns:

  • VideoNode

    The supersampled clip.

Source code in vskernels/abstract/base.py (lines 501-534)
def supersample(
    self, clip: vs.VideoNode, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> vs.VideoNode:
    """
    Supersample a clip by a given scaling factor.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        rfactor: Scaling factor for supersampling.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Raises:
        CustomValueError: If resulting resolution is non-positive.

    Returns:
        The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]

Reimplementation of vsmlrt.calc_tilesize helper function

Source code in vsscale/onnx.py (lines 386-403)
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    """
    Reimplementation of vsmlrt.calc_tilesize helper function
    """

    from vsmlrt import calc_tilesize

    kwargs = {
        "tiles": self.tiles,
        "tilesize": self.tilesize,
        "width": clip.width,
        "height": clip.height,
        "multiple": self.multiple,
        "overlap_w": self.overlap_w,
        "overlap_h": self.overlap_h,
    } | kwargs

    return calc_tilesize(**kwargs)

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code in vskernels/abstract/base.py (lines 385-402)
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

Source code in vskernels/abstract/base.py (lines 366-383)
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width

    (int | None, default: None ) –

    Target width.

  • height

    (int | None, default: None ) –

    Target height.

  • **kwargs

    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.

Source code in vskernels/abstract/base.py (lines 536-557)
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    Args:
        clip: The source clip.
        shift: Subpixel shift (top, left).
        width: Target width.
        height: Target height.
        **kwargs: Extra parameters to merge.

    Returns:
        Final dictionary of keyword arguments for the scale function.
    """
    return {"width": width, "height": height, "src_top": shift[0], "src_left": shift[1]} | self.kwargs | kwargs

implemented_funcs classmethod

implemented_funcs() -> frozenset[str]

Returns a set of function names that are implemented in the current class and the parent classes.

These functions determine which keyword arguments will be extracted from the __init__ method.

Returns:

Source code in vskernels/abstract/base.py (lines 443-454)
@classproperty.cached
@classmethod
def implemented_funcs(cls) -> frozenset[str]:
    """
    Returns a set of function names that are implemented in the current class and the parent classes.

    These functions determine which keyword arguments will be extracted from the __init__ method.

    Returns:
        Frozen set of function names.
    """
    return frozenset(func for klass in cls.mro() for func in getattr(klass, "_implemented_funcs", ()))

inference

inference(clip: VideoNode, **kwargs: Any) -> VideoNode

Runs inference on the given video clip using the configured model and backend.

Source code in vsscale/onnx.py (lines 434-449)
def inference(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    """
    Runs inference on the given video clip using the configured model and backend.
    """

    from vsmlrt import inference

    tilesize, overlaps = self.calc_tilesize(clip)

    debug(f"{self}: Passing clip to inference: {clip.format!r}")
    debug(f"{self}: Passing model: {self.model}")
    debug(f"{self}: Passing tiles size: {tilesize}")
    debug(f"{self}: Passing overlaps: {overlaps}")
    debug(f"{self}: Passing extra kwargs: {kwargs}")

    return inference(clip, self.model, overlaps, tilesize, self.backend, **kwargs)

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Returns:

  • int

    Kernel radius.

Source code in vskernels/abstract/base.py (lines 406-417)
@BaseScalerMeta.cachedproperty
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    Raises:
        CustomNotImplementedError: If no kernel radius is defined.

    Returns:
        Kernel radius.
    """
    ...

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode

Handles postprocessing of the model's output after inference.

Source code in vsscale/onnx.py (lines 417-432)
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    """
    Handles postprocessing of the model's output after inference.
    """
    debug(f"{self}.post: Before pp; Clip format is {clip.format!r}")

    clip = depth(
        clip,
        input_clip,
        dither_type=DitherType.ORDERED if 0 in {clip.width, clip.height} else DitherType.AUTO,
        **kwargs,
    )

    debug(f"{self}.post: After pp; Clip format is {clip.format!r}")

    return clip

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py (lines 569-570)
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    return super().preprocess_clip(get_y(clip), **kwargs)

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> VideoNode

Scale the given clip using the ONNX model.

Parameters:

  • clip

    (VideoNode) –

    The input clip to be scaled.

  • width

    (int | None, default: None ) –

    The target width for scaling. If None, the width of the input clip will be used.

  • height

    (int | None, default: None ) –

    The target height for scaling. If None, the height of the input clip will be used.

  • shift

    (tuple[float, float], default: (0, 0) ) –

    A tuple representing the subpixel shift (top, left) applied during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _finish_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method. Use the prefix inference_ to pass an argument to the inference method.

Returns:

  • VideoNode

    The scaled clip.

Source code in vsscale/onnx.py (lines 309-384)
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any,
) -> vs.VideoNode:
    """
    Scale the given clip using the ONNX model.

    Args:
        clip: The input clip to be scaled.
        width: The target width for scaling. If None, the width of the input clip will be used.
        height: The target height for scaling. If None, the height of the input clip will be used.
        shift: A tuple representing the subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments to be passed to the `preprocess_clip`, `postprocess_clip`, `inference`, and
            `_finish_scale` methods. Use the prefix `preprocess_` or `postprocess_` to pass an argument to the
            respective method. Use the prefix `inference_` to pass an argument to the inference method.

    Returns:
        The scaled clip.
    """
    from vsmlrt import Backend

    width, height = self._wh_norm(clip, width, height)

    preprocess_kwargs = dict[str, Any]()
    postprocess_kwargs = dict[str, Any]()
    inference_kwargs = dict[str, Any]()

    for k in kwargs.copy():
        for prefix, ckwargs in zip(
            ("preprocess_", "postprocess_", "inference_"), (preprocess_kwargs, postprocess_kwargs, inference_kwargs)
        ):
            if k.startswith(prefix):
                ckwargs[k.removeprefix(prefix)] = kwargs.pop(k)
                break

    debug(f"{self}: Preprocess kwargs: {preprocess_kwargs}")
    debug(f"{self}: Postprocess kwargs: {postprocess_kwargs}")
    debug(f"{self}: Inference kwargs: {inference_kwargs}")

    wclip = self.preprocess_clip(clip, **preprocess_kwargs)

    if 0 not in {clip.width, clip.height}:
        scaled = self.inference(wclip, **inference_kwargs)
    else:
        debug(f"{self}: Variable resolution clip detected!")

        if not isinstance(self.backend, (Backend.TRT, Backend.TRT_RTX)):
            raise CustomValueError(
                "Variable resolution clips can only be processed with TRT Backend!", self.__class__, self.backend
            )

        warning(f"{self.__class__.__name__}: Variable resolution clip detected!")

        if self.backend.static_shape:
            warning("static_shape is True, setting it to False...")
            self.backend.static_shape = False

        if not self.backend.max_shapes:
            warning("max_shapes is None, setting it to (1936, 1088). You may want to adjust it...")
            self.backend.max_shapes = (1936, 1088)

        if not self.backend.opt_shapes:
            warning("opt_shapes is None, setting it to (64, 64). You may want to adjust it...")
            self.backend.opt_shapes = (64, 64)

        scaled = ProcessVariableResClip.from_func(
            wclip, lambda c: self.inference(c, **inference_kwargs), False, wclip.format, self.max_instances
        )

    scaled = self.postprocess_clip(scaled, clip, **postprocess_kwargs)

    return self._finish_scale(scaled, clip, width, height, shift, **kwargs)

supersample

supersample(
    clip: VideoNode,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNode

Supersample a clip by a given scaling factor.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • rfactor

    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Returns:

  • VideoNode

    The supersampled clip.

Source code in vskernels/abstract/base.py (lines 501-534)
def supersample(
    self, clip: vs.VideoNode, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> vs.VideoNode:
    """
    Supersample a clip by a given scaling factor.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        rfactor: Scaling factor for supersampling.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Raises:
        CustomValueError: If resulting resolution is non-positive.

    Returns:
        The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)

BaseArtCNN

BaseArtCNN(
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: BaseOnnxScaler

Initializes the scaler with the specified parameters.

Parameters:

  • backend

    (BackendLike | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

    Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • max_instances

    (int, default: 2 ) –

    Maximum instances to spawn when scaling a variable resolution clip.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.

Methods:

  • calc_tilesize

    Reimplementation of vsmlrt.calc_tilesize helper function

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args

    Generate the keyword arguments used for scaling.

  • implemented_funcs

    Returns a set of function names that are implemented in the current class and the parent classes.

  • inference

    Runs inference on the given video clip using the configured model and backend.

  • kernel_radius

    Return the effective kernel radius for the scaler.

  • postprocess_clip

    Handles postprocessing of the model's output after inference.

  • preprocess_clip

    Performs preprocessing on the clip prior to inference.

  • scale

    Scale the given clip using the ONNX model.

  • supersample

    Supersample a clip by a given scaling factor.

Attributes:

Source code in vsscale/onnx.py (lines 522-565)
def __init__(
    self,
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any,
) -> None:
    """
    Initializes the scaler with the specified parameters.

    Args:
        backend: The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will
            be automatically selected, prioritizing fp16 support.
        tiles: Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the
            model's behavior may vary when they are used.
        tilesize: The size of each tile when splitting the image (if tiles are enabled).
        overlap: The size of overlap between tiles.
        max_instances: Maximum instances to spawn when scaling a variable resolution clip.
        kernel: Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
        scaler: Scaler used for scaling operations. Defaults to kernel.
        shifter: Kernel used for shifting operations. Defaults to kernel.
        **kwargs: Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
    """
    from vsmlrt import ArtCNNModel, models_path

    super().__init__(
        (SPath(models_path) / "ArtCNN" / f"{ArtCNNModel(self._model).name}.onnx").to_str(),
        backend,
        tiles,
        tilesize,
        overlap,
        1,
        max_instances,
        kernel=kernel,
        scaler=scaler,
        shifter=shifter,
        **kwargs,
    )

backend instance-attribute

backend = autoselect_backend(**(default_args | kwargs))

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(resolve())

multiple instance-attribute

multiple = multiple

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

pretty_string property

pretty_string: str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]

Reimplementation of vsmlrt.calc_tilesize helper function

Source code in vsscale/onnx.py (lines 386-403)
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    """
    Reimplementation of vsmlrt.calc_tilesize helper function
    """

    from vsmlrt import calc_tilesize

    kwargs = {
        "tiles": self.tiles,
        "tilesize": self.tilesize,
        "width": clip.width,
        "height": clip.height,
        "multiple": self.multiple,
        "overlap_w": self.overlap_w,
        "overlap_h": self.overlap_h,
    } | kwargs

    return calc_tilesize(**kwargs)

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code in vskernels/abstract/base.py (lines 385-402)
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

Source code in vskernels/abstract/base.py (lines 366-383)
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width

    (int | None, default: None ) –

    Target width.

  • height

    (int | None, default: None ) –

    Target height.

  • **kwargs

    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.

Source code in vskernels/abstract/base.py (lines 536-557)
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    Args:
        clip: The source clip.
        shift: Subpixel shift (top, left).
        width: Target width.
        height: Target height.
        **kwargs: Extra parameters to merge.

    Returns:
        Final dictionary of keyword arguments for the scale function.
    """
    return {"width": width, "height": height, "src_top": shift[0], "src_left": shift[1]} | self.kwargs | kwargs

implemented_funcs classmethod

implemented_funcs() -> frozenset[str]

Returns a set of function names that are implemented in the current class and the parent classes.

These functions determine which keyword arguments will be extracted from the __init__ method.

Returns:

Source code in vskernels/abstract/base.py (lines 443-454)
@classproperty.cached
@classmethod
def implemented_funcs(cls) -> frozenset[str]:
    """
    Returns a set of function names that are implemented in the current class and the parent classes.

    These functions determine which keyword arguments will be extracted from the __init__ method.

    Returns:
        Frozen set of function names.
    """
    return frozenset(func for klass in cls.mro() for func in getattr(klass, "_implemented_funcs", ()))

inference

inference(clip: VideoNode, **kwargs: Any) -> VideoNode

Runs inference on the given video clip using the configured model and backend.

Source code in vsscale/onnx.py (lines 434-449)
def inference(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    """
    Runs inference on the given video clip using the configured model and backend.
    """

    from vsmlrt import inference

    tilesize, overlaps = self.calc_tilesize(clip)

    debug(f"{self}: Passing clip to inference: {clip.format!r}")
    debug(f"{self}: Passing model: {self.model}")
    debug(f"{self}: Passing tiles size: {tilesize}")
    debug(f"{self}: Passing overlaps: {overlaps}")
    debug(f"{self}: Passing extra kwargs: {kwargs}")

    return inference(clip, self.model, overlaps, tilesize, self.backend, **kwargs)

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Returns:

  • int

    Kernel radius.

Source code in vskernels/abstract/base.py (lines 406-417)
@BaseScalerMeta.cachedproperty
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    Raises:
        CustomNotImplementedError: If no kernel radius is defined.

    Returns:
        Kernel radius.
    """
    ...

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode

Handles postprocessing of the model's output after inference.

Source code in vsscale/onnx.py (lines 417-432)
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    """
    Handles postprocessing of the model's output after inference.
    """
    debug(f"{self}.post: Before pp; Clip format is {clip.format!r}")

    clip = depth(
        clip,
        input_clip,
        dither_type=DitherType.ORDERED if 0 in {clip.width, clip.height} else DitherType.AUTO,
        **kwargs,
    )

    debug(f"{self}.post: After pp; Clip format is {clip.format!r}")

    return clip

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode

Performs preprocessing on the clip prior to inference.

Source code in vsscale/onnx.py (lines 405-415)
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    """
    Performs preprocessing on the clip prior to inference.
    """
    debug(f"{self}.pre: Before pp; Clip format is {clip.format!r}")

    clip = depth(clip, self._pick_precision(16, 32), vs.FLOAT, **kwargs)

    debug(f"{self}.pre: After pp; Clip format is {clip.format!r}")

    return limiter(clip, func=self.__class__)

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> VideoNode

Scale the given clip using the ONNX model.

Parameters:

  • clip

    (VideoNode) –

    The input clip to be scaled.

  • width

    (int | None, default: None ) –

    The target width for scaling. If None, the width of the input clip will be used.

  • height

    (int | None, default: None ) –

    The target height for scaling. If None, the height of the input clip will be used.

  • shift

    (tuple[float, float], default: (0, 0) ) –

    A tuple representing the subpixel shift (top, left) applied during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _finish_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method. Use the prefix inference_ to pass an argument to the inference method.

Returns:

  • VideoNode

    The scaled clip.

Source code in vsscale/onnx.py (lines 309-384)
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any,
) -> vs.VideoNode:
    """
    Scale the given clip using the ONNX model.

    Args:
        clip: The input clip to be scaled.
        width: The target width for scaling. If None, the width of the input clip will be used.
        height: The target height for scaling. If None, the height of the input clip will be used.
        shift: A tuple representing the subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments to be passed to the `preprocess_clip`, `postprocess_clip`, `inference`, and
            `_finish_scale` methods. Use the prefix `preprocess_` or `postprocess_` to pass an argument to the
            respective method. Use the prefix `inference_` to pass an argument to the inference method.

    Returns:
        The scaled clip.
    """
    from vsmlrt import Backend

    width, height = self._wh_norm(clip, width, height)

    preprocess_kwargs = dict[str, Any]()
    postprocess_kwargs = dict[str, Any]()
    inference_kwargs = dict[str, Any]()

    for k in kwargs.copy():
        for prefix, ckwargs in zip(
            ("preprocess_", "postprocess_", "inference_"), (preprocess_kwargs, postprocess_kwargs, inference_kwargs)
        ):
            if k.startswith(prefix):
                ckwargs[k.removeprefix(prefix)] = kwargs.pop(k)
                break

    debug(f"{self}: Preprocess kwargs: {preprocess_kwargs}")
    debug(f"{self}: Postprocess kwargs: {postprocess_kwargs}")
    debug(f"{self}: Inference kwargs: {inference_kwargs}")

    wclip = self.preprocess_clip(clip, **preprocess_kwargs)

    if 0 not in {clip.width, clip.height}:
        scaled = self.inference(wclip, **inference_kwargs)
    else:
        debug(f"{self}: Variable resolution clip detected!")

        if not isinstance(self.backend, (Backend.TRT, Backend.TRT_RTX)):
            raise CustomValueError(
                "Variable resolution clips can only be processed with TRT Backend!", self.__class__, self.backend
            )

        warning(f"{self.__class__.__name__}: Variable resolution clip detected!")

        if self.backend.static_shape:
            warning("static_shape is True, setting it to False...")
            self.backend.static_shape = False

        if not self.backend.max_shapes:
            warning("max_shapes is None, setting it to (1936, 1088). You may want to adjust it...")
            self.backend.max_shapes = (1936, 1088)

        if not self.backend.opt_shapes:
            warning("opt_shapes is None, setting it to (64, 64). You may want to adjust it...")
            self.backend.opt_shapes = (64, 64)

        scaled = ProcessVariableResClip.from_func(
            wclip, lambda c: self.inference(c, **inference_kwargs), False, wclip.format, self.max_instances
        )

    scaled = self.postprocess_clip(scaled, clip, **postprocess_kwargs)

    return self._finish_scale(scaled, clip, width, height, shift, **kwargs)
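
As the checks above enforce, variable-resolution input requires a TRT backend with dynamic shapes. A hedged sketch (the Backend.TRT fields follow vsmlrt's API; the shapes mirror the fallback defaults above):

from vsmlrt import Backend
from vsscale import ArtCNN

scaler = ArtCNN(Backend.TRT(static_shape=False, opt_shapes=(64, 64), max_shapes=(1936, 1088)))
out = scaler.scale(varres_clip)  # dispatched through ProcessVariableResClip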

supersample

supersample(
    clip: VideoNode,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNode

Supersample a clip by a given scaling factor.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • rfactor

    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Returns:

  • VideoNode

    The supersampled clip.

Source code in vskernels/abstract/base.py (lines 501-534)
def supersample(
    self, clip: vs.VideoNode, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> vs.VideoNode:
    """
    Supersample a clip by a given scaling factor.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        rfactor: Scaling factor for supersampling.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Raises:
        CustomValueError: If resulting resolution is non-positive.

    Returns:
        The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)

BaseArtCNNChroma

BaseArtCNNChroma(
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: BaseArtCNN

Initializes the scaler with the specified parameters.

Parameters:

  • backend

    (BackendLike | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

    Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • max_instances

    (int, default: 2 ) –

    Maximum instances to spawn when scaling a variable resolution clip.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.

Methods:

  • calc_tilesize

    Reimplementation of vsmlrt.calc_tilesize helper function

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args

    Generate the keyword arguments used for scaling.

  • implemented_funcs

    Returns a set of function names that are implemented in the current class and the parent classes.

  • inference
  • kernel_radius

    Return the effective kernel radius for the scaler.

  • postprocess_clip
  • preprocess_clip
  • scale

    Scale the given clip using the ONNX model.

  • supersample

    Supersample a clip by a given scaling factor.

Attributes:

Source code in vsscale/onnx.py
def __init__(
    self,
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any,
) -> None:
    """
    Initializes the scaler with the specified parameters.

    Args:
        backend: The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will
            be automatically selected, prioritizing fp16 support.
        tiles: Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the
            model's behavior may vary when they are used.
        tilesize: The size of each tile when splitting the image (if tiles are enabled).
        overlap: The size of overlap between tiles.
        max_instances: Maximum instances to spawn when scaling a variable resolution clip.
        kernel: Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
        scaler: Scaler used for scaling operations. Defaults to kernel.
        shifter: Kernel used for shifting operations. Defaults to kernel.
        **kwargs: Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
    """
    from vsmlrt import ArtCNNModel, models_path

    super().__init__(
        (SPath(models_path) / "ArtCNN" / f"{ArtCNNModel(self._model).name}.onnx").to_str(),
        backend,
        tiles,
        tilesize,
        overlap,
        1,
        max_instances,
        kernel=kernel,
        scaler=scaler,
        shifter=shifter,
        **kwargs,
    )
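
Example usage (a minimal sketch, assuming the C16F64_Chroma variant and a chroma-subsampled YUV clip; the model predicts full-resolution chroma, so the output is 4:4:4):

from vsscale import ArtCNN

# upscale the chroma planes of a 4:2:0 clip to luma resolution
yuv444 = ArtCNN.C16F64_Chroma().scale(clip)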

backend instance-attribute

backend = autoselect_backend(**(default_args | kwargs))

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(resolve())

multiple instance-attribute

multiple = multiple

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

pretty_string property

pretty_string: str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]

Reimplementation of vsmlrt.calc_tilesize helper function

Source code in vsscale/onnx.py
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    """
    Reimplementation of vsmlrt.calc_tilesize helper function
    """

    from vsmlrt import calc_tilesize

    kwargs = {
        "tiles": self.tiles,
        "tilesize": self.tilesize,
        "width": clip.width,
        "height": clip.height,
        "multiple": self.multiple,
        "overlap_w": self.overlap_w,
        "overlap_h": self.overlap_h,
    } | kwargs

    return calc_tilesize(**kwargs)
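
The tiling values merged here come straight from the constructor. For example (a sketch; the numbers are illustrative, not recommendations):

from vsscale import ArtCNN

# run inference on a 2x2 grid of tiles with 16px overlap to reduce VRAM usage
scaler = ArtCNN(tiles=(2, 2), overlap=16)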

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code in vskernels/abstract/base.py
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code in vskernels/abstract/base.py
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)
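
For example (a sketch of the two resolution helpers; passing the class itself avoids any string-registration concerns):

from vsscale import ArtCNN

scaler_t = ArtCNN.from_param(ArtCNN)  # resolve the input to a scaler type
scaler = ArtCNN.ensure_obj(scaler_t)  # instantiate it if it is not already an instance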

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width

    (int | None, default: None ) –

    Target width.

  • height

    (int | None, default: None ) –

    Target height.

  • **kwargs

    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.

Source code in vskernels/abstract/base.py
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    Args:
        clip: The source clip.
        shift: Subpixel shift (top, left).
        width: Target width.
        height: Target height.
        **kwargs: Extra parameters to merge.

    Returns:
        Final dictionary of keyword arguments for the scale function.
    """
    return {"width": width, "height": height, "src_top": shift[0], "src_left": shift[1]} | self.kwargs | kwargs

implemented_funcs classmethod

implemented_funcs() -> frozenset[str]

Returns a set of function names that are implemented in the current class and the parent classes.

These functions determine which keyword arguments will be extracted from the __init__ method.

Returns:

  • frozenset[str]

    Frozen set of function names.

Source code in vskernels/abstract/base.py
@classproperty.cached
@classmethod
def implemented_funcs(cls) -> frozenset[str]:
    """
    Returns a set of function names that are implemented in the current class and the parent classes.

    These functions determine which keyword arguments will be extracted from the __init__ method.

    Returns:
        Frozen set of function names.
    """
    return frozenset(func for klass in cls.mro() for func in getattr(klass, "_implemented_funcs", ()))

inference

inference(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
def inference(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    from vsmlrt import flexible_inference

    tilesize, overlaps = self.calc_tilesize(clip)

    debug(f"{self}: Passing clip to inference: {clip.format!r}")
    debug(f"{self}: Passing model: {self.model}")
    debug(f"{self}: Passing tiles size: {tilesize}")
    debug(f"{self}: Passing overlaps: {overlaps}")
    debug(f"{self}: Passing extra kwargs: {kwargs}")

    u, v = flexible_inference(clip, self.model, overlaps, tilesize, self.backend, **kwargs)

    debug(f"{self}: Inferenced clip: {u.format!r}")
    debug(f"{self}: Inferenced clip: {v.format!r}")

    # recombine the original luma with the predicted U and V planes into one YUV clip
    return core.std.ShufflePlanes([clip, u, v], [0, 0, 0], vs.YUV, clip)

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Returns:

  • int

    Kernel radius.

Source code in vskernels/abstract/base.py
406
407
408
409
410
411
412
413
414
415
416
417
@BaseScalerMeta.cachedproperty
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    Raises:
        CustomNotImplementedError: If no kernel radius is defined.

    Returns:
        Kernel radius.
    """
    ...

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode
Source code in vsscale/onnx.py
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    # shift the chroma planes back from the model's [0, 1] range to signed [-0.5, 0.5]
    clip = norm_expr(clip, "x 0.5 -", [1, 2], func=self.__class__)
    return super().postprocess_clip(clip, input_clip, **kwargs)

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    assert clip.format.color_family == vs.YUV

    if clip.format.subsampling_h != 0 or clip.format.subsampling_w != 0:
        chroma_scaler = Kernel.ensure_obj(kwargs.pop("chroma_scaler", Bilinear))

        format = clip.format.replace(
            subsampling_h=0,
            subsampling_w=0,
            sample_type=vs.FLOAT,
            bits_per_sample=self._pick_precision(16, 32),
        )
        dither_type = DitherType.ORDERED if DitherType.should_dither(clip.format, format) else DitherType.NONE

        debug(f"{self}.pre: Before pp; Clip format is {clip.format!r}")

        clip = limiter(
            chroma_scaler.resample(clip, **dict[str, Any](format=format, dither_type=dither_type) | kwargs),
            func=self.__class__,
        )

        debug(f"{self}.pre: Before pp; Clip format is {clip.format!r}")

        # shift signed float chroma ([-0.5, 0.5]) into the [0, 1] range the model expects
        return norm_expr(clip, "x 0.5 +", [1, 2], func=self.__class__)

    return norm_expr(super().preprocess_clip(clip, **kwargs), "x 0.5 +", [1, 2], func=self.__class__)

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> VideoNode

Scale the given clip using the ONNX model.

Parameters:

  • clip

    (VideoNode) –

    The input clip to be scaled.

  • width

    (int | None, default: None ) –

    The target width for scaling. If None, the width of the input clip will be used.

  • height

    (int | None, default: None ) –

    The target height for scaling. If None, the height of the input clip will be used.

  • shift

    (tuple[float, float], default: (0, 0) ) –

    A tuple representing the shift values for the x and y axes.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _final_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method. Use the prefix inference_ to pass an argument to the inference method.

Returns:

  • VideoNode

    The scaled clip.

Source code in vsscale/onnx.py
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any,
) -> vs.VideoNode:
    """
    Scale the given clip using the ONNX model.

    Args:
        clip: The input clip to be scaled.
        width: The target width for scaling. If None, the width of the input clip will be used.
        height: The target height for scaling. If None, the height of the input clip will be used.
        shift: A tuple representing the shift values for the x and y axes.
        **kwargs: Additional arguments to be passed to the `preprocess_clip`, `postprocess_clip`, `inference`, and
            `_final_scale` methods. Use the prefix `preprocess_` or `postprocess_` to pass an argument to the
            respective method. Use the prefix `inference_` to pass an argument to the inference method.

    Returns:
        The scaled clip.
    """
    from vsmlrt import Backend

    width, height = self._wh_norm(clip, width, height)

    preprocess_kwargs = dict[str, Any]()
    postprocess_kwargs = dict[str, Any]()
    inference_kwargs = dict[str, Any]()

    for k in kwargs.copy():
        for prefix, ckwargs in zip(
            ("preprocess_", "postprocess_", "inference_"), (preprocess_kwargs, postprocess_kwargs, inference_kwargs)
        ):
            if k.startswith(prefix):
                ckwargs[k.removeprefix(prefix)] = kwargs.pop(k)
                break

    debug(f"{self}: Preprocess kwargs: {preprocess_kwargs}")
    debug(f"{self}: Postprocess kwargs: {postprocess_kwargs}")
    debug(f"{self}: Inference kwargs: {inference_kwargs}")

    wclip = self.preprocess_clip(clip, **preprocess_kwargs)

    if 0 not in {clip.width, clip.height}:
        scaled = self.inference(wclip, **inference_kwargs)
    else:
        debug(f"{self}: Variable resolution clip detected!")

        if not isinstance(self.backend, (Backend.TRT, Backend.TRT_RTX)):
            raise CustomValueError(
                "Variable resolution clips can only be processed with TRT Backend!", self.__class__, self.backend
            )

        warning(f"{self.__class__.__name__}: Variable resolution clip detected!")

        if self.backend.static_shape:
            warning("static_shape is True, setting it to False...")
            self.backend.static_shape = False

        if not self.backend.max_shapes:
            warning("max_shapes is None, setting it to (1936, 1088). You may want to adjust it...")
            self.backend.max_shapes = (1936, 1088)

        if not self.backend.opt_shapes:
            warning("opt_shapes is None, setting it to (64, 64). You may want to adjust it...")
            self.backend.opt_shapes = (64, 64)

        scaled = ProcessVariableResClip.from_func(
            wclip, lambda c: self.inference(c, **inference_kwargs), False, wclip.format, self.max_instances
        )

    scaled = self.postprocess_clip(scaled, clip, **postprocess_kwargs)

    return self._finish_scale(scaled, clip, width, height, shift, **kwargs)
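
The prefix routing above can be exercised like this (a sketch; chroma_scaler is the argument popped by preprocess_clip earlier in this class):

from vskernels import Bilinear
from vsscale import ArtCNN

# the "preprocess_" prefix is stripped and chroma_scaler is forwarded to preprocess_clip
out = ArtCNN.C16F64_Chroma().scale(clip, preprocess_chroma_scaler=Bilinear)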

supersample

supersample(
    clip: VideoNode,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNode

Supersample a clip by a given scaling factor.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • rfactor

    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Returns:

  • VideoNode

    The supersampled clip.

Source code in vskernels/abstract/base.py
def supersample(
    self, clip: vs.VideoNode, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> vs.VideoNode:
    """
    Supersample a clip by a given scaling factor.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        rfactor: Scaling factor for supersampling.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Raises:
        CustomValueError: If resulting resolution is non-positive.

    Returns:
        The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)

BaseArtCNNLuma

BaseArtCNNLuma(
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: BaseArtCNN

Initializes the scaler with the specified parameters.

Parameters:

  • backend

    (BackendLike | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

    Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • max_instances

    (int, default: 2 ) –

    Maximum instances to spawn when scaling a variable resolution clip.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.

Methods:

  • calc_tilesize

    Reimplementation of vsmlrt.calc_tilesize helper function

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args

    Generate the keyword arguments used for scaling.

  • implemented_funcs

    Returns a set of function names that are implemented in the current class and the parent classes.

  • inference

    Runs inference on the given video clip using the configured model and backend.

  • kernel_radius

    Return the effective kernel radius for the scaler.

  • postprocess_clip

    Handles postprocessing of the model's output after inference.

  • preprocess_clip
  • scale

    Scale the given clip using the ONNX model.

  • supersample

    Supersample a clip by a given scaling factor.

Attributes:

Source code in vsscale/onnx.py
def __init__(
    self,
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any,
) -> None:
    """
    Initializes the scaler with the specified parameters.

    Args:
        backend: The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will
            be automatically selected, prioritizing fp16 support.
        tiles: Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the
            model's behavior may vary when they are used.
        tilesize: The size of each tile when splitting the image (if tiles are enabled).
        overlap: The size of overlap between tiles.
        max_instances: Maximum instances to spawn when scaling a variable resolution clip.
        kernel: Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
        scaler: Scaler used for scaling operations. Defaults to kernel.
        shifter: Kernel used for shifting operations. Defaults to kernel.
        **kwargs: Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
    """
    from vsmlrt import ArtCNNModel, models_path

    super().__init__(
        (SPath(models_path) / "ArtCNN" / f"{ArtCNNModel(self._model).name}.onnx").to_str(),
        backend,
        tiles,
        tilesize,
        overlap,
        1,
        max_instances,
        kernel=kernel,
        scaler=scaler,
        shifter=shifter,
        **kwargs,
    )

backend instance-attribute

backend = autoselect_backend(**(default_args | kwargs))

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(resolve())

multiple instance-attribute

multiple = multiple

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

pretty_string property

pretty_string: str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]

Reimplementation of vsmlrt.calc_tilesize helper function

Source code in vsscale/onnx.py
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    """
    Reimplementation of vsmlrt.calc_tilesize helper function
    """

    from vsmlrt import calc_tilesize

    kwargs = {
        "tiles": self.tiles,
        "tilesize": self.tilesize,
        "width": clip.width,
        "height": clip.height,
        "multiple": self.multiple,
        "overlap_w": self.overlap_w,
        "overlap_h": self.overlap_h,
    } | kwargs

    return calc_tilesize(**kwargs)

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code in vskernels/abstract/base.py
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code in vskernels/abstract/base.py
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width

    (int | None, default: None ) –

    Target width.

  • height

    (int | None, default: None ) –

    Target height.

  • **kwargs

    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.

Source code in vskernels/abstract/base.py
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    Args:
        clip: The source clip.
        shift: Subpixel shift (top, left).
        width: Target width.
        height: Target height.
        **kwargs: Extra parameters to merge.

    Returns:
        Final dictionary of keyword arguments for the scale function.
    """
    return {"width": width, "height": height, "src_top": shift[0], "src_left": shift[1]} | self.kwargs | kwargs

implemented_funcs classmethod

implemented_funcs() -> frozenset[str]

Returns a set of function names that are implemented in the current class and the parent classes.

These functions determine which keyword arguments will be extracted from the __init__ method.

Returns:

  • frozenset[str]

    Frozen set of function names.

Source code in vskernels/abstract/base.py
@classproperty.cached
@classmethod
def implemented_funcs(cls) -> frozenset[str]:
    """
    Returns a set of function names that are implemented in the current class and the parent classes.

    These functions determine which keyword arguments will be extracted from the __init__ method.

    Returns:
        Frozen set of function names.
    """
    return frozenset(func for klass in cls.mro() for func in getattr(klass, "_implemented_funcs", ()))

inference

inference(clip: VideoNode, **kwargs: Any) -> VideoNode

Runs inference on the given video clip using the configured model and backend.

Source code in vsscale/onnx.py
def inference(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    """
    Runs inference on the given video clip using the configured model and backend.
    """

    from vsmlrt import inference

    tilesize, overlaps = self.calc_tilesize(clip)

    debug(f"{self}: Passing clip to inference: {clip.format!r}")
    debug(f"{self}: Passing model: {self.model}")
    debug(f"{self}: Passing tiles size: {tilesize}")
    debug(f"{self}: Passing overlaps: {overlaps}")
    debug(f"{self}: Passing extra kwargs: {kwargs}")

    return inference(clip, self.model, overlaps, tilesize, self.backend, **kwargs)

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Returns:

  • int

    Kernel radius.

Source code in vskernels/abstract/base.py
@BaseScalerMeta.cachedproperty
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    Raises:
        CustomNotImplementedError: If no kernel radius is defined.

    Returns:
        Kernel radius.
    """
    ...

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode

Handles postprocessing of the model's output after inference.

Source code in vsscale/onnx.py
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    """
    Handles postprocessing of the model's output after inference.
    """
    debug(f"{self}.post: Before pp; Clip format is {clip.format!r}")

    clip = depth(
        clip,
        input_clip,
        dither_type=DitherType.ORDERED if 0 in {clip.width, clip.height} else DitherType.AUTO,
        **kwargs,
    )

    debug(f"{self}.post: After pp; Clip format is {clip.format!r}")

    return clip

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    # luma models only see the Y plane; extract it before the generic preprocessing
    return super().preprocess_clip(get_y(clip), **kwargs)

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> VideoNode

Scale the given clip using the ONNX model.

Parameters:

  • clip

    (VideoNode) –

    The input clip to be scaled.

  • width

    (int | None, default: None ) –

    The target width for scaling. If None, the width of the input clip will be used.

  • height

    (int | None, default: None ) –

    The target height for scaling. If None, the height of the input clip will be used.

  • shift

    (tuple[float, float], default: (0, 0) ) –

    A tuple representing the shift values for the x and y axes.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _final_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method. Use the prefix inference_ to pass an argument to the inference method.

Returns:

  • VideoNode

    The scaled clip.

Source code in vsscale/onnx.py
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any,
) -> vs.VideoNode:
    """
    Scale the given clip using the ONNX model.

    Args:
        clip: The input clip to be scaled.
        width: The target width for scaling. If None, the width of the input clip will be used.
        height: The target height for scaling. If None, the height of the input clip will be used.
        shift: A tuple representing the shift values for the x and y axes.
        **kwargs: Additional arguments to be passed to the `preprocess_clip`, `postprocess_clip`, `inference`, and
            `_final_scale` methods. Use the prefix `preprocess_` or `postprocess_` to pass an argument to the
            respective method. Use the prefix `inference_` to pass an argument to the inference method.

    Returns:
        The scaled clip.
    """
    from vsmlrt import Backend

    width, height = self._wh_norm(clip, width, height)

    preprocess_kwargs = dict[str, Any]()
    postprocess_kwargs = dict[str, Any]()
    inference_kwargs = dict[str, Any]()

    for k in kwargs.copy():
        for prefix, ckwargs in zip(
            ("preprocess_", "postprocess_", "inference_"), (preprocess_kwargs, postprocess_kwargs, inference_kwargs)
        ):
            if k.startswith(prefix):
                ckwargs[k.removeprefix(prefix)] = kwargs.pop(k)
                break

    debug(f"{self}: Preprocess kwargs: {preprocess_kwargs}")
    debug(f"{self}: Postprocess kwargs: {postprocess_kwargs}")
    debug(f"{self}: Inference kwargs: {inference_kwargs}")

    wclip = self.preprocess_clip(clip, **preprocess_kwargs)

    if 0 not in {clip.width, clip.height}:
        scaled = self.inference(wclip, **inference_kwargs)
    else:
        debug(f"{self}: Variable resolution clip detected!")

        if not isinstance(self.backend, (Backend.TRT, Backend.TRT_RTX)):
            raise CustomValueError(
                "Variable resolution clips can only be processed with TRT Backend!", self.__class__, self.backend
            )

        warning(f"{self.__class__.__name__}: Variable resolution clip detected!")

        if self.backend.static_shape:
            warning("static_shape is True, setting it to False...")
            self.backend.static_shape = False

        if not self.backend.max_shapes:
            warning("max_shapes is None, setting it to (1936, 1088). You may want to adjust it...")
            self.backend.max_shapes = (1936, 1088)

        if not self.backend.opt_shapes:
            warning("opt_shapes is None, setting it to (64, 64). You may want to adjust it...")
            self.backend.opt_shapes = (64, 64)

        scaled = ProcessVariableResClip.from_func(
            wclip, lambda c: self.inference(c, **inference_kwargs), False, wclip.format, self.max_instances
        )

    scaled = self.postprocess_clip(scaled, clip, **postprocess_kwargs)

    return self._finish_scale(scaled, clip, width, height, shift, **kwargs)
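
As the variable-resolution branch above shows, only TRT backends are accepted there, and the dynamic-shape settings can also be supplied up front instead of relying on the warnings (a sketch; the field names mirror the attributes checked above and assume the vsmlrt Backend.TRT interface):

from vsmlrt import Backend
from vsscale import ArtCNN

# pre-configure dynamic shapes for a variable resolution clip
backend = Backend.TRT(static_shape=False, max_shapes=(1936, 1088), opt_shapes=(64, 64))
scaler = ArtCNN(backend)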

supersample

supersample(
    clip: VideoNode,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNode

Supersample a clip by a given scaling factor.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • rfactor

    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Returns:

  • VideoNode

    The supersampled clip.

Source code in vskernels/abstract/base.py
def supersample(
    self, clip: vs.VideoNode, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> vs.VideoNode:
    """
    Supersample a clip by a given scaling factor.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        rfactor: Scaling factor for supersampling.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Raises:
        CustomValueError: If resulting resolution is non-positive.

    Returns:
        The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)

BaseDPIR

BaseDPIR(
    strength: SupportsFloat | VideoNode = 10,
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: BaseOnnxScaler

Initializes the scaler with the specified parameters.

Parameters:

  • strength

    (SupportsFloat | VideoNode, default: 10 ) –

    Threshold (8-bit scale) strength for deblocking/denoising. If a VideoNode is used, it must be in GRAY8, GRAYH, or GRAYS format, with pixel values representing the 8-bit thresholds.

  • backend

    (BackendLike | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

    Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.

Methods:

Attributes:

Source code in vsscale/onnx.py
def __init__(
    self,
    strength: SupportsFloat | vs.VideoNode = 10,
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any,
) -> None:
    """
    Initializes the scaler with the specified parameters.

    Args:
        strength: Threshold (8-bit scale) strength for deblocking/denoising. If a VideoNode is used, it must be in
            GRAY8, GRAYH, or GRAYS format, with pixel values representing the 8-bit thresholds.
        backend: The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will
            be automatically selected, prioritizing fp16 support.
        tiles: Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the
            model's behavior may vary when they are used.
        tilesize: The size of each tile when splitting the image (if tiles are enabled).
        overlap: The size of overlap between tiles.
        kernel: Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
        scaler: Scaler used for scaling operations. Defaults to kernel.
        shifter: Kernel used for shifting operations. Defaults to kernel.
        **kwargs: Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
    """
    from vsmlrt import Backend

    self.strength = strength

    super().__init__(
        None,
        backend,
        tiles,
        tilesize,
        16 if overlap is None else overlap,
        8,
        -1,
        kernel=kernel,
        scaler=scaler,
        shifter=shifter,
        **kwargs,
    )

    if isinstance(self.backend, Backend.TRT) and not self.backend.force_fp16:
        self.backend.custom_args.extend(["--precisionConstraints=obey", "--layerPrecisions=Conv_123:fp32"])
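
Example usage (a minimal sketch; a flat strength is shown, and which concrete DPIR variant you instantiate depends on the model files you have installed):

from vsscale import DPIR

# deblock with a uniform threshold of 10 on the 8-bit scale
deblocked = DPIR(strength=10).scale(clip)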

backend instance-attribute

backend = autoselect_backend(**(default_args | kwargs))

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(resolve())

multiple instance-attribute

multiple = multiple

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

pretty_string property

pretty_string: str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

strength instance-attribute

strength = strength

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]

Reimplementation of vsmlrt.calc_tilesize helper function

Source code in vsscale/onnx.py
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    """
    Reimplementation of vsmlrt.calc_tilesize helper function
    """

    from vsmlrt import calc_tilesize

    kwargs = {
        "tiles": self.tiles,
        "tilesize": self.tilesize,
        "width": clip.width,
        "height": clip.height,
        "multiple": self.multiple,
        "overlap_w": self.overlap_w,
        "overlap_h": self.overlap_h,
    } | kwargs

    return calc_tilesize(**kwargs)

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code in vskernels/abstract/base.py
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code in vskernels/abstract/base.py
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width

    (int | None, default: None ) –

    Target width.

  • height

    (int | None, default: None ) –

    Target height.

  • **kwargs

    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.

Source code in vskernels/abstract/base.py
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    Args:
        clip: The source clip.
        shift: Subpixel shift (top, left).
        width: Target width.
        height: Target height.
        **kwargs: Extra parameters to merge.

    Returns:
        Final dictionary of keyword arguments for the scale function.
    """
    return {"width": width, "height": height, "src_top": shift[0], "src_left": shift[1]} | self.kwargs | kwargs

implemented_funcs classmethod

implemented_funcs() -> frozenset[str]

Returns a set of function names that are implemented in the current class and the parent classes.

These functions determine which keyword arguments will be extracted from the __init__ method.

Returns:

  • frozenset[str]

    Frozen set of function names.

Source code in vskernels/abstract/base.py
@classproperty.cached
@classmethod
def implemented_funcs(cls) -> frozenset[str]:
    """
    Returns a set of function names that are implemented in the current class and the parent classes.

    These functions determine which keyword arguments will be extracted from the __init__ method.

    Returns:
        Frozen set of function names.
    """
    return frozenset(func for klass in cls.mro() for func in getattr(klass, "_implemented_funcs", ()))

inference

inference(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
def inference(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    from vsmlrt import DPIRModel, inference, models_path

    # Normalizing the strength clip
    strength_fmt = clip.format.replace(color_family=vs.GRAY)

    if isinstance(self.strength, vs.VideoNode):
        self.strength = norm_expr(self.strength, "x 255 /", format=strength_fmt, func=self.__class__)
    else:
        self.strength = clip.std.BlankClip(format=strength_fmt.id, color=float(self.strength) / 255, keep=True)

    debug(f"{self}: Passing strength clip format: {self.strength.format!r}")

    # Get model name
    self.model = (
        SPath(models_path) / "dpir" / f"{DPIRModel(self._model[clip.format.color_family != vs.GRAY]).name}.onnx"
    ).to_str()

    # Basic inference args
    tilesize, overlaps = self.calc_tilesize(clip)

    debug(f"{self}: Passing model: {self.model}")
    debug(f"{self}: Passing tiles size: {tilesize}")
    debug(f"{self}: Passing overlaps: {overlaps}")
    debug(f"{self}: Passing extra kwargs: {kwargs}")

    # Padding
    padding = padder.mod_padding(clip, self.multiple, 0)

    if not any(padding) or kwargs.pop("no_pad", False):
        return inference([clip, self.strength], self.model, overlaps, tilesize, self.backend, **kwargs)

    clip = padder.MIRROR(clip, *padding)
    strength = padder.MIRROR(self.strength, *padding)

    return inference([clip, strength], self.model, overlaps, tilesize, self.backend, **kwargs).std.Crop(*padding)
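
The mirror padding above can be skipped via the no_pad escape hatch; routed through scale, that looks like this (a sketch; no_pad is popped by this method):

from vsscale import DPIR

# the "inference_" prefix forwards no_pad=True to the inference method above
out = DPIR(strength=10).scale(clip, inference_no_pad=True)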

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Returns:

  • int

    Kernel radius.

Source code in vskernels/abstract/base.py
@BaseScalerMeta.cachedproperty
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    Raises:
        CustomNotImplementedError: If no kernel radius is defined.

    Returns:
        Kernel radius.
    """
    ...

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode
Source code in vsscale/onnx.py
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    if get_video_format(clip) != get_video_format(input_clip):
        kwargs = (
            dict[str, Any](
                format=input_clip,
                matrix=Matrix.from_video(input_clip, func=self.__class__),
                range=ColorRange.from_video(input_clip, func=self.__class__),
                dither_type=DitherType.ORDERED,
            )
            | kwargs
        )
        clip = self.kernel.resample(clip, **kwargs)

    return clip

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    if get_color_family(clip) == vs.GRAY:
        return super().preprocess_clip(clip, **kwargs)

    clip = self.kernel.resample(clip, self._pick_precision(vs.RGBH, vs.RGBS), Matrix.RGB, **kwargs)

    return limiter(clip, func=self.__class__)

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    *,
    copy_props: bool = True,
    **kwargs: Any
) -> VideoNode
Source code in vsscale/onnx.py
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    *,
    copy_props: bool = True,
    **kwargs: Any,
) -> vs.VideoNode:
    assert check_variable_resolution(clip, self.__class__)

    return super().scale(clip, width, height, shift, copy_props=copy_props, **kwargs)

supersample

supersample(
    clip: VideoNode,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNode

Supersample a clip by a given scaling factor.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • rfactor

    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Returns:

  • VideoNode

    The supersampled clip.

Source code in vskernels/abstract/base.py
def supersample(
    self, clip: vs.VideoNode, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> vs.VideoNode:
    """
    Supersample a clip by a given scaling factor.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        rfactor: Scaling factor for supersampling.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Raises:
        CustomValueError: If resulting resolution is non-positive.

    Returns:
        The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)
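
Example usage, as a minimal sketch grounded in the source above: supersample rounds the target size up with ceil() and delegates to scale, so (assuming a concrete scaler such as ArtCNN and an existing clip) the two calls below are equivalent.

from math import ceil

from vsscale import ArtCNN

scaler = ArtCNN()

# supersample computes ceil(clip.width * rfactor) x ceil(clip.height * rfactor)...
ss = scaler.supersample(clip, rfactor=1.5)

# ...then delegates to scale, so this direct call produces the same result:
direct = scaler.scale(clip, ceil(clip.width * 1.5), ceil(clip.height * 1.5))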

BaseOnnxScaler

BaseOnnxScaler(
    model: SPathLike | None = None,
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    multiple: int = 1,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: BaseGenericScaler, ABC

Abstract generic scaler class for an ONNX model.

Initializes the scaler with the specified parameters.

Parameters:

  • model

    (SPathLike | None, default: None ) –

    Path to the ONNX model file.

  • backend

    (BackendLike | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

    Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • multiple

    (int, default: 1 ) –

    Multiple of the tiles.

  • max_instances

    (int, default: 2 ) –

    Maximum instances to spawn when scaling a variable resolution clip.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.

Methods:

  • calc_tilesize

    Reimplementation of the vsmlrt.calc_tilesize helper function.

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args

    Generate the keyword arguments used for scaling.

  • implemented_funcs

    Returns a set of function names that are implemented in the current class and the parent classes.

  • inference

    Runs inference on the given video clip using the configured model and backend.

  • kernel_radius

    Return the effective kernel radius for the scaler.

  • postprocess_clip

    Handles postprocessing of the model's output after inference.

  • preprocess_clip

    Performs preprocessing on the clip prior to inference.

  • scale

    Scale the given clip using the ONNX model.

  • supersample

    Supersample a clip by a given scaling factor.

Attributes:

Source code in vsscale/onnx.py
def __init__(
    self,
    model: SPathLike | None = None,
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    multiple: int = 1,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any,
) -> None:
    """
    Initializes the scaler with the specified parameters.

    Args:
        model: Path to the ONNX model file.
        backend: The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will
            be automatically selected, prioritizing fp16 support.
        tiles: Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the
            model's behavior may vary when they are used.
        tilesize: The size of each tile when splitting the image (if tiles are enabled).
        overlap: The size of overlap between tiles.
        multiple: Multiple of the tiles.
        max_instances: Maximum instances to spawn when scaling a variable resolution clip.
        kernel: Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
        scaler: Scaler used for scaling operations. Defaults to kernel.
        shifter: Kernel used for shifting operations. Defaults to kernel.
        **kwargs: Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
    """
    super().__init__(kernel=kernel, scaler=scaler, shifter=shifter, **kwargs)

    if model is not None:
        self.model = str(SPath(model).resolve())

    fp16 = self.kwargs.pop("fp16", True)
    default_args = {"fp16": fp16, "output_format": int(fp16), "use_cuda_graph": True, "use_cublas": True}

    from vsmlrt import backendT

    if backend is None:
        self.backend = autoselect_backend(**default_args | self.kwargs)
    elif isinstance(backend, type):
        self.backend = backend(**_clean_keywords(default_args | self.kwargs, backend))
    elif isinstance(backend, str):
        backends_map = {b.__name__.lower(): b for b in get_args(backendT)}

        try:
            backend_t = backends_map[backend.lower().strip()]
        except KeyError:
            raise CustomValueError("Unknown backend!", self.__class__, backend)

        self.backend = backend_t(**_clean_keywords(default_args | self.kwargs, backend_t))
    else:
        self.backend = replace(backend, **_clean_keywords(self.kwargs, backend))

    _check_vsmlrt_plugin_version(self.backend.__class__.__name__, self.__class__)

    self.tiles = tiles
    self.tilesize = tilesize
    self.overlap = overlap
    self.multiple = multiple

    if self.overlap is None:
        self.overlap_w = self.overlap_h = 8
    elif isinstance(self.overlap, int):
        self.overlap_w = self.overlap_h = self.overlap
    else:
        self.overlap_w, self.overlap_h = self.overlap

    self.max_instances = max_instances

    if getLogger().level <= DEBUG:
        debug(f"{self}: Using {self.backend.__class__.__name__} backend")

        valid_fields = _get_backend_fields(self.backend)

        for k, v in asdict(self.backend).items():
            debug(f"{self}: {k}={v}, default is {valid_fields[k].default}")

        debug(f"{self}: User tiles: {self.tiles}")
        debug(f"{self}: User tilesize: {self.tilesize}")
        debug(f"{self}: User overlap: {(self.overlap_w, self.overlap_h)}")
        debug(f"{self}: User multiple: {self.multiple}")

backend instance-attribute

backend = autoselect_backend(**(default_args | kwargs))

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(SPath(model).resolve())

multiple instance-attribute

multiple = multiple

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

pretty_string property

pretty_string: str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]

Reimplementation of the vsmlrt.calc_tilesize helper function.

Source code in vsscale/onnx.py
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    """
    Reimplementation of the vsmlrt.calc_tilesize helper function.
    """

    from vsmlrt import calc_tilesize

    kwargs = {
        "tiles": self.tiles,
        "tilesize": self.tilesize,
        "width": clip.width,
        "height": clip.height,
        "multiple": self.multiple,
        "overlap_w": self.overlap_w,
        "overlap_h": self.overlap_h,
    } | kwargs

    return calc_tilesize(**kwargs)
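
As a quick sketch (assuming a scaler instance such as those constructed above; the names are illustrative only): the helper returns the computed per-tile size followed by the overlaps actually used, in the same order inference unpacks them.

tilesize, overlaps = scaler.calc_tilesize(clip)
# tilesize -> (tile_width, tile_height); overlaps -> (overlap_width, overlap_height)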

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExcept | None, default: None ) –

    Function to use for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code in vskernels/abstract/base.py
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function to use for custom error handling.

    Returns:
        Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExcept | None, default: None ) –

    Function to use for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code in vskernels/abstract/base.py
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function to use for custom error handling.

    Returns:
        Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width

    (int | None, default: None ) –

    Target width.

  • height

    (int | None, default: None ) –

    Target height.

  • **kwargs

    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.

Source code in vskernels/abstract/base.py
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    Args:
        clip: The source clip.
        shift: Subpixel shift (top, left).
        width: Target width.
        height: Target height.
        **kwargs: Extra parameters to merge.

    Returns:
        Final dictionary of keyword arguments for the scale function.
    """
    return {"width": width, "height": height, "src_top": shift[0], "src_left": shift[1]} | self.kwargs | kwargs

implemented_funcs classmethod

implemented_funcs() -> frozenset[str]

Returns a set of function names that are implemented in the current class and the parent classes.

These functions determine which keyword arguments will be extracted from the __init__ method.

Returns:

  • frozenset[str]

    Frozen set of function names.

Source code in vskernels/abstract/base.py
@classproperty.cached
@classmethod
def implemented_funcs(cls) -> frozenset[str]:
    """
    Returns a set of function names that are implemented in the current class and the parent classes.

    These functions determine which keyword arguments will be extracted from the __init__ method.

    Returns:
        Frozen set of function names.
    """
    return frozenset(func for klass in cls.mro() for func in getattr(klass, "_implemented_funcs", ()))

inference

inference(clip: VideoNode, **kwargs: Any) -> VideoNode

Runs inference on the given video clip using the configured model and backend.

Source code in vsscale/onnx.py
def inference(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    """
    Runs inference on the given video clip using the configured model and backend.
    """

    from vsmlrt import inference

    tilesize, overlaps = self.calc_tilesize(clip)

    debug(f"{self}: Passing clip to inference: {clip.format!r}")
    debug(f"{self}: Passing model: {self.model}")
    debug(f"{self}: Passing tiles size: {tilesize}")
    debug(f"{self}: Passing overlaps: {overlaps}")
    debug(f"{self}: Passing extra kwargs: {kwargs}")

    return inference(clip, self.model, overlaps, tilesize, self.backend, **kwargs)

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Returns:

  • int

    Kernel radius.

Source code in vskernels/abstract/base.py
@BaseScalerMeta.cachedproperty
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    Raises:
        CustomNotImplementedError: If no kernel radius is defined.

    Returns:
        Kernel radius.
    """
    ...

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode

Handles postprocessing of the model's output after inference.

Source code in vsscale/onnx.py
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    """
    Handles postprocessing of the model's output after inference.
    """
    debug(f"{self}.post: Before pp; Clip format is {clip.format!r}")

    clip = depth(
        clip,
        input_clip,
        dither_type=DitherType.ORDERED if 0 in {clip.width, clip.height} else DitherType.AUTO,
        **kwargs,
    )

    debug(f"{self}.post: After pp; Clip format is {clip.format!r}")

    return clip

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode

Performs preprocessing on the clip prior to inference.

Source code in vsscale/onnx.py
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    """
    Performs preprocessing on the clip prior to inference.
    """
    debug(f"{self}.pre: Before pp; Clip format is {clip.format!r}")

    clip = depth(clip, self._pick_precision(16, 32), vs.FLOAT, **kwargs)

    debug(f"{self}.pre: After pp; Clip format is {clip.format!r}")

    return limiter(clip, func=self.__class__)

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> VideoNode

Scale the given clip using the ONNX model.

Parameters:

  • clip

    (VideoNode) –

    The input clip to be scaled.

  • width

    (int | None, default: None ) –

    The target width for scaling. If None, the width of the input clip will be used.

  • height

    (int | None, default: None ) –

    The target height for scaling. If None, the height of the input clip will be used.

  • shift

    (tuple[float, float], default: (0, 0) ) –

    A tuple representing the shift values for the x and y axes.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _finish_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method. Use the prefix inference_ to pass an argument to the inference method.

Returns:

  • VideoNode

    The scaled clip.

Source code in vsscale/onnx.py
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any,
) -> vs.VideoNode:
    """
    Scale the given clip using the ONNX model.

    Args:
        clip: The input clip to be scaled.
        width: The target width for scaling. If None, the width of the input clip will be used.
        height: The target height for scaling. If None, the height of the input clip will be used.
        shift: A tuple representing the shift values for the x and y axes.
        **kwargs: Additional arguments to be passed to the `preprocess_clip`, `postprocess_clip`, `inference`, and
            `_finish_scale` methods. Use the prefix `preprocess_` or `postprocess_` to pass an argument to the
            respective method. Use the prefix `inference_` to pass an argument to the inference method.

    Returns:
        The scaled clip.
    """
    from vsmlrt import Backend

    width, height = self._wh_norm(clip, width, height)

    preprocess_kwargs = dict[str, Any]()
    postprocess_kwargs = dict[str, Any]()
    inference_kwargs = dict[str, Any]()

    for k in kwargs.copy():
        for prefix, ckwargs in zip(
            ("preprocess_", "postprocess_", "inference_"), (preprocess_kwargs, postprocess_kwargs, inference_kwargs)
        ):
            if k.startswith(prefix):
                ckwargs[k.removeprefix(prefix)] = kwargs.pop(k)
                break

    debug(f"{self}: Preprocess kwargs: {preprocess_kwargs}")
    debug(f"{self}: Postprocess kwargs: {postprocess_kwargs}")
    debug(f"{self}: Inference kwargs: {inference_kwargs}")

    wclip = self.preprocess_clip(clip, **preprocess_kwargs)

    if 0 not in {clip.width, clip.height}:
        scaled = self.inference(wclip, **inference_kwargs)
    else:
        debug(f"{self}: Variable resolution clip detected!")

        if not isinstance(self.backend, (Backend.TRT, Backend.TRT_RTX)):
            raise CustomValueError(
                "Variable resolution clips can only be processed with TRT Backend!", self.__class__, self.backend
            )

        warning(f"{self.__class__.__name__}: Variable resolution clip detected!")

        if self.backend.static_shape:
            warning("static_shape is True, setting it to False...")
            self.backend.static_shape = False

        if not self.backend.max_shapes:
            warning("max_shapes is None, setting it to (1936, 1088). You may want to adjust it...")
            self.backend.max_shapes = (1936, 1088)

        if not self.backend.opt_shapes:
            warning("opt_shapes is None, setting it to (64, 64). You may want to adjust it...")
            self.backend.opt_shapes = (64, 64)

        scaled = ProcessVariableResClip.from_func(
            wclip, lambda c: self.inference(c, **inference_kwargs), False, wclip.format, self.max_instances
        )

    scaled = self.postprocess_clip(scaled, clip, **postprocess_kwargs)

    return self._finish_scale(scaled, clip, width, height, shift, **kwargs)
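
Example usage, as a minimal sketch of the prefix routing described above (dither_type is used purely as an illustrative argument; preprocess_clip forwards its kwargs to an internal depth() call, which accepts it):

from vstools import DitherType

upscaled = scaler.scale(
    clip,
    clip.width * 2,
    clip.height * 2,
    # "preprocess_" is stripped and dither_type is forwarded to preprocess_clip:
    preprocess_dither_type=DitherType.ERROR_DIFFUSION,
)
# Arguments prefixed with postprocess_ or inference_ are routed to
# postprocess_clip and inference in the same way.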

supersample

supersample(
    clip: VideoNode,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNode

Supersample a clip by a given scaling factor.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • rfactor

    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Returns:

  • VideoNode

    The supersampled clip.

Source code in vskernels/abstract/base.py
def supersample(
    self, clip: vs.VideoNode, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> vs.VideoNode:
    """
    Supersample a clip by a given scaling factor.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        rfactor: Scaling factor for supersampling.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Raises:
        CustomValueError: If resulting resolution is non-positive.

    Returns:
        The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)

BaseOnnxScalerRGB

BaseOnnxScalerRGB(
    model: SPathLike | None = None,
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    multiple: int = 1,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: BaseOnnxScaler

Abstract ONNX class for RGB models.

Initializes the scaler with the specified parameters.

Parameters:

  • model

    (SPathLike | None, default: None ) –

    Path to the ONNX model file.

  • backend

    (BackendLike | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

    Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • multiple

    (int, default: 1 ) –

    Multiple of the tiles.

  • max_instances

    (int, default: 2 ) –

    Maximum instances to spawn when scaling a variable resolution clip.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.

Methods:

  • calc_tilesize

    Reimplementation of the vsmlrt.calc_tilesize helper function.

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args

    Generate the keyword arguments used for scaling.

  • implemented_funcs

    Returns a set of function names that are implemented in the current class and the parent classes.

  • inference

    Runs inference on the given video clip using the configured model and backend.

  • kernel_radius

    Return the effective kernel radius for the scaler.

  • postprocess_clip
  • preprocess_clip
  • scale

    Scale the given clip using the ONNX model.

  • supersample

    Supersample a clip by a given scaling factor.

Attributes:

Source code in vsscale/onnx.py
def __init__(
    self,
    model: SPathLike | None = None,
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    multiple: int = 1,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any,
) -> None:
    """
    Initializes the scaler with the specified parameters.

    Args:
        model: Path to the ONNX model file.
        backend: The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will
            be automatically selected, prioritizing fp16 support.
        tiles: Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the
            model's behavior may vary when they are used.
        tilesize: The size of each tile when splitting the image (if tiles are enabled).
        overlap: The size of overlap between tiles.
        multiple: Multiple of the tiles.
        max_instances: Maximum instances to spawn when scaling a variable resolution clip.
        kernel: Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
        scaler: Scaler used for scaling operations. Defaults to kernel.
        shifter: Kernel used for shifting operations. Defaults to kernel.
        **kwargs: Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
    """
    super().__init__(kernel=kernel, scaler=scaler, shifter=shifter, **kwargs)

    if model is not None:
        self.model = str(SPath(model).resolve())

    fp16 = self.kwargs.pop("fp16", True)
    default_args = {"fp16": fp16, "output_format": int(fp16), "use_cuda_graph": True, "use_cublas": True}

    from vsmlrt import backendT

    if backend is None:
        self.backend = autoselect_backend(**default_args | self.kwargs)
    elif isinstance(backend, type):
        self.backend = backend(**_clean_keywords(default_args | self.kwargs, backend))
    elif isinstance(backend, str):
        backends_map = {b.__name__.lower(): b for b in get_args(backendT)}

        try:
            backend_t = backends_map[backend.lower().strip()]
        except KeyError:
            raise CustomValueError("Unknown backend!", self.__class__, backend)

        self.backend = backend_t(**_clean_keywords(default_args | self.kwargs, backend_t))
    else:
        self.backend = replace(backend, **_clean_keywords(self.kwargs, backend))

    _check_vsmlrt_plugin_version(self.backend.__class__.__name__, self.__class__)

    self.tiles = tiles
    self.tilesize = tilesize
    self.overlap = overlap
    self.multiple = multiple

    if self.overlap is None:
        self.overlap_w = self.overlap_h = 8
    elif isinstance(self.overlap, int):
        self.overlap_w = self.overlap_h = self.overlap
    else:
        self.overlap_w, self.overlap_h = self.overlap

    self.max_instances = max_instances

    if getLogger().level <= DEBUG:
        debug(f"{self}: Using {self.backend.__class__.__name__} backend")

        valid_fields = _get_backend_fields(self.backend)

        for k, v in asdict(self.backend).items():
            debug(f"{self}: {k}={v}, default is {valid_fields[k].default}")

        debug(f"{self}: User tiles: {self.tiles}")
        debug(f"{self}: User tilesize: {self.tilesize}")
        debug(f"{self}: User overlap: {(self.overlap_w, self.overlap_h)}")
        debug(f"{self}: User multiple: {self.multiple}")

backend instance-attribute

backend = autoselect_backend(**(default_args | kwargs))

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(SPath(model).resolve())

multiple instance-attribute

multiple = multiple

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

pretty_string property

pretty_string: str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]

Reimplementation of the vsmlrt.calc_tilesize helper function.

Source code in vsscale/onnx.py
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    """
    Reimplementation of the vsmlrt.calc_tilesize helper function.
    """

    from vsmlrt import calc_tilesize

    kwargs = {
        "tiles": self.tiles,
        "tilesize": self.tilesize,
        "width": clip.width,
        "height": clip.height,
        "multiple": self.multiple,
        "overlap_w": self.overlap_w,
        "overlap_h": self.overlap_h,
    } | kwargs

    return calc_tilesize(**kwargs)

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExcept | None, default: None ) –

    Function to use for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code in vskernels/abstract/base.py
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function to use for custom error handling.

    Returns:
        Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExcept | None, default: None ) –

    Function to use for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code in vskernels/abstract/base.py
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function to use for custom error handling.

    Returns:
        Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width

    (int | None, default: None ) –

    Target width.

  • height

    (int | None, default: None ) –

    Target height.

  • **kwargs

    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.

Source code in vskernels/abstract/base.py
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    Args:
        clip: The source clip.
        shift: Subpixel shift (top, left).
        width: Target width.
        height: Target height.
        **kwargs: Extra parameters to merge.

    Returns:
        Final dictionary of keyword arguments for the scale function.
    """
    return {"width": width, "height": height, "src_top": shift[0], "src_left": shift[1]} | self.kwargs | kwargs

implemented_funcs classmethod

implemented_funcs() -> frozenset[str]

Returns a set of function names that are implemented in the current class and the parent classes.

These functions determine which keyword arguments will be extracted from the __init__ method.

Returns:

  • frozenset[str]

    Frozen set of function names.

Source code in vskernels/abstract/base.py
@classproperty.cached
@classmethod
def implemented_funcs(cls) -> frozenset[str]:
    """
    Returns a set of function names that are implemented in the current class and the parent classes.

    These functions determine which keyword arguments will be extracted from the __init__ method.

    Returns:
        Frozen set of function names.
    """
    return frozenset(func for klass in cls.mro() for func in getattr(klass, "_implemented_funcs", ()))

inference

inference(clip: VideoNode, **kwargs: Any) -> VideoNode

Runs inference on the given video clip using the configured model and backend.

Source code in vsscale/onnx.py
def inference(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    """
    Runs inference on the given video clip using the configured model and backend.
    """

    from vsmlrt import inference

    tilesize, overlaps = self.calc_tilesize(clip)

    debug(f"{self}: Passing clip to inference: {clip.format!r}")
    debug(f"{self}: Passing model: {self.model}")
    debug(f"{self}: Passing tiles size: {tilesize}")
    debug(f"{self}: Passing overlaps: {overlaps}")
    debug(f"{self}: Passing extra kwargs: {kwargs}")

    return inference(clip, self.model, overlaps, tilesize, self.backend, **kwargs)

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Returns:

  • int

    Kernel radius.

Source code in vskernels/abstract/base.py
@BaseScalerMeta.cachedproperty
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    Raises:
        CustomNotImplementedError: If no kernel radius is defined.

    Returns:
        Kernel radius.
    """
    ...

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode
Source code in vsscale/onnx.py
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    if get_video_format(clip) != get_video_format(input_clip):
        kwargs = (
            dict[str, Any](
                format=input_clip,
                matrix=Matrix.from_video(input_clip, func=self.__class__),
                range=ColorRange.from_video(input_clip, func=self.__class__),
                dither_type=DitherType.ORDERED,
            )
            | kwargs
        )
        clip = self.kernel.resample(clip, **kwargs)

    return clip

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    clip = self.kernel.resample(clip, self._pick_precision(vs.RGBH, vs.RGBS), Matrix.RGB, **kwargs)
    return limiter(clip, func=self.__class__)

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> VideoNode

Scale the given clip using the ONNX model.

Parameters:

  • clip

    (VideoNode) –

    The input clip to be scaled.

  • width

    (int | None, default: None ) –

    The target width for scaling. If None, the width of the input clip will be used.

  • height

    (int | None, default: None ) –

    The target height for scaling. If None, the height of the input clip will be used.

  • shift

    (tuple[float, float], default: (0, 0) ) –

    A tuple representing the shift values for the x and y axes.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _finish_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method. Use the prefix inference_ to pass an argument to the inference method.

Returns:

  • VideoNode

    The scaled clip.

Source code in vsscale/onnx.py
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any,
) -> vs.VideoNode:
    """
    Scale the given clip using the ONNX model.

    Args:
        clip: The input clip to be scaled.
        width: The target width for scaling. If None, the width of the input clip will be used.
        height: The target height for scaling. If None, the height of the input clip will be used.
        shift: A tuple representing the shift values for the x and y axes.
        **kwargs: Additional arguments to be passed to the `preprocess_clip`, `postprocess_clip`, `inference`, and
            `_finish_scale` methods. Use the prefix `preprocess_` or `postprocess_` to pass an argument to the
            respective method. Use the prefix `inference_` to pass an argument to the inference method.

    Returns:
        The scaled clip.
    """
    from vsmlrt import Backend

    width, height = self._wh_norm(clip, width, height)

    preprocess_kwargs = dict[str, Any]()
    postprocess_kwargs = dict[str, Any]()
    inference_kwargs = dict[str, Any]()

    for k in kwargs.copy():
        for prefix, ckwargs in zip(
            ("preprocess_", "postprocess_", "inference_"), (preprocess_kwargs, postprocess_kwargs, inference_kwargs)
        ):
            if k.startswith(prefix):
                ckwargs[k.removeprefix(prefix)] = kwargs.pop(k)
                break

    debug(f"{self}: Preprocess kwargs: {preprocess_kwargs}")
    debug(f"{self}: Postprocess kwargs: {postprocess_kwargs}")
    debug(f"{self}: Inference kwargs: {inference_kwargs}")

    wclip = self.preprocess_clip(clip, **preprocess_kwargs)

    if 0 not in {clip.width, clip.height}:
        scaled = self.inference(wclip, **inference_kwargs)
    else:
        debug(f"{self}: Variable resolution clip detected!")

        if not isinstance(self.backend, (Backend.TRT, Backend.TRT_RTX)):
            raise CustomValueError(
                "Variable resolution clips can only be processed with TRT Backend!", self.__class__, self.backend
            )

        warning(f"{self.__class__.__name__}: Variable resolution clip detected!")

        if self.backend.static_shape:
            warning("static_shape is True, setting it to False...")
            self.backend.static_shape = False

        if not self.backend.max_shapes:
            warning("max_shapes is None, setting it to (1936, 1088). You may want to adjust it...")
            self.backend.max_shapes = (1936, 1088)

        if not self.backend.opt_shapes:
            warning("opt_shapes is None, setting it to (64, 64). You may want to adjust it...")
            self.backend.opt_shapes = (64, 64)

        scaled = ProcessVariableResClip.from_func(
            wclip, lambda c: self.inference(c, **inference_kwargs), False, wclip.format, self.max_instances
        )

    scaled = self.postprocess_clip(scaled, clip, **postprocess_kwargs)

    return self._finish_scale(scaled, clip, width, height, shift, **kwargs)

supersample

supersample(
    clip: VideoNode,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNode

Supersample a clip by a given scaling factor.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • rfactor

    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Returns:

  • VideoNode

    The supersampled clip.

Source code in vskernels/abstract/base.py
def supersample(
    self, clip: vs.VideoNode, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> vs.VideoNode:
    """
    Supersample a clip by a given scaling factor.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        rfactor: Scaling factor for supersampling.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Raises:
        CustomValueError: If resulting resolution is non-positive.

    Returns:
        The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)

BaseWaifu2x

BaseWaifu2x(
    scale: Literal[1, 2, 4] = 2,
    noise: Literal[-1, 0, 1, 2, 3] = -1,
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: BaseOnnxScaler

Initializes the scaler with the specified parameters.

Parameters:

  • scale

    (Literal[1, 2, 4], default: 2 ) –

    Upscaling factor. 1 = no upscaling, 2 = 2x, 4 = 4x.

  • noise

    (Literal[-1, 0, 1, 2, 3], default: -1 ) –

    Noise reduction level. -1 = none, 0 = low, 1 = medium, 2 = high, 3 = highest.

  • backend

    (BackendLike | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

    Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • max_instances

    (int, default: 2 ) –

    Maximum instances to spawn when scaling a variable resolution clip.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.

Methods:

  • calc_tilesize

    Reimplementation of the vsmlrt.calc_tilesize helper function.

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args

    Generate the keyword arguments used for scaling.

  • implemented_funcs

    Returns a set of function names that are implemented in the current class and the parent classes.

  • inference
  • kernel_radius

    Return the effective kernel radius for the scaler.

  • postprocess_clip

    Handles postprocessing of the model's output after inference.

  • preprocess_clip

    Performs preprocessing on the clip prior to inference.

  • scale

    Scale the given clip using the ONNX model.

  • supersample

    Supersample a clip by a given scaling factor.

Attributes:

Source code in vsscale/onnx.py
def __init__(
    self,
    scale: Literal[1, 2, 4] = 2,
    noise: Literal[-1, 0, 1, 2, 3] = -1,
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any,
) -> None:
    """
    Initializes the scaler with the specified parameters.

    Args:
        scale: Upscaling factor. 1 = no upscaling, 2 = 2x, 4 = 4x.
        noise: Noise reduction level. -1 = none, 0 = low, 1 = medium, 2 = high, 3 = highest.
        backend: The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will
            be automatically selected, prioritizing fp16 support.
        tiles: Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the
            model's behavior may vary when they are used.
        tilesize: The size of each tile when splitting the image (if tiles are enabled).
        overlap: The size of overlap between tiles.
        max_instances: Maximum instances to spawn when scaling a variable resolution clip.
        kernel: Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
        scaler: Scaler used for scaling operations. Defaults to kernel.
        shifter: Kernel used for shifting operations. Defaults to kernel.
        **kwargs: Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
    """
    self.scale_w2x = scale
    self.noise = noise
    super().__init__(
        None,
        backend,
        tiles,
        tilesize,
        overlap,
        1,
        max_instances,
        kernel=kernel,
        scaler=scaler,
        shifter=shifter,
        **kwargs,
    )
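
Example usage, as a brief sketch assuming an existing clip:

from vsscale import Waifu2x

# 2x upscale with light noise reduction; the backend is auto-selected.
w2x = Waifu2x(scale=2, noise=0)

upscaled = w2x.scale(clip, clip.width * 2, clip.height * 2)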

backend instance-attribute

backend = autoselect_backend(**(default_args | kwargs))

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(SPath(model).resolve())

multiple instance-attribute

multiple = multiple

noise instance-attribute

noise: Literal[-1, 0, 1, 2, 3] = noise

Noise reduction level

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

pretty_string property

pretty_string: str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scale_w2x instance-attribute

scale_w2x: Literal[1, 2, 4] = scale

Upscaling factor.

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]

Reimplementation of the vsmlrt.calc_tilesize helper function.

Source code in vsscale/onnx.py
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    """
    Reimplementation of the vsmlrt.calc_tilesize helper function.
    """

    from vsmlrt import calc_tilesize

    kwargs = {
        "tiles": self.tiles,
        "tilesize": self.tilesize,
        "width": clip.width,
        "height": clip.height,
        "multiple": self.multiple,
        "overlap_w": self.overlap_w,
        "overlap_h": self.overlap_h,
    } | kwargs

    return calc_tilesize(**kwargs)

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExcept | None, default: None ) –

    Function to use for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code in vskernels/abstract/base.py
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function to use for custom error handling.

    Returns:
        Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code in vskernels/abstract/base.py
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function to use for custom error handling.

    Returns:
        Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width

    (int | None, default: None ) –

    Target width.

  • height

    (int | None, default: None ) –

    Target height.

  • **kwargs

    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.

Source code in vskernels/abstract/base.py
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    Args:
        clip: The source clip.
        shift: Subpixel shift (top, left).
        width: Target width.
        height: Target height.
        **kwargs: Extra parameters to merge.

    Returns:
        Final dictionary of keyword arguments for the scale function.
    """
    return {"width": width, "height": height, "src_top": shift[0], "src_left": shift[1]} | self.kwargs | kwargs

implemented_funcs classmethod

implemented_funcs() -> frozenset[str]

Returns a set of function names that are implemented in the current class and the parent classes.

These functions determine which keyword arguments will be extracted from the __init__ method.

Returns:

  • frozenset[str]

    Frozen set of function names.

Source code in vskernels/abstract/base.py
@classproperty.cached
@classmethod
def implemented_funcs(cls) -> frozenset[str]:
    """
    Returns a set of function names that are implemented in the current class and the parent classes.

    These functions determine which keyword arguments will be extracted from the __init__ method.

    Returns:
        Frozen set of function names.
    """
    return frozenset(func for klass in cls.mro() for func in getattr(klass, "_implemented_funcs", ()))

inference

inference(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
def inference(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    from vsmlrt import Waifu2x as mlrt_Waifu2x
    from vsmlrt import Waifu2xModel

    return mlrt_Waifu2x(
        clip,
        self.noise,
        self.scale_w2x,
        self.tiles,
        self.tilesize,
        self.overlap,
        Waifu2xModel(self._model),
        self.backend,
        **kwargs,
    )

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Returns:

  • int

    Kernel radius.

Source code in vskernels/abstract/base.py
@BaseScalerMeta.cachedproperty
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    Raises:
        CustomNotImplementedError: If no kernel radius is defined.

    Returns:
        Kernel radius.
    """
    ...

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode

Handles postprocessing of the model's output after inference.

Source code in vsscale/onnx.py
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    """
    Handles postprocessing of the model's output after inference.
    """
    debug(f"{self}.post: Before pp; Clip format is {clip.format!r}")

    clip = depth(
        clip,
        input_clip,
        dither_type=DitherType.ORDERED if 0 in {clip.width, clip.height} else DitherType.AUTO,
        **kwargs,
    )

    debug(f"{self}.post: After pp; Clip format is {clip.format!r}")

    return clip

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode

Performs preprocessing on the clip prior to inference.

Source code in vsscale/onnx.py
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    """
    Performs preprocessing on the clip prior to inference.
    """
    debug(f"{self}.pre: Before pp; Clip format is {clip.format!r}")

    clip = depth(clip, self._pick_precision(16, 32), vs.FLOAT, **kwargs)

    debug(f"{self}.pre: After pp; Clip format is {clip.format!r}")

    return limiter(clip, func=self.__class__)

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> VideoNode

Scale the given clip using the ONNX model.

Parameters:

  • clip

    (VideoNode) –

    The input clip to be scaled.

  • width

    (int | None, default: None ) –

    The target width for scaling. If None, the width of the input clip will be used.

  • height

    (int | None, default: None ) –

    The target height for scaling. If None, the height of the input clip will be used.

  • shift

    (tuple[float, float], default: (0, 0) ) –

    A tuple representing the shift values for the x and y axes.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _finish_scale methods. Use the prefix preprocess_ or postprocess_ to route an argument to the respective method, and the prefix inference_ to route an argument to the inference method; any unprefixed arguments are forwarded to _finish_scale.

Returns:

  • VideoNode

    The scaled clip.

Source code in vsscale/onnx.py
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any,
) -> vs.VideoNode:
    """
    Scale the given clip using the ONNX model.

    Args:
        clip: The input clip to be scaled.
        width: The target width for scaling. If None, the width of the input clip will be used.
        height: The target height for scaling. If None, the height of the input clip will be used.
        shift: A tuple representing the shift values for the x and y axes.
        **kwargs: Additional arguments to be passed to the `preprocess_clip`, `postprocess_clip`, `inference`, and
            `_finish_scale` methods. Use the prefix `preprocess_` or `postprocess_` to pass an argument to the
            respective method. Use the prefix `inference_` to pass an argument to the inference method.

    Returns:
        The scaled clip.
    """
    from vsmlrt import Backend

    width, height = self._wh_norm(clip, width, height)

    preprocess_kwargs = dict[str, Any]()
    postprocess_kwargs = dict[str, Any]()
    inference_kwargs = dict[str, Any]()

    for k in kwargs.copy():
        for prefix, ckwargs in zip(
            ("preprocess_", "postprocess_", "inference_"), (preprocess_kwargs, postprocess_kwargs, inference_kwargs)
        ):
            if k.startswith(prefix):
                ckwargs[k.removeprefix(prefix)] = kwargs.pop(k)
                break

    debug(f"{self}: Preprocess kwargs: {preprocess_kwargs}")
    debug(f"{self}: Postprocess kwargs: {postprocess_kwargs}")
    debug(f"{self}: Inference kwargs: {inference_kwargs}")

    wclip = self.preprocess_clip(clip, **preprocess_kwargs)

    if 0 not in {clip.width, clip.height}:
        scaled = self.inference(wclip, **inference_kwargs)
    else:
        debug(f"{self}: Variable resolution clip detected!")

        if not isinstance(self.backend, (Backend.TRT, Backend.TRT_RTX)):
            raise CustomValueError(
                "Variable resolution clips can only be processed with TRT Backend!", self.__class__, self.backend
            )

        warning(f"{self.__class__.__name__}: Variable resolution clip detected!")

        if self.backend.static_shape:
            warning("static_shape is True, setting it to False...")
            self.backend.static_shape = False

        if not self.backend.max_shapes:
            warning("max_shapes is None, setting it to (1936, 1088). You may want to adjust it...")
            self.backend.max_shapes = (1936, 1088)

        if not self.backend.opt_shapes:
            warning("opt_shapes is None, setting it to (64, 64). You may want to adjust it...")
            self.backend.opt_shapes = (64, 64)

        scaled = ProcessVariableResClip.from_func(
            wclip, lambda c: self.inference(c, **inference_kwargs), False, wclip.format, self.max_instances
        )

    scaled = self.postprocess_clip(scaled, clip, **postprocess_kwargs)

    return self._finish_scale(scaled, clip, width, height, shift, **kwargs)
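
A minimal sketch of the prefix routing, assuming scaler is an already-constructed ONNX scaler and foo, bar, and baz are hypothetical parameter names:

# Prefixes are stripped before the arguments are forwarded:
#   preprocess_foo=1 -> preprocess_clip(clip, foo=1)
#   inference_bar=2  -> inference(clip, bar=2)
#   baz=3            -> left in kwargs and forwarded to the final scaling step
scaled = scaler.scale(clip, 2560, 1440, preprocess_foo=1, inference_bar=2, baz=3)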

supersample

supersample(
    clip: VideoNode,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNode

Supersample a clip by a given scaling factor.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • rfactor

    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Returns:

  • VideoNode

    The supersampled clip.

Source code in vskernels/abstract/base.py
def supersample(
    self, clip: vs.VideoNode, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> vs.VideoNode:
    """
    Supersample a clip by a given scaling factor.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        rfactor: Scaling factor for supersampling.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Raises:
        CustomValueError: If resulting resolution is non-positive.

    Returns:
        The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)
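
For example, with any scaler instance, rfactor=1.5 on a 1280x720 clip yields ceil(1280 * 1.5) x ceil(720 * 1.5) = 1920x1080:

supersampled = scaler.supersample(clip, rfactor=1.5)  # 1280x720 -> 1920x1080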

DPIR

DPIR(
    strength: SupportsFloat | VideoNode = 10,
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: BaseDPIR

Deep Plug-and-Play Image Restoration

Initializes the scaler with the specified parameters.

Parameters:

  • strength

    (SupportsFloat | VideoNode, default: 10 ) –

    Threshold (8-bit scale) strength for deblocking/denoising. If a VideoNode is used, it must be in GRAY8, GRAYH, or GRAYS format, with pixel values representing the 8-bit thresholds.

  • backend

    (BackendLike | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

    Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.

Source code in vsscale/onnx.py
def __init__(
    self,
    strength: SupportsFloat | vs.VideoNode = 10,
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any,
) -> None:
    """
    Initializes the scaler with the specified parameters.

    Args:
        strength: Threshold (8-bit scale) strength for deblocking/denoising. If a VideoNode is used, it must be in
            GRAY8, GRAYH, or GRAYS format, with pixel values representing the 8-bit thresholds.
        backend: The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will
            be automatically selected, prioritizing fp16 support.
        tiles: Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the
            model's behavior may vary when they are used.
        tilesize: The size of each tile when splitting the image (if tiles are enabled).
        overlap: The size of overlap between tiles.
        kernel: Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
        scaler: Scaler used for scaling operations. Defaults to kernel.
        shifter: Kernel used for shifting operations. Defaults to kernel.
        **kwargs: Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
    """
    from vsmlrt import Backend

    self.strength = strength

    super().__init__(
        None,
        backend,
        tiles,
        tilesize,
        16 if overlap is None else overlap,
        8,
        -1,
        kernel=kernel,
        scaler=scaler,
        shifter=shifter,
        **kwargs,
    )

    if isinstance(self.backend, Backend.TRT) and not self.backend.force_fp16:
        self.backend.custom_args.extend(["--precisionConstraints=obey", "--layerPrecisions=Conv_123:fp32"])
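
Example usage (a sketch; clip is an existing VideoNode and the constant GRAY8 mask is purely illustrative):

import vapoursynth as vs

from vsscale import DPIR

# Uniform strength
deblocked = DPIR(strength=15).scale(clip)

# Spatially varying strength: pixel values of the mask are the 8-bit thresholds
mask = clip.std.BlankClip(format=vs.GRAY8, color=25)
deblocked = DPIR(strength=mask).scale(clip)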

backend instance-attribute

backend = autoselect_backend(**(default_args | kwargs))

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(resolve())

multiple instance-attribute

multiple = multiple

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

pretty_string property

pretty_string: str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

strength instance-attribute

strength = strength

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

DrunetDeblock

DrunetDeblock(
    strength: SupportsFloat | VideoNode = 10,
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: BaseDPIR

DPIR model for deblocking.

Initializes the scaler with the specified parameters.

Parameters:

  • strength

    (SupportsFloat | VideoNode, default: 10 ) –

    Threshold (8-bit scale) strength for deblocking/denoising. If a VideoNode is used, it must be in GRAY8, GRAYH, or GRAYS format, with pixel values representing the 8-bit thresholds.

  • backend

    (BackendLike | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

    Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.

Source code in vsscale/onnx.py
def __init__(
    self,
    strength: SupportsFloat | vs.VideoNode = 10,
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any,
) -> None:
    """
    Initializes the scaler with the specified parameters.

    Args:
        strength: Threshold (8-bit scale) strength for deblocking/denoising. If a VideoNode is used, it must be in
            GRAY8, GRAYH, or GRAYS format, with pixel values representing the 8-bit thresholds.
        backend: The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will
            be automatically selected, prioritizing fp16 support.
        tiles: Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the
            model's behavior may vary when they are used.
        tilesize: The size of each tile when splitting the image (if tiles are enabled).
        overlap: The size of overlap between tiles.
        kernel: Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
        scaler: Scaler used for scaling operations. Defaults to kernel.
        shifter: Kernel used for shifting operations. Defaults to kernel.
        **kwargs: Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
    """
    from vsmlrt import Backend

    self.strength = strength

    super().__init__(
        None,
        backend,
        tiles,
        tilesize,
        16 if overlap is None else overlap,
        8,
        -1,
        kernel=kernel,
        scaler=scaler,
        shifter=shifter,
        **kwargs,
    )

    if isinstance(self.backend, Backend.TRT) and not self.backend.force_fp16:
        self.backend.custom_args.extend(["--precisionConstraints=obey", "--layerPrecisions=Conv_123:fp32"])
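
Example usage:

from vsscale import DrunetDeblock

deblocked = DrunetDeblock(strength=20).scale(clip)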

backend instance-attribute

backend = autoselect_backend(**(default_args | kwargs))

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(resolve())

multiple instance-attribute

multiple = multiple

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

pretty_string property

pretty_string: str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

strength instance-attribute

strength = strength

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]

Reimplementation of the vsmlrt.calc_tilesize helper function.

Source code in vsscale/onnx.py
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    """
    Reimplementation of vsmlrt.calc_tilesize helper function
    """

    from vsmlrt import calc_tilesize

    kwargs = {
        "tiles": self.tiles,
        "tilesize": self.tilesize,
        "width": clip.width,
        "height": clip.height,
        "multiple": self.multiple,
        "overlap_w": self.overlap_w,
        "overlap_h": self.overlap_h,
    } | kwargs

    return calc_tilesize(**kwargs)
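
Call-time kwargs override the stored defaults, so a one-off value can be supplied without mutating the instance; a sketch:

tilesize, overlaps = scaler.calc_tilesize(clip, overlap_w=32, overlap_h=32)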

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code in vskernels/abstract/base.py
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code in vskernels/abstract/base.py
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width
    (int | None, default: None ) –

    Target width.

  • height
    (int | None, default: None ) –

    Target height.

  • **kwargs
    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.

Source code in vskernels/abstract/base.py
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    Args:
        clip: The source clip.
        shift: Subpixel shift (top, left).
        width: Target width.
        height: Target height.
        **kwargs: Extra parameters to merge.

    Returns:
        Final dictionary of keyword arguments for the scale function.
    """
    return {"width": width, "height": height, "src_top": shift[0], "src_left": shift[1]} | self.kwargs | kwargs

implemented_funcs classmethod

implemented_funcs() -> frozenset[str]

Returns a set of function names that are implemented in the current class and the parent classes.

These functions determine which keyword arguments will be extracted from the __init__ method.

Returns:

  • frozenset[str]

    Frozen set of function names.

Source code in vskernels/abstract/base.py
@classproperty.cached
@classmethod
def implemented_funcs(cls) -> frozenset[str]:
    """
    Returns a set of function names that are implemented in the current class and the parent classes.

    These functions determine which keyword arguments will be extracted from the __init__ method.

    Returns:
        Frozen set of function names.
    """
    return frozenset(func for klass in cls.mro() for func in getattr(klass, "_implemented_funcs", ()))

inference

inference(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
def inference(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    from vsmlrt import DPIRModel, inference, models_path

    # Normalizing the strength clip
    strength_fmt = clip.format.replace(color_family=vs.GRAY)

    if isinstance(self.strength, vs.VideoNode):
        self.strength = norm_expr(self.strength, "x 255 /", format=strength_fmt, func=self.__class__)
    else:
        self.strength = clip.std.BlankClip(format=strength_fmt.id, color=float(self.strength) / 255, keep=True)

    debug(f"{self}: Passing strength clip format: {self.strength.format!r}")

    # Get model name
    self.model = (
        SPath(models_path) / "dpir" / f"{DPIRModel(self._model[clip.format.color_family != vs.GRAY]).name}.onnx"
    ).to_str()

    # Basic inference args
    tilesize, overlaps = self.calc_tilesize(clip)

    debug(f"{self}: Passing model: {self.model}")
    debug(f"{self}: Passing tiles size: {tilesize}")
    debug(f"{self}: Passing overlaps: {overlaps}")
    debug(f"{self}: Passing extra kwargs: {kwargs}")

    # Padding
    padding = padder.mod_padding(clip, self.multiple, 0)

    if not any(padding) or kwargs.pop("no_pad", False):
        return inference([clip, self.strength], self.model, overlaps, tilesize, self.backend, **kwargs)

    clip = padder.MIRROR(clip, *padding)
    strength = padder.MIRROR(self.strength, *padding)

    return inference([clip, strength], self.model, overlaps, tilesize, self.backend, **kwargs).std.Crop(*padding)
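
As the normalization above shows, the 8-bit threshold is divided by 255 before it reaches the network; a numeric sketch:

strength = 10                    # 8-bit threshold supplied by the user
network_input = strength / 255   # = 0.0392..., the value the model actually sees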

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Returns:

  • int

    Kernel radius.

Source code in vskernels/abstract/base.py
@BaseScalerMeta.cachedproperty
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    Raises:
        CustomNotImplementedError: If no kernel radius is defined.

    Returns:
        Kernel radius.
    """
    ...

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode
Source code in vsscale/onnx.py
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    if get_video_format(clip) != get_video_format(input_clip):
        kwargs = (
            dict[str, Any](
                format=input_clip,
                matrix=Matrix.from_video(input_clip, func=self.__class__),
                range=ColorRange.from_video(input_clip, func=self.__class__),
                dither_type=DitherType.ORDERED,
            )
            | kwargs
        )
        clip = self.kernel.resample(clip, **kwargs)

    return clip

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    if get_color_family(clip) == vs.GRAY:
        return super().preprocess_clip(clip, **kwargs)

    clip = self.kernel.resample(clip, self._pick_precision(vs.RGBH, vs.RGBS), Matrix.RGB, **kwargs)

    return limiter(clip, func=self.__class__)

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    *,
    copy_props: bool = True,
    **kwargs: Any
) -> VideoNode
Source code in vsscale/onnx.py
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    *,
    copy_props: bool = True,
    **kwargs: Any,
) -> vs.VideoNode:
    assert check_variable_resolution(clip, self.__class__)

    return super().scale(clip, width, height, shift, copy_props=copy_props, **kwargs)

supersample

supersample(
    clip: VideoNode,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNode

Supersample a clip by a given scaling factor.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • rfactor
    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Returns:

  • VideoNode

    The supersampled clip.

Source code in vskernels/abstract/base.py
def supersample(
    self, clip: vs.VideoNode, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> vs.VideoNode:
    """
    Supersample a clip by a given scaling factor.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        rfactor: Scaling factor for supersampling.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Raises:
        CustomValueError: If resulting resolution is non-positive.

    Returns:
        The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)

DrunetDenoise

DrunetDenoise(
    strength: SupportsFloat | VideoNode = 10,
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: BaseDPIR

DPIR model for denoising.

Initializes the scaler with the specified parameters.

Parameters:

  • strength

    (SupportsFloat | VideoNode, default: 10 ) –

    Threshold (8-bit scale) strength for deblocking/denoising. If a VideoNode is used, it must be in GRAY8, GRAYH, or GRAYS format, with pixel values representing the 8-bit thresholds.

  • backend

    (BackendLike | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

    Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.

Source code in vsscale/onnx.py
def __init__(
    self,
    strength: SupportsFloat | vs.VideoNode = 10,
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any,
) -> None:
    """
    Initializes the scaler with the specified parameters.

    Args:
        strength: Threshold (8-bit scale) strength for deblocking/denoising. If a VideoNode is used, it must be in
            GRAY8, GRAYH, or GRAYS format, with pixel values representing the 8-bit thresholds.
        backend: The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will
            be automatically selected, prioritizing fp16 support.
        tiles: Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the
            model's behavior may vary when they are used.
        tilesize: The size of each tile when splitting the image (if tiles are enabled).
        overlap: The size of overlap between tiles.
        kernel: Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
        scaler: Scaler used for scaling operations. Defaults to kernel.
        shifter: Kernel used for shifting operations. Defaults to kernel.
        **kwargs: Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
    """
    from vsmlrt import Backend

    self.strength = strength

    super().__init__(
        None,
        backend,
        tiles,
        tilesize,
        16 if overlap is None else overlap,
        8,
        -1,
        kernel=kernel,
        scaler=scaler,
        shifter=shifter,
        **kwargs,
    )

    if isinstance(self.backend, Backend.TRT) and not self.backend.force_fp16:
        self.backend.custom_args.extend(["--precisionConstraints=obey", "--layerPrecisions=Conv_123:fp32"])
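
Example usage:

from vsscale import DrunetDenoise

denoised = DrunetDenoise(strength=5).scale(clip)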

backend instance-attribute

backend = autoselect_backend(**(default_args | kwargs))

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(resolve())

multiple instance-attribute

multiple = multiple

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

pretty_string property

pretty_string: str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

strength instance-attribute

strength = strength

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]

Reimplementation of the vsmlrt.calc_tilesize helper function.

Source code in vsscale/onnx.py
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    """
    Reimplementation of vsmlrt.calc_tilesize helper function
    """

    from vsmlrt import calc_tilesize

    kwargs = {
        "tiles": self.tiles,
        "tilesize": self.tilesize,
        "width": clip.width,
        "height": clip.height,
        "multiple": self.multiple,
        "overlap_w": self.overlap_w,
        "overlap_h": self.overlap_h,
    } | kwargs

    return calc_tilesize(**kwargs)

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code in vskernels/abstract/base.py
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code in vskernels/abstract/base.py
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width
    (int | None, default: None ) –

    Target width.

  • height
    (int | None, default: None ) –

    Target height.

  • **kwargs
    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.

Source code in vskernels/abstract/base.py
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    Args:
        clip: The source clip.
        shift: Subpixel shift (top, left).
        width: Target width.
        height: Target height.
        **kwargs: Extra parameters to merge.

    Returns:
        Final dictionary of keyword arguments for the scale function.
    """
    return {"width": width, "height": height, "src_top": shift[0], "src_left": shift[1]} | self.kwargs | kwargs

implemented_funcs classmethod

implemented_funcs() -> frozenset[str]

Returns a set of function names that are implemented in the current class and the parent classes.

These functions determine which keyword arguments will be extracted from the __init__ method.

Returns:

  • frozenset[str]

    Frozen set of function names.

Source code in vskernels/abstract/base.py
@classproperty.cached
@classmethod
def implemented_funcs(cls) -> frozenset[str]:
    """
    Returns a set of function names that are implemented in the current class and the parent classes.

    These functions determine which keyword arguments will be extracted from the __init__ method.

    Returns:
        Frozen set of function names.
    """
    return frozenset(func for klass in cls.mro() for func in getattr(klass, "_implemented_funcs", ()))

inference

inference(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
def inference(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    from vsmlrt import DPIRModel, inference, models_path

    # Normalizing the strength clip
    strength_fmt = clip.format.replace(color_family=vs.GRAY)

    if isinstance(self.strength, vs.VideoNode):
        self.strength = norm_expr(self.strength, "x 255 /", format=strength_fmt, func=self.__class__)
    else:
        self.strength = clip.std.BlankClip(format=strength_fmt.id, color=float(self.strength) / 255, keep=True)

    debug(f"{self}: Passing strength clip format: {self.strength.format!r}")

    # Get model name
    self.model = (
        SPath(models_path) / "dpir" / f"{DPIRModel(self._model[clip.format.color_family != vs.GRAY]).name}.onnx"
    ).to_str()

    # Basic inference args
    tilesize, overlaps = self.calc_tilesize(clip)

    debug(f"{self}: Passing model: {self.model}")
    debug(f"{self}: Passing tiles size: {tilesize}")
    debug(f"{self}: Passing overlaps: {overlaps}")
    debug(f"{self}: Passing extra kwargs: {kwargs}")

    # Padding
    padding = padder.mod_padding(clip, self.multiple, 0)

    if not any(padding) or kwargs.pop("no_pad", False):
        return inference([clip, self.strength], self.model, overlaps, tilesize, self.backend, **kwargs)

    clip = padder.MIRROR(clip, *padding)
    strength = padder.MIRROR(self.strength, *padding)

    return inference([clip, strength], self.model, overlaps, tilesize, self.backend, **kwargs).std.Crop(*padding)

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Returns:

  • int

    Kernel radius.

Source code in vskernels/abstract/base.py
@BaseScalerMeta.cachedproperty
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    Raises:
        CustomNotImplementedError: If no kernel radius is defined.

    Returns:
        Kernel radius.
    """
    ...

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode
Source code in vsscale/onnx.py
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    if get_video_format(clip) != get_video_format(input_clip):
        kwargs = (
            dict[str, Any](
                format=input_clip,
                matrix=Matrix.from_video(input_clip, func=self.__class__),
                range=ColorRange.from_video(input_clip, func=self.__class__),
                dither_type=DitherType.ORDERED,
            )
            | kwargs
        )
        clip = self.kernel.resample(clip, **kwargs)

    return clip

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    if get_color_family(clip) == vs.GRAY:
        return super().preprocess_clip(clip, **kwargs)

    clip = self.kernel.resample(clip, self._pick_precision(vs.RGBH, vs.RGBS), Matrix.RGB, **kwargs)

    return limiter(clip, func=self.__class__)

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    *,
    copy_props: bool = True,
    **kwargs: Any
) -> VideoNode
Source code in vsscale/onnx.py
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    *,
    copy_props: bool = True,
    **kwargs: Any,
) -> vs.VideoNode:
    assert check_variable_resolution(clip, self.__class__)

    return super().scale(clip, width, height, shift, copy_props=copy_props, **kwargs)

supersample

supersample(
    clip: VideoNode,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNode

Supersample a clip by a given scaling factor.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • rfactor
    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Returns:

  • VideoNode

    The supersampled clip.

Source code in vskernels/abstract/base.py
def supersample(
    self, clip: vs.VideoNode, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> vs.VideoNode:
    """
    Supersample a clip by a given scaling factor.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        rfactor: Scaling factor for supersampling.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Raises:
        CustomValueError: If resulting resolution is non-positive.

    Returns:
        The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]

Reimplementation of the vsmlrt.calc_tilesize helper function.

Source code in vsscale/onnx.py
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    """
    Reimplementation of vsmlrt.calc_tilesize helper function
    """

    from vsmlrt import calc_tilesize

    kwargs = {
        "tiles": self.tiles,
        "tilesize": self.tilesize,
        "width": clip.width,
        "height": clip.height,
        "multiple": self.multiple,
        "overlap_w": self.overlap_w,
        "overlap_h": self.overlap_h,
    } | kwargs

    return calc_tilesize(**kwargs)

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code in vskernels/abstract/base.py
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code in vskernels/abstract/base.py
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width

    (int | None, default: None ) –

    Target width.

  • height

    (int | None, default: None ) –

    Target height.

  • **kwargs

    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.

Source code in vskernels/abstract/base.py
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    Args:
        clip: The source clip.
        shift: Subpixel shift (top, left).
        width: Target width.
        height: Target height.
        **kwargs: Extra parameters to merge.

    Returns:
        Final dictionary of keyword arguments for the scale function.
    """
    return {"width": width, "height": height, "src_top": shift[0], "src_left": shift[1]} | self.kwargs | kwargs

implemented_funcs classmethod

implemented_funcs() -> frozenset[str]

Returns a set of function names that are implemented in the current class and the parent classes.

These functions determine which keyword arguments will be extracted from the __init__ method.

Returns:

  • frozenset[str]

    Frozen set of function names.

Source code in vskernels/abstract/base.py
@classproperty.cached
@classmethod
def implemented_funcs(cls) -> frozenset[str]:
    """
    Returns a set of function names that are implemented in the current class and the parent classes.

    These functions determine which keyword arguments will be extracted from the __init__ method.

    Returns:
        Frozen set of function names.
    """
    return frozenset(func for klass in cls.mro() for func in getattr(klass, "_implemented_funcs", ()))

inference

inference(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
def inference(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    from vsmlrt import DPIRModel, inference, models_path

    # Normalizing the strength clip
    strength_fmt = clip.format.replace(color_family=vs.GRAY)

    if isinstance(self.strength, vs.VideoNode):
        self.strength = norm_expr(self.strength, "x 255 /", format=strength_fmt, func=self.__class__)
    else:
        self.strength = clip.std.BlankClip(format=strength_fmt.id, color=float(self.strength) / 255, keep=True)

    debug(f"{self}: Passing strength clip format: {self.strength.format!r}")

    # Get model name
    self.model = (
        SPath(models_path) / "dpir" / f"{DPIRModel(self._model[clip.format.color_family != vs.GRAY]).name}.onnx"
    ).to_str()

    # Basic inference args
    tilesize, overlaps = self.calc_tilesize(clip)

    debug(f"{self}: Passing model: {self.model}")
    debug(f"{self}: Passing tiles size: {tilesize}")
    debug(f"{self}: Passing overlaps: {overlaps}")
    debug(f"{self}: Passing extra kwargs: {kwargs}")

    # Padding
    padding = padder.mod_padding(clip, self.multiple, 0)

    if not any(padding) or kwargs.pop("no_pad", False):
        return inference([clip, self.strength], self.model, overlaps, tilesize, self.backend, **kwargs)

    clip = padder.MIRROR(clip, *padding)
    strength = padder.MIRROR(self.strength, *padding)

    return inference([clip, strength], self.model, overlaps, tilesize, self.backend, **kwargs).std.Crop(*padding)
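
Example usage (a hedged sketch; the strength parameter name is an assumption taken from the inference code above, where it is normalized by 1/255 and may also be given as a per-pixel GRAY clip):

from vsscale import DPIR

restored = DPIR(strength=10).scale(clip)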

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Returns:

  • int

    Kernel radius.

Source code in vskernels/abstract/base.py
@BaseScalerMeta.cachedproperty
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    Raises:
        CustomNotImplementedError: If no kernel radius is defined.

    Returns:
        Kernel radius.
    """
    ...

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode
Source code in vsscale/onnx.py
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    if get_video_format(clip) != get_video_format(input_clip):
        kwargs = (
            dict[str, Any](
                format=input_clip,
                matrix=Matrix.from_video(input_clip, func=self.__class__),
                range=ColorRange.from_video(input_clip, func=self.__class__),
                dither_type=DitherType.ORDERED,
            )
            | kwargs
        )
        clip = self.kernel.resample(clip, **kwargs)

    return clip

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    if get_color_family(clip) == vs.GRAY:
        return super().preprocess_clip(clip, **kwargs)

    clip = self.kernel.resample(clip, self._pick_precision(vs.RGBH, vs.RGBS), Matrix.RGB, **kwargs)

    return limiter(clip, func=self.__class__)

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    *,
    copy_props: bool = True,
    **kwargs: Any
) -> VideoNode
Source code in vsscale/onnx.py
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    *,
    copy_props: bool = True,
    **kwargs: Any,
) -> vs.VideoNode:
    assert check_variable_resolution(clip, self.__class__)

    return super().scale(clip, width, height, shift, copy_props=copy_props, **kwargs)

supersample

supersample(
    clip: VideoNode,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNode

Supersample a clip by a given scaling factor.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • rfactor

    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Returns:

  • VideoNode

    The supersampled clip.

Source code in vskernels/abstract/base.py
def supersample(
    self, clip: vs.VideoNode, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> vs.VideoNode:
    """
    Supersample a clip by a given scaling factor.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        rfactor: Scaling factor for supersampling.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Raises:
        CustomValueError: If resulting resolution is non-positive.

    Returns:
        The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)
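
Example usage (a minimal sketch; the method comes from the shared scaler base, so ArtCNN stands in for any scaler in this module):

from vsscale import ArtCNN

doubled = ArtCNN().supersample(clip, 2.0)  # ceil(width * 2.0) x ceil(height * 2.0)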

GenericOnnxScaler

GenericOnnxScaler(
    model: SPathLike | None = None,
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    multiple: int = 1,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: BaseOnnxScaler

Generic scaler class for an ONNX model.

Example usage:

from vsscale import GenericOnnxScaler

scaled = GenericOnnxScaler("path/to/model.onnx").scale(clip, ...)

# For Windows paths:
scaled = GenericOnnxScaler(r"path\to\model.onnx").scale(clip, ...)
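
# A backend may also be passed explicitly as a string, a Backend class, or a
# Backend instance (see BackendLike). "trt" is an assumed identifier here,
# matched case-insensitively against the vs-mlrt backend class names:
scaled = GenericOnnxScaler("path/to/model.onnx", backend="trt").scale(clip, ...)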

Initializes the scaler with the specified parameters.

Parameters:

  • model

    (SPathLike | None, default: None ) –

    Path to the ONNX model file.

  • backend

    (BackendLike | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

    Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • multiple

    (int, default: 1 ) –

    Multiple of the tiles.

  • max_instances

    (int, default: 2 ) –

    Maximum instances to spawn when scaling a variable resolution clip.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.

Methods:

  • calc_tilesize

    Reimplementation of vsmlrt.calc_tilesize helper function

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args

    Generate the keyword arguments used for scaling.

  • implemented_funcs

    Returns a set of function names that are implemented in the current class and the parent classes.

  • inference

    Runs inference on the given video clip using the configured model and backend.

  • kernel_radius

    Return the effective kernel radius for the scaler.

  • postprocess_clip

    Handles postprocessing of the model's output after inference.

  • preprocess_clip

    Performs preprocessing on the clip prior to inference.

  • scale

    Scale the given clip using the ONNX model.

  • supersample

    Supersample a clip by a given scaling factor.

Source code in vsscale/onnx.py
def __init__(
    self,
    model: SPathLike | None = None,
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    multiple: int = 1,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any,
) -> None:
    """
    Initializes the scaler with the specified parameters.

    Args:
        model: Path to the ONNX model file.
        backend: The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will
            be automatically selected, prioritizing fp16 support.
        tiles: Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the
            model's behavior may vary when they are used.
        tilesize: The size of each tile when splitting the image (if tiles are enabled).
        overlap: The size of overlap between tiles.
        multiple: Multiple of the tiles.
        max_instances: Maximum instances to spawn when scaling a variable resolution clip.
        kernel: Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
        scaler: Scaler used for scaling operations. Defaults to kernel.
        shifter: Kernel used for shifting operations. Defaults to kernel.
        **kwargs: Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
    """
    super().__init__(kernel=kernel, scaler=scaler, shifter=shifter, **kwargs)

    if model is not None:
        self.model = str(SPath(model).resolve())

    fp16 = self.kwargs.pop("fp16", True)
    default_args = {"fp16": fp16, "output_format": int(fp16), "use_cuda_graph": True, "use_cublas": True}

    from vsmlrt import backendT

    if backend is None:
        self.backend = autoselect_backend(**default_args | self.kwargs)
    elif isinstance(backend, type):
        self.backend = backend(**_clean_keywords(default_args | self.kwargs, backend))
    elif isinstance(backend, str):
        backends_map = {b.__name__.lower(): b for b in get_args(backendT)}

        try:
            backend_t = backends_map[backend.lower().strip()]
        except KeyError:
            raise CustomValueError("Unknown backend!", self.__class__, backend)

        self.backend = backend_t(**_clean_keywords(default_args | self.kwargs, backend_t))
    else:
        self.backend = replace(backend, **_clean_keywords(self.kwargs, backend))

    _check_vsmlrt_plugin_version(self.backend.__class__.__name__, self.__class__)

    self.tiles = tiles
    self.tilesize = tilesize
    self.overlap = overlap
    self.multiple = multiple

    if self.overlap is None:
        self.overlap_w = self.overlap_h = 8
    elif isinstance(self.overlap, int):
        self.overlap_w = self.overlap_h = self.overlap
    else:
        self.overlap_w, self.overlap_h = self.overlap

    self.max_instances = max_instances

    if getLogger().level <= DEBUG:
        debug(f"{self}: Using {self.backend.__class__.__name__} backend")

        valid_fields = _get_backend_fields(self.backend)

        for k, v in asdict(self.backend).items():
            debug(f"{self}: {k}={v}, default is {valid_fields[k].default}")

        debug(f"{self}: User tiles: {self.tiles}")
        debug(f"{self}: User tilesize: {self.tilesize}")
        debug(f"{self}: User overlap: {(self.overlap_w, self.overlap_h)}")
        debug(f"{self}: User multiple: {self.multiple}")

backend instance-attribute

backend = autoselect_backend(**(default_args | kwargs))

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(resolve())

multiple instance-attribute

multiple = multiple

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

pretty_string property

pretty_string: str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]

Reimplementation of vsmlrt.calc_tilesize helper function

Source code in vsscale/onnx.py
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    """
    Reimplementation of vsmlrt.calc_tilesize helper function
    """

    from vsmlrt import calc_tilesize

    kwargs = {
        "tiles": self.tiles,
        "tilesize": self.tilesize,
        "width": clip.width,
        "height": clip.height,
        "multiple": self.multiple,
        "overlap_w": self.overlap_w,
        "overlap_h": self.overlap_h,
    } | kwargs

    return calc_tilesize(**kwargs)
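
A hedged tiling sketch (model path and values are illustrative): split inference into a 2x2 grid with 16 px of overlap to reduce VRAM usage, then inspect the computed sizes:

from vsscale import GenericOnnxScaler

scaler = GenericOnnxScaler("model.onnx", tiles=(2, 2), overlap=16)
tilesize, overlaps = scaler.calc_tilesize(clip)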

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code in vskernels/abstract/base.py
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code in vskernels/abstract/base.py
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width

    (int | None, default: None ) –

    Target width.

  • height

    (int | None, default: None ) –

    Target height.

  • **kwargs

    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.

Source code in vskernels/abstract/base.py
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    Args:
        clip: The source clip.
        shift: Subpixel shift (top, left).
        width: Target width.
        height: Target height.
        **kwargs: Extra parameters to merge.

    Returns:
        Final dictionary of keyword arguments for the scale function.
    """
    return {"width": width, "height": height, "src_top": shift[0], "src_left": shift[1]} | self.kwargs | kwargs

implemented_funcs classmethod

implemented_funcs() -> frozenset[str]

Returns a set of function names that are implemented in the current class and the parent classes.

These functions determine which keyword arguments will be extracted from the __init__ method.

Returns:

  • frozenset[str]

    Frozen set of function names.

Source code in vskernels/abstract/base.py
@classproperty.cached
@classmethod
def implemented_funcs(cls) -> frozenset[str]:
    """
    Returns a set of function names that are implemented in the current class and the parent classes.

    These functions determine which keyword arguments will be extracted from the __init__ method.

    Returns:
        Frozen set of function names.
    """
    return frozenset(func for klass in cls.mro() for func in getattr(klass, "_implemented_funcs", ()))

inference

inference(clip: VideoNode, **kwargs: Any) -> VideoNode

Runs inference on the given video clip using the configured model and backend.

Source code in vsscale/onnx.py
def inference(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    """
    Runs inference on the given video clip using the configured model and backend.
    """

    from vsmlrt import inference

    tilesize, overlaps = self.calc_tilesize(clip)

    debug(f"{self}: Passing clip to inference: {clip.format!r}")
    debug(f"{self}: Passing model: {self.model}")
    debug(f"{self}: Passing tiles size: {tilesize}")
    debug(f"{self}: Passing overlaps: {overlaps}")
    debug(f"{self}: Passing extra kwargs: {kwargs}")

    return inference(clip, self.model, overlaps, tilesize, self.backend, **kwargs)
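
A direct-inference sketch (hedged: this bypasses the pre/postprocessing that scale performs, so clip_fp32 is a hypothetical clip assumed to already be in a range-limited float format the model expects):

processed = GenericOnnxScaler("model.onnx").inference(clip_fp32)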

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Returns:

  • int

    Kernel radius.

Source code in vskernels/abstract/base.py
@BaseScalerMeta.cachedproperty
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    Raises:
        CustomNotImplementedError: If no kernel radius is defined.

    Returns:
        Kernel radius.
    """
    ...

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode

Handles postprocessing of the model's output after inference.

Source code in vsscale/onnx.py
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    """
    Handles postprocessing of the model's output after inference.
    """
    debug(f"{self}.post: Before pp; Clip format is {clip.format!r}")

    clip = depth(
        clip,
        input_clip,
        dither_type=DitherType.ORDERED if 0 in {clip.width, clip.height} else DitherType.AUTO,
        **kwargs,
    )

    debug(f"{self}.post: After pp; Clip format is {clip.format!r}")

    return clip

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode

Performs preprocessing on the clip prior to inference.

Source code in vsscale/onnx.py
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    """
    Performs preprocessing on the clip prior to inference.
    """
    debug(f"{self}.pre: Before pp; Clip format is {clip.format!r}")

    clip = depth(clip, self._pick_precision(16, 32), vs.FLOAT, **kwargs)

    debug(f"{self}.pre: After pp; Clip format is {clip.format!r}")

    return limiter(clip, func=self.__class__)

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> VideoNode

Scale the given clip using the ONNX model.

Parameters:

  • clip

    (VideoNode) –

    The input clip to be scaled.

  • width

    (int | None, default: None ) –

    The target width for scaling. If None, the width of the input clip will be used.

  • height

    (int | None, default: None ) –

    The target height for scaling. If None, the height of the input clip will be used.

  • shift

    (tuple[float, float], default: (0, 0) ) –

    A tuple representing the shift values for the x and y axes.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _finish_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method. Use the prefix inference_ to pass an argument to the inference method.

Returns:

  • VideoNode

    The scaled clip.

Source code in vsscale/onnx.py
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any,
) -> vs.VideoNode:
    """
    Scale the given clip using the ONNX model.

    Args:
        clip: The input clip to be scaled.
        width: The target width for scaling. If None, the width of the input clip will be used.
        height: The target height for scaling. If None, the height of the input clip will be used.
        shift: A tuple representing the shift values for the x and y axes.
        **kwargs: Additional arguments to be passed to the `preprocess_clip`, `postprocess_clip`, `inference`, and
            `_finish_scale` methods. Use the prefix `preprocess_` or `postprocess_` to pass an argument to the
            respective method. Use the prefix `inference_` to pass an argument to the inference method.

    Returns:
        The scaled clip.
    """
    from vsmlrt import Backend

    width, height = self._wh_norm(clip, width, height)

    preprocess_kwargs = dict[str, Any]()
    postprocess_kwargs = dict[str, Any]()
    inference_kwargs = dict[str, Any]()

    for k in kwargs.copy():
        for prefix, ckwargs in zip(
            ("preprocess_", "postprocess_", "inference_"), (preprocess_kwargs, postprocess_kwargs, inference_kwargs)
        ):
            if k.startswith(prefix):
                ckwargs[k.removeprefix(prefix)] = kwargs.pop(k)
                break

    debug(f"{self}: Preprocess kwargs: {preprocess_kwargs}")
    debug(f"{self}: Postprocess kwargs: {postprocess_kwargs}")
    debug(f"{self}: Inference kwargs: {inference_kwargs}")

    wclip = self.preprocess_clip(clip, **preprocess_kwargs)

    if 0 not in {clip.width, clip.height}:
        scaled = self.inference(wclip, **inference_kwargs)
    else:
        debug(f"{self}: Variable resolution clip detected!")

        if not isinstance(self.backend, (Backend.TRT, Backend.TRT_RTX)):
            raise CustomValueError(
                "Variable resolution clips can only be processed with TRT Backend!", self.__class__, self.backend
            )

        warning(f"{self.__class__.__name__}: Variable resolution clip detected!")

        if self.backend.static_shape:
            warning("static_shape is True, setting it to False...")
            self.backend.static_shape = False

        if not self.backend.max_shapes:
            warning("max_shapes is None, setting it to (1936, 1088). You may want to adjust it...")
            self.backend.max_shapes = (1936, 1088)

        if not self.backend.opt_shapes:
            warning("opt_shapes is None, setting it to (64, 64). You may want to adjust it...")
            self.backend.opt_shapes = (64, 64)

        scaled = ProcessVariableResClip.from_func(
            wclip, lambda c: self.inference(c, **inference_kwargs), False, wclip.format, self.max_instances
        )

    scaled = self.postprocess_clip(scaled, clip, **postprocess_kwargs)

    return self._finish_scale(scaled, clip, width, height, shift, **kwargs)
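
A prefix-routing sketch per the docstring above: "preprocess_"-prefixed kwargs are stripped of their prefix and forwarded to preprocess_clip, which hands them on to depth (DitherType is assumed importable from vstools; model path is illustrative):

from vsscale import GenericOnnxScaler
from vstools import DitherType

scaled = GenericOnnxScaler("model.onnx").scale(
    clip, 1920, 1080,
    preprocess_dither_type=DitherType.ORDERED,
)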

supersample

supersample(
    clip: VideoNode,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNode

Supersample a clip by a given scaling factor.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • rfactor

    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Returns:

  • VideoNode

    The supersampled clip.

Source code in vskernels/abstract/base.py
def supersample(
    self, clip: vs.VideoNode, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> vs.VideoNode:
    """
    Supersample a clip by a given scaling factor.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        rfactor: Scaling factor for supersampling.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Raises:
        CustomValueError: If resulting resolution is non-positive.

    Returns:
        The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)

Waifu2x

Waifu2x(
    scale: Literal[1, 2, 4] = 2,
    noise: Literal[-1, 0, 1, 2, 3] = -1,
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: _Waifu2xCunet

Well known Image Super-Resolution for Anime-Style Art.

Defaults to Cunet.

Example usage:

from vsscale import Waifu2x

doubled = Waifu2x().scale(clip, clip.width * 2, clip.height * 2)

Initializes the scaler with the specified parameters.

Parameters:

  • scale

    (Literal[1, 2, 4], default: 2 ) –

    Upscaling factor. 1 = no upscaling, 2 = 2x, 4 = 4x.

  • noise

    (Literal[-1, 0, 1, 2, 3], default: -1 ) –

    Noise reduction level. -1 = none, 0 = low, 1 = medium, 2 = high, 3 = highest.

  • backend

    (BackendLike | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

    Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • max_instances

    (int, default: 2 ) –

    Maximum instances to spawn when scaling a variable resolution clip.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.

Classes:

  • AnimeStyleArt

    Waifu2x model for anime-style art.

  • AnimeStyleArtRGB

    RGB version of the anime-style model.

  • Cunet

    CUNet (Compact U-Net) model for anime art.

  • Photo

    Waifu2x model trained on real-world photographic images.

  • SwinUnetArt

    Swin-Unet-based model trained on anime-style images.

  • SwinUnetArtScan

    Swin-Unet model trained on anime scans.

  • SwinUnetPhoto

    Swin-Unet model trained on photographic content.

  • SwinUnetPhotoV2

    Improved Swin-Unet model for photos (v2).

  • UpConv7AnimeStyleArt

    UpConv7 model variant optimized for anime-style images.

  • UpConv7Photo

    UpConv7 model variant optimized for photographic images.

  • UpResNet10

    UpResNet10 model offering a balance of speed and quality.

Methods:

  • calc_tilesize

    Reimplementation of vsmlrt.calc_tilesize helper function

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args

    Generate the keyword arguments used for scaling.

  • implemented_funcs

    Returns a set of function names that are implemented in the current class and the parent classes.

  • inference
  • kernel_radius

    Return the effective kernel radius for the scaler.

  • postprocess_clip
  • preprocess_clip
  • scale

    Scale the given clip using the ONNX model.

  • supersample

    Supersample a clip by a given scaling factor.

Source code in vsscale/onnx.py
def __init__(
    self,
    scale: Literal[1, 2, 4] = 2,
    noise: Literal[-1, 0, 1, 2, 3] = -1,
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any,
) -> None:
    """
    Initializes the scaler with the specified parameters.

    Args:
        scale: Upscaling factor. 1 = no upscaling, 2 = 2x, 4 = 4x.
        noise: Noise reduction level. -1 = none, 0 = low, 1 = medium, 2 = high, 3 = highest.
        backend: The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will
            be automatically selected, prioritizing fp16 support.
        tiles: Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the
            model's behavior may vary when they are used.
        tilesize: The size of each tile when splitting the image (if tiles are enabled).
        overlap: The size of overlap between tiles.
        max_instances: Maximum instances to spawn when scaling a variable resolution clip.
        kernel: Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
        scaler: Scaler used for scaling operations. Defaults to kernel.
        shifter: Kernel used for shifting operations. Defaults to kernel.
        **kwargs: Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
    """
    self.scale_w2x = scale
    self.noise = noise
    super().__init__(
        None,
        backend,
        tiles,
        tilesize,
        overlap,
        1,
        max_instances,
        kernel=kernel,
        scaler=scaler,
        shifter=shifter,
        **kwargs,
    )
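
Example usage with the documented parameters (a sketch: 2x upscale with medium noise reduction):

from vsscale import Waifu2x

doubled = Waifu2x(scale=2, noise=1).scale(clip, clip.width * 2, clip.height * 2)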

backend instance-attribute

backend = autoselect_backend(**(default_args | kwargs))

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(resolve())

multiple instance-attribute

multiple = multiple

noise instance-attribute

noise: Literal[-1, 0, 1, 2, 3] = noise

Noise reduction level

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

pretty_string property

pretty_string: str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scale_w2x instance-attribute

scale_w2x: Literal[1, 2, 4] = scale

Upscaling factor.

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

AnimeStyleArt

AnimeStyleArt(
    scale: Literal[1, 2, 4] = 2,
    noise: Literal[-1, 0, 1, 2, 3] = -1,
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: BaseWaifu2x

Waifu2x model for anime-style art.

Example usage:

from vsscale import Waifu2x

doubled = Waifu2x.AnimeStyleArt().scale(clip, clip.width * 2, clip.height * 2)

Initializes the scaler with the specified parameters.

Parameters:

  • scale

    (Literal[1, 2, 4], default: 2 ) –

    Upscaling factor. 1 = no upscaling, 2 = 2x, 4 = 4x.

  • noise

    (Literal[-1, 0, 1, 2, 3], default: -1 ) –

    Noise reduction level. -1 = none, 0 = low, 1 = medium, 2 = high, 3 = highest.

  • backend

    (BackendLike | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

    Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • max_instances

    (int, default: 2 ) –

    Maximum instances to spawn when scaling a variable resolution clip.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.

Methods:

  • calc_tilesize

    Reimplementation of vsmlrt.calc_tilesize helper function

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args

    Generate the keyword arguments used for scaling.

  • implemented_funcs

    Returns a set of function names that are implemented in the current class and the parent classes.

  • inference
  • kernel_radius

    Return the effective kernel radius for the scaler.

  • postprocess_clip

    Handles postprocessing of the model's output after inference.

  • preprocess_clip

    Performs preprocessing on the clip prior to inference.

  • scale

    Scale the given clip using the ONNX model.

  • supersample

    Supersample a clip by a given scaling factor.

Source code in vsscale/onnx.py
def __init__(
    self,
    scale: Literal[1, 2, 4] = 2,
    noise: Literal[-1, 0, 1, 2, 3] = -1,
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any,
) -> None:
    """
    Initializes the scaler with the specified parameters.

    Args:
        scale: Upscaling factor. 1 = no upscaling, 2 = 2x, 4 = 4x.
        noise: Noise reduction level. -1 = none, 0 = low, 1 = medium, 2 = high, 3 = highest.
        backend: The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will
            be automatically selected, prioritizing fp16 support.
        tiles: Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the
            model's behavior may vary when they are used.
        tilesize: The size of each tile when splitting the image (if tiles are enabled).
        overlap: The size of overlap between tiles.
        max_instances: Maximum instances to spawn when scaling a variable resolution clip.
        kernel: Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
        scaler: Scaler used for scaling operations. Defaults to kernel.
        shifter: Kernel used for shifting operations. Defaults to kernel.
        **kwargs: Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
    """
    self.scale_w2x = scale
    self.noise = noise
    super().__init__(
        None,
        backend,
        tiles,
        tilesize,
        overlap,
        1,
        max_instances,
        kernel=kernel,
        scaler=scaler,
        shifter=shifter,
        **kwargs,
    )

backend instance-attribute

backend = autoselect_backend(**(default_args | kwargs))

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(resolve())

multiple instance-attribute

multiple = multiple

noise instance-attribute

noise: Literal[-1, 0, 1, 2, 3] = noise

Noise reduction level

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

pretty_string property

pretty_string: str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scale_w2x instance-attribute

scale_w2x: Literal[1, 2, 4] = scale

Upscaling factor.

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]

Reimplementation of vsmlrt.calc_tilesize helper function

Source code in vsscale/onnx.py
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    """
    Reimplementation of vsmlrt.calc_tilesize helper function
    """

    from vsmlrt import calc_tilesize

    kwargs = {
        "tiles": self.tiles,
        "tilesize": self.tilesize,
        "width": clip.width,
        "height": clip.height,
        "multiple": self.multiple,
        "overlap_w": self.overlap_w,
        "overlap_h": self.overlap_h,
    } | kwargs

    return calc_tilesize(**kwargs)

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code in vskernels/abstract/base.py
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code in vskernels/abstract/base.py
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width
    (int | None, default: None ) –

    Target width.

  • height
    (int | None, default: None ) –

    Target height.

  • **kwargs
    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.

Source code in vskernels/abstract/base.py
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    Args:
        clip: The source clip.
        shift: Subpixel shift (top, left).
        width: Target width.
        height: Target height.
        **kwargs: Extra parameters to merge.

    Returns:
        Final dictionary of keyword arguments for the scale function.
    """
    return {"width": width, "height": height, "src_top": shift[0], "src_left": shift[1]} | self.kwargs | kwargs

implemented_funcs classmethod

implemented_funcs() -> frozenset[str]

Returns a set of function names that are implemented in the current class and the parent classes.

These functions determine which keyword arguments will be extracted from the __init__ method.

Returns:

  • frozenset[str]

    Frozen set of function names.

Source code in vskernels/abstract/base.py
@classproperty.cached
@classmethod
def implemented_funcs(cls) -> frozenset[str]:
    """
    Returns a set of function names that are implemented in the current class and the parent classes.

    These functions determine which keyword arguments will be extracted from the __init__ method.

    Returns:
        Frozen set of function names.
    """
    return frozenset(func for klass in cls.mro() for func in getattr(klass, "_implemented_funcs", ()))

inference

inference(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
def inference(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    from vsmlrt import Waifu2x as mlrt_Waifu2x
    from vsmlrt import Waifu2xModel

    return mlrt_Waifu2x(
        clip,
        self.noise,
        self.scale_w2x,
        self.tiles,
        self.tilesize,
        self.overlap,
        Waifu2xModel(self._model),
        self.backend,
        **kwargs,
    )

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Returns:

  • int

    Kernel radius.

Source code in vskernels/abstract/base.py
@BaseScalerMeta.cachedproperty
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    Raises:
        CustomNotImplementedError: If no kernel radius is defined.

    Returns:
        Kernel radius.
    """
    ...

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode

Handles postprocessing of the model's output after inference.

Source code in vsscale/onnx.py
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    """
    Handles postprocessing of the model's output after inference.
    """
    debug(f"{self}.post: Before pp; Clip format is {clip.format!r}")

    clip = depth(
        clip,
        input_clip,
        dither_type=DitherType.ORDERED if 0 in {clip.width, clip.height} else DitherType.AUTO,
        **kwargs,
    )

    debug(f"{self}.post: After pp; Clip format is {clip.format!r}")

    return clip

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode

Performs preprocessing on the clip prior to inference.

Source code in vsscale/onnx.py
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    """
    Performs preprocessing on the clip prior to inference.
    """
    debug(f"{self}.pre: Before pp; Clip format is {clip.format!r}")

    clip = depth(clip, self._pick_precision(16, 32), vs.FLOAT, **kwargs)

    debug(f"{self}.pre: After pp; Clip format is {clip.format!r}")

    return limiter(clip, func=self.__class__)

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> VideoNode

Scale the given clip using the ONNX model.

Parameters:

  • clip
    (VideoNode) –

    The input clip to be scaled.

  • width
    (int | None, default: None ) –

    The target width for scaling. If None, the width of the input clip will be used.

  • height
    (int | None, default: None ) –

    The target height for scaling. If None, the height of the input clip will be used.

  • shift
    (tuple[float, float], default: (0, 0) ) –

    A tuple representing the shift values for the x and y axes.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _finish_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method. Use the prefix inference_ to pass an argument to the inference method.

Returns:

  • VideoNode

    The scaled clip.

Source code in vsscale/onnx.py
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any,
) -> vs.VideoNode:
    """
    Scale the given clip using the ONNX model.

    Args:
        clip: The input clip to be scaled.
        width: The target width for scaling. If None, the width of the input clip will be used.
        height: The target height for scaling. If None, the height of the input clip will be used.
        shift: A tuple representing the shift values for the x and y axes.
        **kwargs: Additional arguments to be passed to the `preprocess_clip`, `postprocess_clip`, `inference`, and
            `_finish_scale` methods. Use the prefix `preprocess_` or `postprocess_` to pass an argument to the
            respective method. Use the prefix `inference_` to pass an argument to the inference method.

    Returns:
        The scaled clip.
    """
    from vsmlrt import Backend

    width, height = self._wh_norm(clip, width, height)

    preprocess_kwargs = dict[str, Any]()
    postprocess_kwargs = dict[str, Any]()
    inference_kwargs = dict[str, Any]()

    for k in kwargs.copy():
        for prefix, ckwargs in zip(
            ("preprocess_", "postprocess_", "inference_"), (preprocess_kwargs, postprocess_kwargs, inference_kwargs)
        ):
            if k.startswith(prefix):
                ckwargs[k.removeprefix(prefix)] = kwargs.pop(k)
                break

    debug(f"{self}: Preprocess kwargs: {preprocess_kwargs}")
    debug(f"{self}: Postprocess kwargs: {postprocess_kwargs}")
    debug(f"{self}: Inference kwargs: {inference_kwargs}")

    wclip = self.preprocess_clip(clip, **preprocess_kwargs)

    if 0 not in {clip.width, clip.height}:
        scaled = self.inference(wclip, **inference_kwargs)
    else:
        debug(f"{self}: Variable resolution clip detected!")

        if not isinstance(self.backend, (Backend.TRT, Backend.TRT_RTX)):
            raise CustomValueError(
                "Variable resolution clips can only be processed with TRT Backend!", self.__class__, self.backend
            )

        warning(f"{self.__class__.__name__}: Variable resolution clip detected!")

        if self.backend.static_shape:
            warning("static_shape is True, setting it to False...")
            self.backend.static_shape = False

        if not self.backend.max_shapes:
            warning("max_shapes is None, setting it to (1936, 1088). You may want to adjust it...")
            self.backend.max_shapes = (1936, 1088)

        if not self.backend.opt_shapes:
            warning("opt_shapes is None, setting it to (64, 64). You may want to adjust it...")
            self.backend.opt_shapes = (64, 64)

        scaled = ProcessVariableResClip.from_func(
            wclip, lambda c: self.inference(c, **inference_kwargs), False, wclip.format, self.max_instances
        )

    scaled = self.postprocess_clip(scaled, clip, **postprocess_kwargs)

    return self._finish_scale(scaled, clip, width, height, shift, **kwargs)

supersample

supersample(
    clip: VideoNode,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNode

Supersample a clip by a given scaling factor.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • rfactor
    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Returns:

  • VideoNode

    The supersampled clip.

Source code in vskernels/abstract/base.py
def supersample(
    self, clip: vs.VideoNode, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> vs.VideoNode:
    """
    Supersample a clip by a given scaling factor.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        rfactor: Scaling factor for supersampling.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Raises:
        CustomValueError: If resulting resolution is non-positive.

    Returns:
        The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)

AnimeStyleArtRGB

AnimeStyleArtRGB(
    scale: Literal[1, 2, 4] = 2,
    noise: Literal[-1, 0, 1, 2, 3] = -1,
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: BaseWaifu2x, BaseOnnxScalerRGB

RGB version of the anime-style model.

Example usage:

from vsscale import Waifu2x

doubled = Waifu2x.AnimeStyleArtRGB().scale(clip, clip.width * 2, clip.height * 2)

Initializes the scaler with the specified parameters.

Parameters:

  • scale

    (Literal[1, 2, 4], default: 2 ) –

    Upscaling factor. 1 = no upscaling, 2 = 2x, 4 = 4x.

  • noise

    (Literal[-1, 0, 1, 2, 3], default: -1 ) –

    Noise reduction level. -1 = none, 0 = low, 1 = medium, 2 = high, 3 = highest.

  • backend

    (BackendLike | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

    Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • max_instances

    (int, default: 2 ) –

    Maximum instances to spawn when scaling a variable resolution clip.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.

Methods:

  • calc_tilesize

    Reimplementation of vsmlrt.calc_tilesize helper function

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args

    Generate the keyword arguments used for scaling.

  • implemented_funcs

    Returns a set of function names that are implemented in the current class and the parent classes.

  • inference
  • kernel_radius

    Return the effective kernel radius for the scaler.

  • postprocess_clip
  • preprocess_clip
  • scale

    Scale the given clip using the ONNX model.

  • supersample

    Supersample a clip by a given scaling factor.

Attributes:

Source code in vsscale/onnx.py
def __init__(
    self,
    scale: Literal[1, 2, 4] = 2,
    noise: Literal[-1, 0, 1, 2, 3] = -1,
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any,
) -> None:
    """
    Initializes the scaler with the specified parameters.

    Args:
        scale: Upscaling factor. 1 = no upscaling, 2 = 2x, 4 = 4x.
        noise: Noise reduction level. -1 = none, 0 = low, 1 = medium, 2 = high, 3 = highest.
        backend: The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will
            be automatically selected, prioritizing fp16 support.
        tiles: Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the
            model's behavior may vary when they are used.
        tilesize: The size of each tile when splitting the image (if tiles are enabled).
        overlap: The size of overlap between tiles.
        max_instances: Maximum instances to spawn when scaling a variable resolution clip.
        kernel: Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
        scaler: Scaler used for scaling operations. Defaults to kernel.
        shifter: Kernel used for shifting operations. Defaults to kernel.
        **kwargs: Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
    """
    self.scale_w2x = scale
    self.noise = noise
    super().__init__(
        None,
        backend,
        tiles,
        tilesize,
        overlap,
        1,
        max_instances,
        kernel=kernel,
        scaler=scaler,
        shifter=shifter,
        **kwargs,
    )

backend instance-attribute

backend = autoselect_backend(**(default_args | kwargs))
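
Automatic selection only happens when backend is None at construction time; passing an instance pins it. A sketch, assuming a TensorRT-capable setup:

from vsmlrt import Backend
from vsscale import Waifu2x

scaler = Waifu2x.AnimeStyleArtRGB(backend=Backend.TRT(fp16=True))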

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(resolve())

multiple instance-attribute

multiple = multiple

noise instance-attribute

noise: Literal[-1, 0, 1, 2, 3] = noise

Noise reduction level

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

pretty_string property

pretty_string: str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scale_w2x instance-attribute

scale_w2x: Literal[1, 2, 4] = scale

Upscaling factor.

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]

Reimplementation of vsmlrt.calc_tilesize helper function
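
A sketch of calling it directly (the (2, 2) tiling and the clip variable are illustrative; both returned pairs should be read as (width, height) tuples):

from vsscale import Waifu2x

scaler = Waifu2x.AnimeStyleArtRGB(tiles=(2, 2))

# Computed tile size and overlap for this clip's dimensions.
tilesize, overlap = scaler.calc_tilesize(clip)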

Source code in vsscale/onnx.py
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    """
    Reimplementation of vsmlrt.calc_tilesize helper function
    """

    from vsmlrt import calc_tilesize

    kwargs = {
        "tiles": self.tiles,
        "tilesize": self.tilesize,
        "width": clip.width,
        "height": clip.height,
        "multiple": self.multiple,
        "overlap_w": self.overlap_w,
        "overlap_h": self.overlap_h,
    } | kwargs

    return calc_tilesize(**kwargs)

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.
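
A sketch of the three accepted input kinds (resolution details live in the underlying _base_ensure_obj helper):

from vsscale import ArtCNN

a = ArtCNN.ensure_obj()          # None: a default-constructed instance
b = ArtCNN.ensure_obj("ArtCNN")  # string identifier, resolved by name
c = ArtCNN.ensure_obj(ArtCNN())  # instance, returned as-is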

Source code in vskernels/abstract/base.py
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code in vskernels/abstract/base.py
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width
    (int | None, default: None ) –

    Target width.

  • height
    (int | None, default: None ) –

    Target height.

  • **kwargs
    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.
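
The merge order is the important detail here: call-time kwargs override self.kwargs, which in turn override the shift- and size-derived defaults. A sketch (clip is illustrative):

from vsscale import Waifu2x

scaler = Waifu2x.AnimeStyleArtRGB()
scaler.kwargs["src_top"] = 0.25

# "src_top" resolves to 0.5: the right-most dict in the merge wins.
args = scaler.get_scale_args(clip, shift=(0, 0), src_top=0.5)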

Source code in vskernels/abstract/base.py
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    Args:
        clip: The source clip.
        shift: Subpixel shift (top, left).
        width: Target width.
        height: Target height.
        **kwargs: Extra parameters to merge.

    Returns:
        Final dictionary of keyword arguments for the scale function.
    """
    return {"width": width, "height": height, "src_top": shift[0], "src_left": shift[1]} | self.kwargs | kwargs

implemented_funcs classmethod

implemented_funcs() -> frozenset[str]

Returns a set of function names that are implemented in the current class and the parent classes.

These functions determine which keyword arguments will be extracted from the __init__ method.

Returns:

  • frozenset[str]

    Frozen set of function names.

Source code in vskernels/abstract/base.py
@classproperty.cached
@classmethod
def implemented_funcs(cls) -> frozenset[str]:
    """
    Returns a set of function names that are implemented in the current class and the parent classes.

    These functions determine which keyword arguments will be extracted from the __init__ method.

    Returns:
        Frozen set of function names.
    """
    return frozenset(func for klass in cls.mro() for func in getattr(klass, "_implemented_funcs", ()))

inference

inference(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
def inference(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    from vsmlrt import Waifu2x as mlrt_Waifu2x
    from vsmlrt import Waifu2xModel

    return mlrt_Waifu2x(
        clip,
        self.noise,
        self.scale_w2x,
        self.tiles,
        self.tilesize,
        self.overlap,
        Waifu2xModel(self._model),
        self.backend,
        **kwargs,
    )

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Returns:

  • int

    Kernel radius.

Source code in vskernels/abstract/base.py
@BaseScalerMeta.cachedproperty
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    Raises:
        CustomNotImplementedError: If no kernel radius is defined.

    Returns:
        Kernel radius.
    """
    ...

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode
Source code in vsscale/onnx.py
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    if get_video_format(clip) != get_video_format(input_clip):
        kwargs = (
            dict[str, Any](
                format=input_clip,
                matrix=Matrix.from_video(input_clip, func=self.__class__),
                range=ColorRange.from_video(input_clip, func=self.__class__),
                dither_type=DitherType.ORDERED,
            )
            | kwargs
        )
        clip = self.kernel.resample(clip, **kwargs)

    return clip

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    clip = self.kernel.resample(clip, self._pick_precision(vs.RGBH, vs.RGBS), Matrix.RGB, **kwargs)
    return limiter(clip, func=self.__class__)

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> VideoNode

Scale the given clip using the ONNX model.

Parameters:

  • clip
    (VideoNode) –

    The input clip to be scaled.

  • width
    (int | None, default: None ) –

    The target width for scaling. If None, the width of the input clip will be used.

  • height
    (int | None, default: None ) –

    The target height for scaling. If None, the height of the input clip will be used.

  • shift
    (tuple[float, float], default: (0, 0) ) –

    A tuple representing the shift values for the x and y axes.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _finish_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method. Use the prefix inference_ to pass an argument to the inference method.

Returns:

  • VideoNode

    The scaled clip.
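
A sketch of the prefix routing (the "postprocess_" prefix is stripped and dither_type is forwarded to postprocess_clip, which passes it on to the kernel's resample call):

from vsscale import Waifu2x
from vstools import DitherType

doubled = Waifu2x.AnimeStyleArtRGB().scale(
    clip, clip.width * 2, clip.height * 2, postprocess_dither_type=DitherType.ERROR_DIFFUSION
)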

Source code in vsscale/onnx.py
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any,
) -> vs.VideoNode:
    """
    Scale the given clip using the ONNX model.

    Args:
        clip: The input clip to be scaled.
        width: The target width for scaling. If None, the width of the input clip will be used.
        height: The target height for scaling. If None, the height of the input clip will be used.
        shift: A tuple representing the shift values for the x and y axes.
        **kwargs: Additional arguments to be passed to the `preprocess_clip`, `postprocess_clip`, `inference`, and
            `_finish_scale` methods. Use the prefix `preprocess_` or `postprocess_` to pass an argument to the
            respective method. Use the prefix `inference_` to pass an argument to the inference method.

    Returns:
        The scaled clip.
    """
    from vsmlrt import Backend

    width, height = self._wh_norm(clip, width, height)

    preprocess_kwargs = dict[str, Any]()
    postprocess_kwargs = dict[str, Any]()
    inference_kwargs = dict[str, Any]()

    for k in kwargs.copy():
        for prefix, ckwargs in zip(
            ("preprocess_", "postprocess_", "inference_"), (preprocess_kwargs, postprocess_kwargs, inference_kwargs)
        ):
            if k.startswith(prefix):
                ckwargs[k.removeprefix(prefix)] = kwargs.pop(k)
                break

    debug(f"{self}: Preprocess kwargs: {preprocess_kwargs}")
    debug(f"{self}: Postprocess kwargs: {postprocess_kwargs}")
    debug(f"{self}: Inference kwargs: {inference_kwargs}")

    wclip = self.preprocess_clip(clip, **preprocess_kwargs)

    if 0 not in {clip.width, clip.height}:
        scaled = self.inference(wclip, **inference_kwargs)
    else:
        debug(f"{self}: Variable resolution clip detected!")

        if not isinstance(self.backend, (Backend.TRT, Backend.TRT_RTX)):
            raise CustomValueError(
                "Variable resolution clips can only be processed with TRT Backend!", self.__class__, self.backend
            )

        warning(f"{self.__class__.__name__}: Variable resolution clip detected!")

        if self.backend.static_shape:
            warning("static_shape is True, setting it to False...")
            self.backend.static_shape = False

        if not self.backend.max_shapes:
            warning("max_shapes is None, setting it to (1936, 1088). You may want to adjust it...")
            self.backend.max_shapes = (1936, 1088)

        if not self.backend.opt_shapes:
            warning("opt_shapes is None, setting it to (64, 64). You may want to adjust it...")
            self.backend.opt_shapes = (64, 64)

        scaled = ProcessVariableResClip.from_func(
            wclip, lambda c: self.inference(c, **inference_kwargs), False, wclip.format, self.max_instances
        )

    scaled = self.postprocess_clip(scaled, clip, **postprocess_kwargs)

    return self._finish_scale(scaled, clip, width, height, shift, **kwargs)

supersample

supersample(
    clip: VideoNode,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNode

Supersample a clip by a given scaling factor.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • rfactor
    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Returns:

  • VideoNode

    The supersampled clip.

Source code in vskernels/abstract/base.py
def supersample(
    self, clip: vs.VideoNode, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> vs.VideoNode:
    """
    Supersample a clip by a given scaling factor.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        rfactor: Scaling factor for supersampling.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Raises:
        CustomValueError: If resulting resolution is non-positive.

    Returns:
        The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)

Cunet

Cunet(
    scale: Literal[1, 2, 4] = 2,
    noise: Literal[-1, 0, 1, 2, 3] = -1,
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: _Waifu2xCunet

CUNet (Compact U-Net) model for anime art.

Example usage:

from vsscale import Waifu2x

doubled = Waifu2x.Cunet().scale(clip, clip.width * 2, clip.height * 2)

Initializes the scaler with the specified parameters.

Parameters:

  • scale

    (Literal[1, 2, 4], default: 2 ) –

    Upscaling factor. 1 = no upscaling, 2 = 2x, 4 = 4x.

  • noise

    (Literal[-1, 0, 1, 2, 3], default: -1 ) –

    Noise reduction level. -1 = none, 0 = low, 1 = medium, 2 = high, 3 = highest.

  • backend

    (BackendLike | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

    Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • max_instances

    (int, default: 2 ) –

    Maximum instances to spawn when scaling a variable resolution clip.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.

Methods:

  • calc_tilesize

    Reimplementation of vsmlrt.calc_tilesize helper function

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args

    Generate the keyword arguments used for scaling.

  • implemented_funcs

    Returns a set of function names that are implemented in the current class and the parent classes.

  • inference
  • kernel_radius

    Return the effective kernel radius for the scaler.

  • postprocess_clip
  • preprocess_clip
  • scale

    Scale the given clip using the ONNX model.

  • supersample

    Supersample a clip by a given scaling factor.

Attributes:

Source code in vsscale/onnx.py
def __init__(
    self,
    scale: Literal[1, 2, 4] = 2,
    noise: Literal[-1, 0, 1, 2, 3] = -1,
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any,
) -> None:
    """
    Initializes the scaler with the specified parameters.

    Args:
        scale: Upscaling factor. 1 = no upscaling, 2 = 2x, 4 = 4x.
        noise: Noise reduction level. -1 = none, 0 = low, 1 = medium, 2 = high, 3 = highest.
        backend: The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will
            be automatically selected, prioritizing fp16 support.
        tiles: Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the
            model's behavior may vary when they are used.
        tilesize: The size of each tile when splitting the image (if tiles are enabled).
        overlap: The size of overlap between tiles.
        max_instances: Maximum instances to spawn when scaling a variable resolution clip.
        kernel: Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
        scaler: Scaler used for scaling operations. Defaults to kernel.
        shifter: Kernel used for shifting operations. Defaults to kernel.
        **kwargs: Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
    """
    self.scale_w2x = scale
    self.noise = noise
    super().__init__(
        None,
        backend,
        tiles,
        tilesize,
        overlap,
        1,
        max_instances,
        kernel=kernel,
        scaler=scaler,
        shifter=shifter,
        **kwargs,
    )

backend instance-attribute

backend = autoselect_backend(**(default_args | kwargs))

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(resolve())

multiple instance-attribute

multiple = multiple

noise instance-attribute

noise: Literal[-1, 0, 1, 2, 3] = noise

Noise reduction level

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

pretty_string property

pretty_string: str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scale_w2x instance-attribute

scale_w2x: Literal[1, 2, 4] = scale

Upscaling factor.

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]

Reimplementation of vsmlrt.calc_tilesize helper function

Source code in vsscale/onnx.py
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    """
    Reimplementation of vsmlrt.calc_tilesize helper function
    """

    from vsmlrt import calc_tilesize

    kwargs = {
        "tiles": self.tiles,
        "tilesize": self.tilesize,
        "width": clip.width,
        "height": clip.height,
        "multiple": self.multiple,
        "overlap_w": self.overlap_w,
        "overlap_h": self.overlap_h,
    } | kwargs

    return calc_tilesize(**kwargs)

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code in vskernels/abstract/base.py
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code in vskernels/abstract/base.py
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width
    (int | None, default: None ) –

    Target width.

  • height
    (int | None, default: None ) –

    Target height.

  • **kwargs
    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.

Source code in vskernels/abstract/base.py
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    Args:
        clip: The source clip.
        shift: Subpixel shift (top, left).
        width: Target width.
        height: Target height.
        **kwargs: Extra parameters to merge.

    Returns:
        Final dictionary of keyword arguments for the scale function.
    """
    return {"width": width, "height": height, "src_top": shift[0], "src_left": shift[1]} | self.kwargs | kwargs

implemented_funcs classmethod

implemented_funcs() -> frozenset[str]

Returns a set of function names that are implemented in the current class and the parent classes.

These functions determine which keyword arguments will be extracted from the __init__ method.

Returns:

  • frozenset[str]

    Frozen set of function names.

Source code in vskernels/abstract/base.py
@classproperty.cached
@classmethod
def implemented_funcs(cls) -> frozenset[str]:
    """
    Returns a set of function names that are implemented in the current class and the parent classes.

    These functions determine which keyword arguments will be extracted from the __init__ method.

    Returns:
        Frozen set of function names.
    """
    return frozenset(func for klass in cls.mro() for func in getattr(klass, "_implemented_funcs", ()))

inference

inference(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
def inference(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    # Cunet model ruins image borders, so we need to pad it before upscale and crop it after.
    if kwargs.pop("no_pad", False):
        return super().inference(clip, **kwargs)

    with padder.ctx(16, 4) as pad:
        padded = pad.MIRROR(clip)
        scaled = super().inference(padded, **kwargs)
        cropped = pad.CROP(scaled)

    return cropped
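
The context manager wraps the whole pad-infer-crop round trip: mirrored padding keeps the model's receptive field filled with plausible image data at the borders, and the matching crop restores the expected output dimensions, so any edge artifacts land in the discarded margin.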

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Returns:

  • int

    Kernel radius.

Source code in vskernels/abstract/base.py
@BaseScalerMeta.cachedproperty
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    Raises:
        CustomNotImplementedError: If no kernel radius is defined.

    Returns:
        Kernel radius.
    """
    ...

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode
Source code in vsscale/onnx.py
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    # Cunet model also has a tint issue but it is not constant
    # It leaves flat areas alone but tints detailed areas.
    # Since most people will use Cunet to rescale details, the tint fix is enabled by default.
    if kwargs.pop("no_tint_fix", False):
        return super().postprocess_clip(clip, input_clip, **kwargs)

    tint_fix = norm_expr(
        clip,
        "x 0.5 255 / + 0 1 clamp",
        planes=0 if get_video_format(input_clip).color_family is vs.GRAY else None,
        func="Waifu2x." + self.__class__.__name__,
    )
    return super().postprocess_clip(tint_fix, input_clip, **kwargs)
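
The expression passed to norm_expr is in RPN: "x 0.5 255 / + 0 1 clamp" adds 0.5/255 (half an 8-bit step) to every sample and clamps the result to [0, 1]; the positive offset suggests the tint pushes values slightly downward in detailed areas.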

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    clip = self.kernel.resample(clip, self._pick_precision(vs.RGBH, vs.RGBS), Matrix.RGB, **kwargs)
    return limiter(clip, func=self.__class__)

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> VideoNode

Scale the given clip using the ONNX model.

Parameters:

  • clip
    (VideoNode) –

    The input clip to be scaled.

  • width
    (int | None, default: None ) –

    The target width for scaling. If None, the width of the input clip will be used.

  • height
    (int | None, default: None ) –

    The target height for scaling. If None, the height of the input clip will be used.

  • shift
    (tuple[float, float], default: (0, 0) ) –

    A tuple representing the shift values for the x and y axes.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _finish_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method. Use the prefix inference_ to pass an argument to the inference method.

    Additional notes for the Cunet model:

    • The model can cause artifacts around the image edges. To mitigate this, mirrored padding is applied to the image before inference. This behavior can be disabled by setting inference_no_pad=True.
    • A tint issue is also present, but it is not constant: it leaves flat areas alone and tints detailed areas. Since most people will use Cunet to rescale details, the tint fix is enabled by default. This behavior can be disabled with postprocess_no_tint_fix=True.

Returns:

  • VideoNode

    The scaled clip.
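
Example usage of these switches (a sketch; the defaults are usually what you want):

from vsscale import Waifu2x

# Disable both the mirrored padding and the tint fix.
doubled = Waifu2x.Cunet().scale(
    clip, clip.width * 2, clip.height * 2, inference_no_pad=True, postprocess_no_tint_fix=True
)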

Source code in vsscale/onnx.py
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any,
) -> vs.VideoNode:
    """
    Scale the given clip using the ONNX model.

    Args:
        clip: The input clip to be scaled.
        width: The target width for scaling. If None, the width of the input clip will be used.
        height: The target height for scaling. If None, the height of the input clip will be used.
        shift: A tuple representing the shift values for the x and y axes.
        **kwargs: Additional arguments to be passed to the `preprocess_clip`, `postprocess_clip`, `inference`,
            and `_finish_scale` methods. Use the prefix `preprocess_` or `postprocess_` to pass an argument to
            the respective method. Use the prefix `inference_` to pass an argument to the inference method.

            Additional notes for the Cunet model:

               - The model can cause artifacts around the image edges. To mitigate this, mirrored padding is
                 applied to the image before inference. This behavior can be disabled by setting
                 `inference_no_pad=True`.
               - A tint issue is also present but it is not constant. It leaves flat areas alone but tints
                 detailed areas. Since most people will use Cunet to rescale details, the tint fix is enabled
                 by default. This behavior can be disabled with `postprocess_no_tint_fix=True`.

    Returns:
        The scaled clip.
    """
    ...

supersample

supersample(
    clip: VideoNode,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNode

Supersample a clip by a given scaling factor.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • rfactor
    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Returns:

  • VideoNode

    The supersampled clip.

Source code in vskernels/abstract/base.py
def supersample(
    self, clip: vs.VideoNode, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> vs.VideoNode:
    """
    Supersample a clip by a given scaling factor.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        rfactor: Scaling factor for supersampling.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Raises:
        CustomValueError: If resulting resolution is non-positive.

    Returns:
        The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)

Photo

Photo(
    scale: Literal[1, 2, 4] = 2,
    noise: Literal[-1, 0, 1, 2, 3] = -1,
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: BaseWaifu2x, BaseOnnxScalerRGB

Waifu2x model trained on real-world photographic images.

Example usage:

from vsscale import Waifu2x

doubled = Waifu2x.Photo().scale(clip, clip.width * 2, clip.height * 2)

Initializes the scaler with the specified parameters.

Parameters:

  • scale

    (Literal[1, 2, 4], default: 2 ) –

    Upscaling factor. 1 = no upscaling, 2 = 2x, 4 = 4x.

  • noise

    (Literal[-1, 0, 1, 2, 3], default: -1 ) –

    Noise reduction level. -1 = none, 0 = low, 1 = medium, 2 = high, 3 = highest.

  • backend

    (BackendLike | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

    Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • max_instances

    (int, default: 2 ) –

    Maximum instances to spawn when scaling a variable resolution clip.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.

Methods:

  • calc_tilesize

    Reimplementation of vsmlrt.calc_tilesize helper function

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args

    Generate the keyword arguments used for scaling.

  • implemented_funcs

    Returns a set of function names that are implemented in the current class and the parent classes.

  • inference
  • kernel_radius

    Return the effective kernel radius for the scaler.

  • postprocess_clip
  • preprocess_clip
  • scale

    Scale the given clip using the ONNX model.

  • supersample

    Supersample a clip by a given scaling factor.

Attributes:

Source code in vsscale/onnx.py
def __init__(
    self,
    scale: Literal[1, 2, 4] = 2,
    noise: Literal[-1, 0, 1, 2, 3] = -1,
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any,
) -> None:
    """
    Initializes the scaler with the specified parameters.

    Args:
        scale: Upscaling factor. 1 = no upscaling, 2 = 2x, 4 = 4x.
        noise: Noise reduction level. -1 = none, 0 = low, 1 = medium, 2 = high, 3 = highest.
        backend: The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will
            be automatically selected, prioritizing fp16 support.
        tiles: Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the
            model's behavior may vary when they are used.
        tilesize: The size of each tile when splitting the image (if tiles are enabled).
        overlap: The size of overlap between tiles.
        max_instances: Maximum instances to spawn when scaling a variable resolution clip.
        kernel: Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
        scaler: Scaler used for scaling operations. Defaults to kernel.
        shifter: Kernel used for shifting operations. Defaults to kernel.
        **kwargs: Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
    """
    self.scale_w2x = scale
    self.noise = noise
    super().__init__(
        None,
        backend,
        tiles,
        tilesize,
        overlap,
        1,
        max_instances,
        kernel=kernel,
        scaler=scaler,
        shifter=shifter,
        **kwargs,
    )

backend instance-attribute

backend = autoselect_backend(**(default_args | kwargs))

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(resolve())

multiple instance-attribute

multiple = multiple

noise instance-attribute

noise: Literal[-1, 0, 1, 2, 3] = noise

Noise reduction level

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

pretty_string property

pretty_string: str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scale_w2x instance-attribute

scale_w2x: Literal[1, 2, 4] = scale

Upscaling factor.

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]

Reimplementation of vsmlrt.calc_tilesize helper function

Source code in vsscale/onnx.py
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    """
    Reimplementation of vsmlrt.calc_tilesize helper function
    """

    from vsmlrt import calc_tilesize

    kwargs = {
        "tiles": self.tiles,
        "tilesize": self.tilesize,
        "width": clip.width,
        "height": clip.height,
        "multiple": self.multiple,
        "overlap_w": self.overlap_w,
        "overlap_h": self.overlap_h,
    } | kwargs

    return calc_tilesize(**kwargs)

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code in vskernels/abstract/base.py
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code in vskernels/abstract/base.py
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width
    (int | None, default: None ) –

    Target width.

  • height
    (int | None, default: None ) –

    Target height.

  • **kwargs
    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.

Source code in vskernels/abstract/base.py
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    Args:
        clip: The source clip.
        shift: Subpixel shift (top, left).
        width: Target width.
        height: Target height.
        **kwargs: Extra parameters to merge.

    Returns:
        Final dictionary of keyword arguments for the scale function.
    """
    return {"width": width, "height": height, "src_top": shift[0], "src_left": shift[1]} | self.kwargs | kwargs

implemented_funcs classmethod

implemented_funcs() -> frozenset[str]

Returns a set of function names that are implemented in the current class and the parent classes.

These functions determine which keyword arguments will be extracted from the __init__ method.

Returns:

  • frozenset[str]

    Frozen set of function names.

Source code in vskernels/abstract/base.py
@classproperty.cached
@classmethod
def implemented_funcs(cls) -> frozenset[str]:
    """
    Returns a set of function names that are implemented in the current class and the parent classes.

    These functions determine which keyword arguments will be extracted from the __init__ method.

    Returns:
        Frozen set of function names.
    """
    return frozenset(func for klass in cls.mro() for func in getattr(klass, "_implemented_funcs", ()))

inference

inference(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
def inference(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    from vsmlrt import Waifu2x as mlrt_Waifu2x
    from vsmlrt import Waifu2xModel

    return mlrt_Waifu2x(
        clip,
        self.noise,
        self.scale_w2x,
        self.tiles,
        self.tilesize,
        self.overlap,
        Waifu2xModel(self._model),
        self.backend,
        **kwargs,
    )

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Returns:

  • int

    Kernel radius.

Source code in vskernels/abstract/base.py
@BaseScalerMeta.cachedproperty
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    Raises:
        CustomNotImplementedError: If no kernel radius is defined.

    Returns:
        Kernel radius.
    """
    ...

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode
Source code in vsscale/onnx.py
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    if get_video_format(clip) != get_video_format(input_clip):
        kwargs = (
            dict[str, Any](
                format=input_clip,
                matrix=Matrix.from_video(input_clip, func=self.__class__),
                range=ColorRange.from_video(input_clip, func=self.__class__),
                dither_type=DitherType.ORDERED,
            )
            | kwargs
        )
        clip = self.kernel.resample(clip, **kwargs)

    return clip

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    clip = self.kernel.resample(clip, self._pick_precision(vs.RGBH, vs.RGBS), Matrix.RGB, **kwargs)
    return limiter(clip, func=self.__class__)

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> VideoNode

Scale the given clip using the ONNX model.

Parameters:

  • clip
    (VideoNode) –

    The input clip to be scaled.

  • width
    (int | None, default: None ) –

    The target width for scaling. If None, the width of the input clip will be used.

  • height
    (int | None, default: None ) –

    The target height for scaling. If None, the height of the input clip will be used.

  • shift
    (tuple[float, float], default: (0, 0) ) –

    A tuple representing the shift values for the x and y axes.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _finish_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method. Use the prefix inference_ to pass an argument to the inference method.

Returns:

  • VideoNode

    The scaled clip.

Source code in vsscale/onnx.py
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any,
) -> vs.VideoNode:
    """
    Scale the given clip using the ONNX model.

    Args:
        clip: The input clip to be scaled.
        width: The target width for scaling. If None, the width of the input clip will be used.
        height: The target height for scaling. If None, the height of the input clip will be used.
        shift: A tuple representing the shift values for the x and y axes.
        **kwargs: Additional arguments to be passed to the `preprocess_clip`, `postprocess_clip`, `inference`, and
            `_finish_scale` methods. Use the prefix `preprocess_` or `postprocess_` to pass an argument to the
            respective method. Use the prefix `inference_` to pass an argument to the inference method.

    Returns:
        The scaled clip.
    """
    from vsmlrt import Backend

    width, height = self._wh_norm(clip, width, height)

    preprocess_kwargs = dict[str, Any]()
    postprocess_kwargs = dict[str, Any]()
    inference_kwargs = dict[str, Any]()

    for k in kwargs.copy():
        for prefix, ckwargs in zip(
            ("preprocess_", "postprocess_", "inference_"), (preprocess_kwargs, postprocess_kwargs, inference_kwargs)
        ):
            if k.startswith(prefix):
                ckwargs[k.removeprefix(prefix)] = kwargs.pop(k)
                break

    debug(f"{self}: Preprocess kwargs: {preprocess_kwargs}")
    debug(f"{self}: Postprocess kwargs: {postprocess_kwargs}")
    debug(f"{self}: Inference kwargs: {inference_kwargs}")

    wclip = self.preprocess_clip(clip, **preprocess_kwargs)

    if 0 not in {clip.width, clip.height}:
        scaled = self.inference(wclip, **inference_kwargs)
    else:
        debug(f"{self}: Variable resolution clip detected!")

        if not isinstance(self.backend, (Backend.TRT, Backend.TRT_RTX)):
            raise CustomValueError(
                "Variable resolution clips can only be processed with TRT Backend!", self.__class__, self.backend
            )

        warning(f"{self.__class__.__name__}: Variable resolution clip detected!")

        if self.backend.static_shape:
            warning("static_shape is True, setting it to False...")
            self.backend.static_shape = False

        if not self.backend.max_shapes:
            warning("max_shapes is None, setting it to (1936, 1088). You may want to adjust it...")
            self.backend.max_shapes = (1936, 1088)

        if not self.backend.opt_shapes:
            warning("opt_shapes is None, setting it to (64, 64). You may want to adjust it...")
            self.backend.opt_shapes = (64, 64)

        scaled = ProcessVariableResClip.from_func(
            wclip, lambda c: self.inference(c, **inference_kwargs), False, wclip.format, self.max_instances
        )

    scaled = self.postprocess_clip(scaled, clip, **postprocess_kwargs)

    return self._finish_scale(scaled, clip, width, height, shift, **kwargs)
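
The prefix routing above means a single scale call can pass arguments to each stage independently; the prefix is stripped before forwarding. A short sketch (the override shown is illustrative, and the DitherType import location is assumed):

from vsscale import Waifu2x
from vstools import DitherType  # assumed import location

scaler = Waifu2x.SwinUnetArt()
out = scaler.scale(
    clip,
    clip.width * 2,
    clip.height * 2,
    # forwarded as postprocess_clip(..., dither_type=...)
    postprocess_dither_type=DitherType.ERROR_DIFFUSION,
)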

supersample

supersample(
    clip: VideoNode,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNode

Supersample a clip by a given scaling factor.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • rfactor
    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Returns:

  • VideoNode

    The supersampled clip.

Source code in vskernels/abstract/base.py
def supersample(
    self, clip: vs.VideoNode, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> vs.VideoNode:
    """
    Supersample a clip by a given scaling factor.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        rfactor: Scaling factor for supersampling.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Raises:
        CustomValueError: If resulting resolution is non-positive.

    Returns:
        The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)
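
Since supersample only derives ceil(width * rfactor) and ceil(height * rfactor) before deferring to scale, a plain 2x supersample is equivalent to calling scale with doubled dimensions:

from vsscale import Waifu2x

doubled = Waifu2x.SwinUnetArt().supersample(clip, 2)
# equivalent to: Waifu2x.SwinUnetArt().scale(clip, clip.width * 2, clip.height * 2)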

SwinUnetArt

SwinUnetArt(
    scale: Literal[1, 2, 4] = 2,
    noise: Literal[-1, 0, 1, 2, 3] = -1,
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: BaseWaifu2x, BaseOnnxScalerRGB

Swin-Unet-based model trained on anime-style images.

Example usage:

from vsscale import Waifu2x

doubled = Waifu2x.SwinUnetArt().scale(clip, clip.width * 2, clip.height * 2)

Initializes the scaler with the specified parameters.

Parameters:

  • scale

    (Literal[1, 2, 4], default: 2 ) –

    Upscaling factor. 1 = no upscaling, 2 = 2x, 4 = 4x.

  • noise

    (Literal[-1, 0, 1, 2, 3], default: -1 ) –

    Noise reduction level. -1 = none, 0 = low, 1 = medium, 2 = high, 3 = highest.

  • backend

    (BackendLike | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

    Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when tiling is used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • max_instances

    (int, default: 2 ) –

    Maximum instances to spawn when scaling a variable resolution clip.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.

Methods:

  • calc_tilesize

    Reimplementation of vsmlrt.calc_tilesize helper function

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args

    Generate the keyword arguments used for scaling.

  • implemented_funcs

    Returns a set of function names that are implemented in the current class and the parent classes.

  • inference
  • kernel_radius

    Return the effective kernel radius for the scaler.

  • postprocess_clip
  • preprocess_clip
  • scale

    Scale the given clip using the ONNX model.

  • supersample

    Supersample a clip by a given scaling factor.

Attributes:

Source code in vsscale/onnx.py
def __init__(
    self,
    scale: Literal[1, 2, 4] = 2,
    noise: Literal[-1, 0, 1, 2, 3] = -1,
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any,
) -> None:
    """
    Initializes the scaler with the specified parameters.

    Args:
        scale: Upscaling factor. 1 = no upscaling, 2 = 2x, 4 = 4x.
        noise: Noise reduction level. -1 = none, 0 = low, 1 = medium, 2 = high, 3 = highest.
        backend: The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will
            be automatically selected, prioritizing fp16 support.
        tiles: Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the
            model's behavior may vary when they are used.
        tilesize: The size of each tile when splitting the image (if tiles are enabled).
        overlap: The size of overlap between tiles.
        max_instances: Maximum instances to spawn when scaling a variable resolution clip.
        kernel: Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
        scaler: Scaler used for scaling operations. Defaults to kernel.
        shifter: Kernel used for shifting operations. Defaults to kernel.
        **kwargs: Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
    """
    self.scale_w2x = scale
    self.noise = noise
    super().__init__(
        None,
        backend,
        tiles,
        tilesize,
        overlap,
        1,
        max_instances,
        kernel=kernel,
        scaler=scaler,
        shifter=shifter,
        **kwargs,
    )

backend instance-attribute

backend = autoselect_backend(**(default_args | kwargs))

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(resolve())

multiple instance-attribute

multiple = multiple

noise instance-attribute

noise: Literal[-1, 0, 1, 2, 3] = noise

Noise reduction level

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

pretty_string property

pretty_string: str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scale_w2x instance-attribute

scale_w2x: Literal[1, 2, 4] = scale

Upscaling factor.

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]

Reimplementation of the vsmlrt.calc_tilesize helper function.

Source code in vsscale/onnx.py
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    """
    Reimplementation of vsmlrt.calc_tilesize helper function
    """

    from vsmlrt import calc_tilesize

    kwargs = {
        "tiles": self.tiles,
        "tilesize": self.tilesize,
        "width": clip.width,
        "height": clip.height,
        "multiple": self.multiple,
        "overlap_w": self.overlap_w,
        "overlap_h": self.overlap_h,
    } | kwargs

    return calc_tilesize(**kwargs)
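
The return value mirrors vsmlrt.calc_tilesize: a pair of (tile size, overlap) tuples, which makes it easy to inspect what a given tiles/tilesize configuration resolves to before running inference (the order of the two tuples is assumed from vsmlrt's convention):

scaler = Waifu2x.SwinUnetArt(tiles=2)
(tile_w, tile_h), (overlap_w, overlap_h) = scaler.calc_tilesize(clip)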

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code in vskernels/abstract/base.py
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)
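
In practice this lets an API accept a scaler in any of the supported forms; both of the following resolve to an equivalent instance (passing None is assumed to fall back to the class itself):

from vsscale import Waifu2x

a = Waifu2x.SwinUnetArt.ensure_obj(Waifu2x.SwinUnetArt)    # from a type
b = Waifu2x.SwinUnetArt.ensure_obj(Waifu2x.SwinUnetArt())  # from an instance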

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code in vskernels/abstract/base.py
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width
    (int | None, default: None ) –

    Target width.

  • height
    (int | None, default: None ) –

    Target height.

  • **kwargs
    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.

Source code in vskernels/abstract/base.py
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    Args:
        clip: The source clip.
        shift: Subpixel shift (top, left).
        width: Target width.
        height: Target height.
        **kwargs: Extra parameters to merge.

    Returns:
        Final dictionary of keyword arguments for the scale function.
    """
    return {"width": width, "height": height, "src_top": shift[0], "src_left": shift[1]} | self.kwargs | kwargs
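
The merge order above — positional defaults first, then the init-time self.kwargs, then the call-time kwargs — means later sources win on key collisions. Illustratively:

args = scaler.get_scale_args(clip, (0.5, 0), 1920, 1080, src_top=1.0)
# args["src_top"] == 1.0: the explicit keyword overrides shift[0] (0.5)
# and any src_top stored in self.kwargs at construction time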

implemented_funcs classmethod

implemented_funcs() -> frozenset[str]

Returns a set of function names that are implemented in the current class and the parent classes.

These functions determine which keyword arguments will be extracted from the __init__ method.

Returns:

  • frozenset[str]

    Frozen set of function names.

Source code in vskernels/abstract/base.py
@classproperty.cached
@classmethod
def implemented_funcs(cls) -> frozenset[str]:
    """
    Returns a set of function names that are implemented in the current class and the parent classes.

    These functions determine which keyword arguments will be extracted from the __init__ method.

    Returns:
        Frozen set of function names.
    """
    return frozenset(func for klass in cls.mro() for func in getattr(klass, "_implemented_funcs", ()))

inference

inference(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
def inference(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    from vsmlrt import Waifu2x as mlrt_Waifu2x
    from vsmlrt import Waifu2xModel

    return mlrt_Waifu2x(
        clip,
        self.noise,
        self.scale_w2x,
        self.tiles,
        self.tilesize,
        self.overlap,
        Waifu2xModel(self._model),
        self.backend,
        **kwargs,
    )

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Returns:

  • int

    Kernel radius.

Source code in vskernels/abstract/base.py
@BaseScalerMeta.cachedproperty
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    Raises:
        CustomNotImplementedError: If no kernel radius is defined.

    Returns:
        Kernel radius.
    """
    ...

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode
Source code in vsscale/onnx.py
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    if get_video_format(clip) != get_video_format(input_clip):
        kwargs = (
            dict[str, Any](
                format=input_clip,
                matrix=Matrix.from_video(input_clip, func=self.__class__),
                range=ColorRange.from_video(input_clip, func=self.__class__),
                dither_type=DitherType.ORDERED,
            )
            | kwargs
        )
        clip = self.kernel.resample(clip, **kwargs)

    return clip

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    clip = self.kernel.resample(clip, self._pick_precision(vs.RGBH, vs.RGBS), Matrix.RGB, **kwargs)
    return limiter(clip, func=self.__class__)

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> VideoNode

Scale the given clip using the ONNX model.

Parameters:

  • clip
    (VideoNode) –

    The input clip to be scaled.

  • width
    (int | None, default: None ) –

    The target width for scaling. If None, the width of the input clip will be used.

  • height
    (int | None, default: None ) –

    The target height for scaling. If None, the height of the input clip will be used.

  • shift
    (tuple[float, float], default: (0, 0) ) –

    A tuple representing the shift values for the x and y axes.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _finish_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method. Use the prefix inference_ to pass an argument to the inference method.

Returns:

  • VideoNode

    The scaled clip.

Source code in vsscale/onnx.py
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any,
) -> vs.VideoNode:
    """
    Scale the given clip using the ONNX model.

    Args:
        clip: The input clip to be scaled.
        width: The target width for scaling. If None, the width of the input clip will be used.
        height: The target height for scaling. If None, the height of the input clip will be used.
        shift: A tuple representing the shift values for the x and y axes.
        **kwargs: Additional arguments to be passed to the `preprocess_clip`, `postprocess_clip`, `inference`, and
            `_finish_scale` methods. Use the prefix `preprocess_` or `postprocess_` to pass an argument to the
            respective method. Use the prefix `inference_` to pass an argument to the inference method.

    Returns:
        The scaled clip.
    """
    from vsmlrt import Backend

    width, height = self._wh_norm(clip, width, height)

    preprocess_kwargs = dict[str, Any]()
    postprocess_kwargs = dict[str, Any]()
    inference_kwargs = dict[str, Any]()

    for k in kwargs.copy():
        for prefix, ckwargs in zip(
            ("preprocess_", "postprocess_", "inference_"), (preprocess_kwargs, postprocess_kwargs, inference_kwargs)
        ):
            if k.startswith(prefix):
                ckwargs[k.removeprefix(prefix)] = kwargs.pop(k)
                break

    debug(f"{self}: Preprocess kwargs: {preprocess_kwargs}")
    debug(f"{self}: Postprocess kwargs: {postprocess_kwargs}")
    debug(f"{self}: Inference kwargs: {inference_kwargs}")

    wclip = self.preprocess_clip(clip, **preprocess_kwargs)

    if 0 not in {clip.width, clip.height}:
        scaled = self.inference(wclip, **inference_kwargs)
    else:
        debug(f"{self}: Variable resolution clip detected!")

        if not isinstance(self.backend, (Backend.TRT, Backend.TRT_RTX)):
            raise CustomValueError(
                "Variable resolution clips can only be processed with TRT Backend!", self.__class__, self.backend
            )

        warning(f"{self.__class__.__name__}: Variable resolution clip detected!")

        if self.backend.static_shape:
            warning("static_shape is True, setting it to False...")
            self.backend.static_shape = False

        if not self.backend.max_shapes:
            warning("max_shapes is None, setting it to (1936, 1088). You may want to adjust it...")
            self.backend.max_shapes = (1936, 1088)

        if not self.backend.opt_shapes:
            warning("opt_shapes is None, setting it to (64, 64). You may want to adjust it...")
            self.backend.opt_shapes = (64, 64)

        scaled = ProcessVariableResClip.from_func(
            wclip, lambda c: self.inference(c, **inference_kwargs), False, wclip.format, self.max_instances
        )

    scaled = self.postprocess_clip(scaled, clip, **postprocess_kwargs)

    return self._finish_scale(scaled, clip, width, height, shift, **kwargs)
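
As the variable-resolution branch above shows, only the TRT backends are accepted there, and static_shape, max_shapes and opt_shapes are patched with warnings when left unset. Those warnings can be avoided by configuring the backend up front; a sketch assuming vsmlrt's Backend.TRT exposes these as constructor fields:

from vsmlrt import Backend
from vsscale import Waifu2x

backend = Backend.TRT(static_shape=False, max_shapes=(2048, 2048), opt_shapes=(1920, 1080))
scaled = Waifu2x.SwinUnetArt(backend=backend).scale(variable_res_clip)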

supersample

supersample(
    clip: VideoNode,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNode

Supersample a clip by a given scaling factor.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • rfactor
    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Returns:

  • VideoNode

    The supersampled clip.

Source code in vskernels/abstract/base.py
def supersample(
    self, clip: vs.VideoNode, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> vs.VideoNode:
    """
    Supersample a clip by a given scaling factor.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        rfactor: Scaling factor for supersampling.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Raises:
        CustomValueError: If resulting resolution is non-positive.

    Returns:
        The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)

SwinUnetArtScan

SwinUnetArtScan(
    scale: Literal[1, 2, 4] = 2,
    noise: Literal[-1, 0, 1, 2, 3] = -1,
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: BaseWaifu2x, BaseOnnxScalerRGB

Swin-Unet model trained on anime scans.

Example usage:

from vsscale import Waifu2x

doubled = Waifu2x.SwinUnetArtScan().scale(clip, clip.width * 2, clip.height * 2)

Initializes the scaler with the specified parameters.

Parameters:

  • scale

    (Literal[1, 2, 4], default: 2 ) –

    Upscaling factor. 1 = no upscaling, 2 = 2x, 4 = 4x.

  • noise

    (Literal[-1, 0, 1, 2, 3], default: -1 ) –

    Noise reduction level. -1 = none, 0 = low, 1 = medium, 2 = high, 3 = highest.

  • backend

    (BackendLike | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

    Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when tiling is used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • max_instances

    (int, default: 2 ) –

    Maximum instances to spawn when scaling a variable resolution clip.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.

Methods:

  • calc_tilesize

    Reimplementation of vsmlrt.calc_tilesize helper function

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args

    Generate the keyword arguments used for scaling.

  • implemented_funcs

    Returns a set of function names that are implemented in the current class and the parent classes.

  • inference
  • kernel_radius

    Return the effective kernel radius for the scaler.

  • postprocess_clip
  • preprocess_clip
  • scale

    Scale the given clip using the ONNX model.

  • supersample

    Supersample a clip by a given scaling factor.

Attributes:

Source code in vsscale/onnx.py
def __init__(
    self,
    scale: Literal[1, 2, 4] = 2,
    noise: Literal[-1, 0, 1, 2, 3] = -1,
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any,
) -> None:
    """
    Initializes the scaler with the specified parameters.

    Args:
        scale: Upscaling factor. 1 = no upscaling, 2 = 2x, 4 = 4x.
        noise: Noise reduction level. -1 = none, 0 = low, 1 = medium, 2 = high, 3 = highest.
        backend: The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will
            be automatically selected, prioritizing fp16 support.
        tiles: Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the
            model's behavior may vary when they are used.
        tilesize: The size of each tile when splitting the image (if tiles are enabled).
        overlap: The size of overlap between tiles.
        max_instances: Maximum instances to spawn when scaling a variable resolution clip.
        kernel: Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
        scaler: Scaler used for scaling operations. Defaults to kernel.
        shifter: Kernel used for shifting operations. Defaults to kernel.
        **kwargs: Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
    """
    self.scale_w2x = scale
    self.noise = noise
    super().__init__(
        None,
        backend,
        tiles,
        tilesize,
        overlap,
        1,
        max_instances,
        kernel=kernel,
        scaler=scaler,
        shifter=shifter,
        **kwargs,
    )

backend instance-attribute

backend = autoselect_backend(**(default_args | kwargs))

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(resolve())

multiple instance-attribute

multiple = multiple

noise instance-attribute

noise: Literal[-1, 0, 1, 2, 3] = noise

Noise reduction level

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

pretty_string property

pretty_string: str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scale_w2x instance-attribute

scale_w2x: Literal[1, 2, 4] = scale

Upscaling factor.

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]

Reimplementation of the vsmlrt.calc_tilesize helper function.

Source code in vsscale/onnx.py
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    """
    Reimplementation of vsmlrt.calc_tilesize helper function
    """

    from vsmlrt import calc_tilesize

    kwargs = {
        "tiles": self.tiles,
        "tilesize": self.tilesize,
        "width": clip.width,
        "height": clip.height,
        "multiple": self.multiple,
        "overlap_w": self.overlap_w,
        "overlap_h": self.overlap_h,
    } | kwargs

    return calc_tilesize(**kwargs)

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code in vskernels/abstract/base.py
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code in vskernels/abstract/base.py
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width
    (int | None, default: None ) –

    Target width.

  • height
    (int | None, default: None ) –

    Target height.

  • **kwargs
    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.

Source code in vskernels/abstract/base.py
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    Args:
        clip: The source clip.
        shift: Subpixel shift (top, left).
        width: Target width.
        height: Target height.
        **kwargs: Extra parameters to merge.

    Returns:
        Final dictionary of keyword arguments for the scale function.
    """
    return {"width": width, "height": height, "src_top": shift[0], "src_left": shift[1]} | self.kwargs | kwargs

implemented_funcs classmethod

implemented_funcs() -> frozenset[str]

Returns a set of function names that are implemented in the current class and the parent classes.

These functions determine which keyword arguments will be extracted from the __init__ method.

Returns:

  • frozenset[str]

    Frozen set of function names.

Source code in vskernels/abstract/base.py
@classproperty.cached
@classmethod
def implemented_funcs(cls) -> frozenset[str]:
    """
    Returns a set of function names that are implemented in the current class and the parent classes.

    These functions determine which keyword arguments will be extracted from the __init__ method.

    Returns:
        Frozen set of function names.
    """
    return frozenset(func for klass in cls.mro() for func in getattr(klass, "_implemented_funcs", ()))

inference

inference(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
def inference(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    from vsmlrt import Waifu2x as mlrt_Waifu2x
    from vsmlrt import Waifu2xModel

    return mlrt_Waifu2x(
        clip,
        self.noise,
        self.scale_w2x,
        self.tiles,
        self.tilesize,
        self.overlap,
        Waifu2xModel(self._model),
        self.backend,
        **kwargs,
    )

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Returns:

  • int

    Kernel radius.

Source code in vskernels/abstract/base.py
@BaseScalerMeta.cachedproperty
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    Raises:
        CustomNotImplementedError: If no kernel radius is defined.

    Returns:
        Kernel radius.
    """
    ...

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode
Source code in vsscale/onnx.py
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    if get_video_format(clip) != get_video_format(input_clip):
        kwargs = (
            dict[str, Any](
                format=input_clip,
                matrix=Matrix.from_video(input_clip, func=self.__class__),
                range=ColorRange.from_video(input_clip, func=self.__class__),
                dither_type=DitherType.ORDERED,
            )
            | kwargs
        )
        clip = self.kernel.resample(clip, **kwargs)

    return clip

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    clip = self.kernel.resample(clip, self._pick_precision(vs.RGBH, vs.RGBS), Matrix.RGB, **kwargs)
    return limiter(clip, func=self.__class__)

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> VideoNode

Scale the given clip using the ONNX model.

Parameters:

  • clip
    (VideoNode) –

    The input clip to be scaled.

  • width
    (int | None, default: None ) –

    The target width for scaling. If None, the width of the input clip will be used.

  • height
    (int | None, default: None ) –

    The target height for scaling. If None, the height of the input clip will be used.

  • shift
    (tuple[float, float], default: (0, 0) ) –

    A tuple representing the shift values for the x and y axes.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _finish_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method. Use the prefix inference_ to pass an argument to the inference method.

Returns:

  • VideoNode

    The scaled clip.

Source code in vsscale/onnx.py
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any,
) -> vs.VideoNode:
    """
    Scale the given clip using the ONNX model.

    Args:
        clip: The input clip to be scaled.
        width: The target width for scaling. If None, the width of the input clip will be used.
        height: The target height for scaling. If None, the height of the input clip will be used.
        shift: A tuple representing the shift values for the x and y axes.
        **kwargs: Additional arguments to be passed to the `preprocess_clip`, `postprocess_clip`, `inference`, and
            `_finish_scale` methods. Use the prefix `preprocess_` or `postprocess_` to pass an argument to the
            respective method. Use the prefix `inference_` to pass an argument to the inference method.

    Returns:
        The scaled clip.
    """
    from vsmlrt import Backend

    width, height = self._wh_norm(clip, width, height)

    preprocess_kwargs = dict[str, Any]()
    postprocess_kwargs = dict[str, Any]()
    inference_kwargs = dict[str, Any]()

    for k in kwargs.copy():
        for prefix, ckwargs in zip(
            ("preprocess_", "postprocess_", "inference_"), (preprocess_kwargs, postprocess_kwargs, inference_kwargs)
        ):
            if k.startswith(prefix):
                ckwargs[k.removeprefix(prefix)] = kwargs.pop(k)
                break

    debug(f"{self}: Preprocess kwargs: {preprocess_kwargs}")
    debug(f"{self}: Postprocess kwargs: {postprocess_kwargs}")
    debug(f"{self}: Inference kwargs: {inference_kwargs}")

    wclip = self.preprocess_clip(clip, **preprocess_kwargs)

    if 0 not in {clip.width, clip.height}:
        scaled = self.inference(wclip, **inference_kwargs)
    else:
        debug(f"{self}: Variable resolution clip detected!")

        if not isinstance(self.backend, (Backend.TRT, Backend.TRT_RTX)):
            raise CustomValueError(
                "Variable resolution clips can only be processed with TRT Backend!", self.__class__, self.backend
            )

        warning(f"{self.__class__.__name__}: Variable resolution clip detected!")

        if self.backend.static_shape:
            warning("static_shape is True, setting it to False...")
            self.backend.static_shape = False

        if not self.backend.max_shapes:
            warning("max_shapes is None, setting it to (1936, 1088). You may want to adjust it...")
            self.backend.max_shapes = (1936, 1088)

        if not self.backend.opt_shapes:
            warning("opt_shapes is None, setting it to (64, 64). You may want to adjust it...")
            self.backend.opt_shapes = (64, 64)

        scaled = ProcessVariableResClip.from_func(
            wclip, lambda c: self.inference(c, **inference_kwargs), False, wclip.format, self.max_instances
        )

    scaled = self.postprocess_clip(scaled, clip, **postprocess_kwargs)

    return self._finish_scale(scaled, clip, width, height, shift, **kwargs)

supersample

supersample(
    clip: VideoNode,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNode

Supersample a clip by a given scaling factor.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • rfactor
    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Returns:

  • VideoNode

    The supersampled clip.

Source code in vskernels/abstract/base.py
def supersample(
    self, clip: vs.VideoNode, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> vs.VideoNode:
    """
    Supersample a clip by a given scaling factor.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        rfactor: Scaling factor for supersampling.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Raises:
        CustomValueError: If resulting resolution is non-positive.

    Returns:
        The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)

SwinUnetPhoto

SwinUnetPhoto(
    scale: Literal[1, 2, 4] = 2,
    noise: Literal[-1, 0, 1, 2, 3] = -1,
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: BaseWaifu2x, BaseOnnxScalerRGB

Swin-Unet model trained on photographic content.

Example usage:

from vsscale import Waifu2x

doubled = Waifu2x.SwinUnetPhoto().scale(clip, clip.width * 2, clip.height * 2)

Initializes the scaler with the specified parameters.

Parameters:

  • scale

    (Literal[1, 2, 4], default: 2 ) –

    Upscaling factor. 1 = no upscaling, 2 = 2x, 4 = 4x.

  • noise

    (Literal[-1, 0, 1, 2, 3], default: -1 ) –

    Noise reduction level. -1 = none, 0 = low, 1 = medium, 2 = high, 3 = highest.

  • backend

    (BackendLike | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

    Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when tiling is used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • max_instances

    (int, default: 2 ) –

    Maximum instances to spawn when scaling a variable resolution clip.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.

Methods:

  • calc_tilesize

    Reimplementation of vsmlrt.calc_tilesize helper function

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args

    Generate the keyword arguments used for scaling.

  • implemented_funcs

    Returns a set of function names that are implemented in the current class and the parent classes.

  • inference
  • kernel_radius

    Return the effective kernel radius for the scaler.

  • postprocess_clip
  • preprocess_clip
  • scale

    Scale the given clip using the ONNX model.

  • supersample

    Supersample a clip by a given scaling factor.

Attributes:

Source code in vsscale/onnx.py
def __init__(
    self,
    scale: Literal[1, 2, 4] = 2,
    noise: Literal[-1, 0, 1, 2, 3] = -1,
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any,
) -> None:
    """
    Initializes the scaler with the specified parameters.

    Args:
        scale: Upscaling factor. 1 = no upscaling, 2 = 2x, 4 = 4x.
        noise: Noise reduction level. -1 = none, 0 = low, 1 = medium, 2 = high, 3 = highest.
        backend: The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will
            be automatically selected, prioritizing fp16 support.
        tiles: Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the
            model's behavior may vary when they are used.
        tilesize: The size of each tile when splitting the image (if tiles are enabled).
        overlap: The size of overlap between tiles.
        max_instances: Maximum instances to spawn when scaling a variable resolution clip.
        kernel: Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
        scaler: Scaler used for scaling operations. Defaults to kernel.
        shifter: Kernel used for shifting operations. Defaults to kernel.
        **kwargs: Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
    """
    self.scale_w2x = scale
    self.noise = noise
    super().__init__(
        None,
        backend,
        tiles,
        tilesize,
        overlap,
        1,
        max_instances,
        kernel=kernel,
        scaler=scaler,
        shifter=shifter,
        **kwargs,
    )

backend instance-attribute

backend = autoselect_backend(**(default_args | kwargs))

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(resolve())

multiple instance-attribute

multiple = multiple

noise instance-attribute

noise: Literal[-1, 0, 1, 2, 3] = noise

Noise reduction level

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

pretty_string property

pretty_string: str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scale_w2x instance-attribute

scale_w2x: Literal[1, 2, 4] = scale

Upscaling factor.

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]

Reimplementation of the vsmlrt.calc_tilesize helper function.

Source code in vsscale/onnx.py
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    """
    Reimplementation of vsmlrt.calc_tilesize helper function
    """

    from vsmlrt import calc_tilesize

    kwargs = {
        "tiles": self.tiles,
        "tilesize": self.tilesize,
        "width": clip.width,
        "height": clip.height,
        "multiple": self.multiple,
        "overlap_w": self.overlap_w,
        "overlap_h": self.overlap_h,
    } | kwargs

    return calc_tilesize(**kwargs)

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code in vskernels/abstract/base.py
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code in vskernels/abstract/base.py
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width
    (int | None, default: None ) –

    Target width.

  • height
    (int | None, default: None ) –

    Target height.

  • **kwargs
    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.

Source code in vskernels/abstract/base.py
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    Args:
        clip: The source clip.
        shift: Subpixel shift (top, left).
        width: Target width.
        height: Target height.
        **kwargs: Extra parameters to merge.

    Returns:
        Final dictionary of keyword arguments for the scale function.
    """
    return {"width": width, "height": height, "src_top": shift[0], "src_left": shift[1]} | self.kwargs | kwargs
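
Because the dictionaries merge left to right, call-time kwargs take precedence over stored defaults. A quick sketch (clip assumed to exist):

from vsscale import Waifu2x

args = Waifu2x().get_scale_args(clip, (0, 0), 1920, 1080, dither_type="none")
# {"width": 1920, "height": 1080, "src_top": 0, "src_left": 0, "dither_type": "none"}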

implemented_funcs classmethod

implemented_funcs() -> frozenset[str]

Returns a set of function names that are implemented in the current class and the parent classes.

These functions determine which keyword arguments will be extracted from the __init__ method.

Returns:

  • frozenset[str]

    Frozen set of function names.

Source code in vskernels/abstract/base.py
@classproperty.cached
@classmethod
def implemented_funcs(cls) -> frozenset[str]:
    """
    Returns a set of function names that are implemented in the current class and the parent classes.

    These functions determine which keyword arguments will be extracted from the __init__ method.

    Returns:
        Frozen set of function names.
    """
    return frozenset(func for klass in cls.mro() for func in getattr(klass, "_implemented_funcs", ()))

inference

inference(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
def inference(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    from vsmlrt import Waifu2x as mlrt_Waifu2x
    from vsmlrt import Waifu2xModel

    return mlrt_Waifu2x(
        clip,
        self.noise,
        self.scale_w2x,
        self.tiles,
        self.tilesize,
        self.overlap,
        Waifu2xModel(self._model),
        self.backend,
        **kwargs,
    )
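
Note that inference expects an already-preprocessed RGB clip; calling it directly skips the format handling that scale performs. A sketch:

from vsscale import Waifu2x

scaler = Waifu2x()
rgb = scaler.preprocess_clip(clip)  # resampled to RGBH/RGBS and range-limited
raw = scaler.inference(rgb)         # raw model output, still RGB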

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Returns:

  • int

    Kernel radius.

Source code in vskernels/abstract/base.py
@BaseScalerMeta.cachedproperty
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    Raises:
        CustomNotImplementedError: If no kernel radius is defined.

    Returns:
        Kernel radius.
    """
    ...

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode
Source code in vsscale/onnx.py
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    if get_video_format(clip) != get_video_format(input_clip):
        kwargs = (
            dict[str, Any](
                format=input_clip,
                matrix=Matrix.from_video(input_clip, func=self.__class__),
                range=ColorRange.from_video(input_clip, func=self.__class__),
                dither_type=DitherType.ORDERED,
            )
            | kwargs
        )
        clip = self.kernel.resample(clip, **kwargs)

    return clip

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    clip = self.kernel.resample(clip, self._pick_precision(vs.RGBH, vs.RGBS), Matrix.RGB, **kwargs)
    return limiter(clip, func=self.__class__)

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> VideoNode

Scale the given clip using the ONNX model.

Parameters:

  • clip
    (VideoNode) –

    The input clip to be scaled.

  • width
    (int | None, default: None ) –

    The target width for scaling. If None, the width of the input clip will be used.

  • height
    (int | None, default: None ) –

    The target height for scaling. If None, the height of the input clip will be used.

  • shift
    (tuple[float, float], default: (0, 0) ) –

    A tuple representing the shift values for the x and y axes.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _finish_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method. Use the prefix inference_ to pass an argument to the inference method.

Returns:

  • VideoNode

    The scaled clip.
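
For example, a prefixed argument is stripped of its prefix and routed to the matching method (a sketch; dither_type ends up in the resample call inside preprocess_clip):

from vsscale import Waifu2x

scaled = Waifu2x().scale(
    clip,
    clip.width * 2,
    clip.height * 2,
    preprocess_dither_type="error_diffusion",  # arrives in preprocess_clip() as dither_type
)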

Source code in vsscale/onnx.py
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any,
) -> vs.VideoNode:
    """
    Scale the given clip using the ONNX model.

    Args:
        clip: The input clip to be scaled.
        width: The target width for scaling. If None, the width of the input clip will be used.
        height: The target height for scaling. If None, the height of the input clip will be used.
        shift: A tuple representing the shift values for the x and y axes.
        **kwargs: Additional arguments to be passed to the `preprocess_clip`, `postprocess_clip`, `inference`, and
            `_finish_scale` methods. Use the prefix `preprocess_` or `postprocess_` to pass an argument to the
            respective method. Use the prefix `inference_` to pass an argument to the inference method.

    Returns:
        The scaled clip.
    """
    from vsmlrt import Backend

    width, height = self._wh_norm(clip, width, height)

    preprocess_kwargs = dict[str, Any]()
    postprocess_kwargs = dict[str, Any]()
    inference_kwargs = dict[str, Any]()

    for k in kwargs.copy():
        for prefix, ckwargs in zip(
            ("preprocess_", "postprocess_", "inference_"), (preprocess_kwargs, postprocess_kwargs, inference_kwargs)
        ):
            if k.startswith(prefix):
                ckwargs[k.removeprefix(prefix)] = kwargs.pop(k)
                break

    debug(f"{self}: Preprocess kwargs: {preprocess_kwargs}")
    debug(f"{self}: Postprocess kwargs: {postprocess_kwargs}")
    debug(f"{self}: Inference kwargs: {inference_kwargs}")

    wclip = self.preprocess_clip(clip, **preprocess_kwargs)

    if 0 not in {clip.width, clip.height}:
        scaled = self.inference(wclip, **inference_kwargs)
    else:
        debug(f"{self}: Variable resolution clip detected!")

        if not isinstance(self.backend, (Backend.TRT, Backend.TRT_RTX)):
            raise CustomValueError(
                "Variable resolution clips can only be processed with TRT Backend!", self.__class__, self.backend
            )

        warning(f"{self.__class__.__name__}: Variable resolution clip detected!")

        if self.backend.static_shape:
            warning("static_shape is True, setting it to False...")
            self.backend.static_shape = False

        if not self.backend.max_shapes:
            warning("max_shapes is None, setting it to (1936, 1088). You may want to adjust it...")
            self.backend.max_shapes = (1936, 1088)

        if not self.backend.opt_shapes:
            warning("opt_shapes is None, setting it to (64, 64). You may want to adjust it...")
            self.backend.opt_shapes = (64, 64)

        scaled = ProcessVariableResClip.from_func(
            wclip, lambda c: self.inference(c, **inference_kwargs), False, wclip.format, self.max_instances
        )

    scaled = self.postprocess_clip(scaled, clip, **postprocess_kwargs)

    return self._finish_scale(scaled, clip, width, height, shift, **kwargs)
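
A variable resolution workflow therefore needs a TRT backend with dynamic shapes configured up front; a minimal sketch (the shape values mirror the defaults above and should be tuned):

from vsmlrt import Backend
from vsscale import Waifu2x

backend = Backend.TRT(static_shape=False, opt_shapes=(64, 64), max_shapes=(1936, 1088))
scaled = Waifu2x(backend=backend, max_instances=2).scale(vres_clip)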

supersample

supersample(
    clip: VideoNode,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNode

Supersample a clip by a given scaling factor.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • rfactor
    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Returns:

  • VideoNode

    The supersampled clip.

Source code in vskernels/abstract/base.py
def supersample(
    self, clip: vs.VideoNode, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> vs.VideoNode:
    """
    Supersample a clip by a given scaling factor.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        rfactor: Scaling factor for supersampling.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Raises:
        CustomValueError: If resulting resolution is non-positive.

    Returns:
        The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)
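
In practice this is shorthand for a relative upscale; a sketch:

from vsscale import Waifu2x

doubled = Waifu2x().supersample(clip, 2)  # equivalent to scale(clip, clip.width * 2, clip.height * 2)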

SwinUnetPhotoV2

SwinUnetPhotoV2(
    scale: Literal[1, 2, 4] = 2,
    noise: Literal[-1, 0, 1, 2, 3] = -1,
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: BaseWaifu2x, BaseOnnxScalerRGB

Improved Swin-Unet model for photos (v2).

Example usage:

from vsscale import Waifu2x

doubled = Waifu2x.SwinUnetPhotoV2().scale(clip, clip.width * 2, clip.height * 2)

Initializes the scaler with the specified parameters.

Parameters:

  • scale

    (Literal[1, 2, 4], default: 2 ) –

    Upscaling factor. 1 = no upscaling, 2 = 2x, 4 = 4x.

  • noise

    (Literal[-1, 0, 1, 2, 3], default: -1 ) –

    Noise reduction level. -1 = none, 0 = low, 1 = medium, 2 = high, 3 = highest.

  • backend

    (BackendLike | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

    Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • max_instances

    (int, default: 2 ) –

    Maximum instances to spawn when scaling a variable resolution clip.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.

Methods:

  • calc_tilesize

    Reimplementation of vsmlrt.calc_tilesize helper function

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args

    Generate the keyword arguments used for scaling.

  • implemented_funcs

    Returns a set of function names that are implemented in the current class and the parent classes.

  • inference
  • kernel_radius

    Return the effective kernel radius for the scaler.

  • postprocess_clip
  • preprocess_clip
  • scale

    Scale the given clip using the ONNX model.

  • supersample

    Supersample a clip by a given scaling factor.

Attributes:

Source code in vsscale/onnx.py
def __init__(
    self,
    scale: Literal[1, 2, 4] = 2,
    noise: Literal[-1, 0, 1, 2, 3] = -1,
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any,
) -> None:
    """
    Initializes the scaler with the specified parameters.

    Args:
        scale: Upscaling factor. 1 = no upscaling, 2 = 2x, 4 = 4x.
        noise: Noise reduction level. -1 = none, 0 = low, 1 = medium, 2 = high, 3 = highest.
        backend: The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will
            be automatically selected, prioritizing fp16 support.
        tiles: Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the
            model's behavior may vary when they are used.
        tilesize: The size of each tile when splitting the image (if tiles are enabled).
        overlap: The size of overlap between tiles.
        max_instances: Maximum instances to spawn when scaling a variable resolution clip.
        kernel: Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
        scaler: Scaler used for scaling operations. Defaults to kernel.
        shifter: Kernel used for shifting operations. Defaults to kernel.
        **kwargs: Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
    """
    self.scale_w2x = scale
    self.noise = noise
    super().__init__(
        None,
        backend,
        tiles,
        tilesize,
        overlap,
        1,
        max_instances,
        kernel=kernel,
        scaler=scaler,
        shifter=shifter,
        **kwargs,
    )
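
Since scale and noise are independent, a denoise-only pass is also possible (a sketch; clip assumed to exist):

from vsscale import Waifu2x

denoised = Waifu2x.SwinUnetPhotoV2(scale=1, noise=2).scale(clip)  # keeps the input resolution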

backend instance-attribute

backend = autoselect_backend(**(default_args | kwargs))

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(resolve())

multiple instance-attribute

multiple = multiple

noise instance-attribute

noise: Literal[-1, 0, 1, 2, 3] = noise

Noise reduction level.

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

pretty_string property

pretty_string: str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scale_w2x instance-attribute

scale_w2x: Literal[1, 2, 4] = scale

Upscaling factor.

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]

Reimplementation of vsmlrt.calc_tilesize helper function

Source code in vsscale/onnx.py
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    """
    Reimplementation of vsmlrt.calc_tilesize helper function
    """

    from vsmlrt import calc_tilesize

    kwargs = {
        "tiles": self.tiles,
        "tilesize": self.tilesize,
        "width": clip.width,
        "height": clip.height,
        "multiple": self.multiple,
        "overlap_w": self.overlap_w,
        "overlap_h": self.overlap_h,
    } | kwargs

    return calc_tilesize(**kwargs)

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code in vskernels/abstract/base.py
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code in vskernels/abstract/base.py
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width
    (int | None, default: None ) –

    Target width.

  • height
    (int | None, default: None ) –

    Target height.

  • **kwargs
    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.

Source code in vskernels/abstract/base.py
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    Args:
        clip: The source clip.
        shift: Subpixel shift (top, left).
        width: Target width.
        height: Target height.
        **kwargs: Extra parameters to merge.

    Returns:
        Final dictionary of keyword arguments for the scale function.
    """
    return {"width": width, "height": height, "src_top": shift[0], "src_left": shift[1]} | self.kwargs | kwargs

implemented_funcs classmethod

implemented_funcs() -> frozenset[str]

Returns a set of function names that are implemented in the current class and the parent classes.

These functions determine which keyword arguments will be extracted from the __init__ method.

Returns:

  • frozenset[str]

    Frozen set of function names.

Source code in vskernels/abstract/base.py
@classproperty.cached
@classmethod
def implemented_funcs(cls) -> frozenset[str]:
    """
    Returns a set of function names that are implemented in the current class and the parent classes.

    These functions determine which keyword arguments will be extracted from the __init__ method.

    Returns:
        Frozen set of function names.
    """
    return frozenset(func for klass in cls.mro() for func in getattr(klass, "_implemented_funcs", ()))

inference

inference(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
def inference(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    from vsmlrt import Waifu2x as mlrt_Waifu2x
    from vsmlrt import Waifu2xModel

    return mlrt_Waifu2x(
        clip,
        self.noise,
        self.scale_w2x,
        self.tiles,
        self.tilesize,
        self.overlap,
        Waifu2xModel(self._model),
        self.backend,
        **kwargs,
    )

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Returns:

  • int

    Kernel radius.

Source code in vskernels/abstract/base.py
@BaseScalerMeta.cachedproperty
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    Raises:
        CustomNotImplementedError: If no kernel radius is defined.

    Returns:
        Kernel radius.
    """
    ...

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode
Source code in vsscale/onnx.py
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    if get_video_format(clip) != get_video_format(input_clip):
        kwargs = (
            dict[str, Any](
                format=input_clip,
                matrix=Matrix.from_video(input_clip, func=self.__class__),
                range=ColorRange.from_video(input_clip, func=self.__class__),
                dither_type=DitherType.ORDERED,
            )
            | kwargs
        )
        clip = self.kernel.resample(clip, **kwargs)

    return clip

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    clip = self.kernel.resample(clip, self._pick_precision(vs.RGBH, vs.RGBS), Matrix.RGB, **kwargs)
    return limiter(clip, func=self.__class__)

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> VideoNode

Scale the given clip using the ONNX model.

Parameters:

  • clip
    (VideoNode) –

    The input clip to be scaled.

  • width
    (int | None, default: None ) –

    The target width for scaling. If None, the width of the input clip will be used.

  • height
    (int | None, default: None ) –

    The target height for scaling. If None, the height of the input clip will be used.

  • shift
    (tuple[float, float], default: (0, 0) ) –

    A tuple representing the shift values for the x and y axes.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _finish_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method. Use the prefix inference_ to pass an argument to the inference method.

Returns:

  • VideoNode

    The scaled clip.

Source code in vsscale/onnx.py
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any,
) -> vs.VideoNode:
    """
    Scale the given clip using the ONNX model.

    Args:
        clip: The input clip to be scaled.
        width: The target width for scaling. If None, the width of the input clip will be used.
        height: The target height for scaling. If None, the height of the input clip will be used.
        shift: A tuple representing the shift values for the x and y axes.
        **kwargs: Additional arguments to be passed to the `preprocess_clip`, `postprocess_clip`, `inference`, and
            `_finish_scale` methods. Use the prefix `preprocess_` or `postprocess_` to pass an argument to the
            respective method. Use the prefix `inference_` to pass an argument to the inference method.

    Returns:
        The scaled clip.
    """
    from vsmlrt import Backend

    width, height = self._wh_norm(clip, width, height)

    preprocess_kwargs = dict[str, Any]()
    postprocess_kwargs = dict[str, Any]()
    inference_kwargs = dict[str, Any]()

    for k in kwargs.copy():
        for prefix, ckwargs in zip(
            ("preprocess_", "postprocess_", "inference_"), (preprocess_kwargs, postprocess_kwargs, inference_kwargs)
        ):
            if k.startswith(prefix):
                ckwargs[k.removeprefix(prefix)] = kwargs.pop(k)
                break

    debug(f"{self}: Preprocess kwargs: {preprocess_kwargs}")
    debug(f"{self}: Postprocess kwargs: {postprocess_kwargs}")
    debug(f"{self}: Inference kwargs: {inference_kwargs}")

    wclip = self.preprocess_clip(clip, **preprocess_kwargs)

    if 0 not in {clip.width, clip.height}:
        scaled = self.inference(wclip, **inference_kwargs)
    else:
        debug(f"{self}: Variable resolution clip detected!")

        if not isinstance(self.backend, (Backend.TRT, Backend.TRT_RTX)):
            raise CustomValueError(
                "Variable resolution clips can only be processed with TRT Backend!", self.__class__, self.backend
            )

        warning(f"{self.__class__.__name__}: Variable resolution clip detected!")

        if self.backend.static_shape:
            warning("static_shape is True, setting it to False...")
            self.backend.static_shape = False

        if not self.backend.max_shapes:
            warning("max_shapes is None, setting it to (1936, 1088). You may want to adjust it...")
            self.backend.max_shapes = (1936, 1088)

        if not self.backend.opt_shapes:
            warning("opt_shapes is None, setting it to (64, 64). You may want to adjust it...")
            self.backend.opt_shapes = (64, 64)

        scaled = ProcessVariableResClip.from_func(
            wclip, lambda c: self.inference(c, **inference_kwargs), False, wclip.format, self.max_instances
        )

    scaled = self.postprocess_clip(scaled, clip, **postprocess_kwargs)

    return self._finish_scale(scaled, clip, width, height, shift, **kwargs)

supersample

supersample(
    clip: VideoNode,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNode

Supersample a clip by a given scaling factor.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • rfactor
    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Returns:

  • VideoNode

    The supersampled clip.

Source code in vskernels/abstract/base.py
def supersample(
    self, clip: vs.VideoNode, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> vs.VideoNode:
    """
    Supersample a clip by a given scaling factor.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        rfactor: Scaling factor for supersampling.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Raises:
        CustomValueError: If resulting resolution is non-positive.

    Returns:
        The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)

UpConv7AnimeStyleArt

UpConv7AnimeStyleArt(
    scale: Literal[1, 2, 4] = 2,
    noise: Literal[-1, 0, 1, 2, 3] = -1,
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: BaseWaifu2x, BaseOnnxScalerRGB

UpConv7 model variant optimized for anime-style images.

Example usage:

from vsscale import Waifu2x

doubled = Waifu2x.UpConv7AnimeStyleArt().scale(clip, clip.width * 2, clip.height * 2)

Initializes the scaler with the specified parameters.

Parameters:

  • scale

    (Literal[1, 2, 4], default: 2 ) –

    Upscaling factor. 1 = no upscaling, 2 = 2x, 4 = 4x.

  • noise

    (Literal[-1, 0, 1, 2, 3], default: -1 ) –

    Noise reduction level. -1 = none, 0 = low, 1 = medium, 2 = high, 3 = highest.

  • backend

    (BackendLike | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

    Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • max_instances

    (int, default: 2 ) –

    Maximum instances to spawn when scaling a variable resolution clip.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.

Methods:

  • calc_tilesize

    Reimplementation of vsmlrt.calc_tilesize helper function

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args

    Generate the keyword arguments used for scaling.

  • implemented_funcs

    Returns a set of function names that are implemented in the current class and the parent classes.

  • inference
  • kernel_radius

    Return the effective kernel radius for the scaler.

  • postprocess_clip
  • preprocess_clip
  • scale

    Scale the given clip using the ONNX model.

  • supersample

    Supersample a clip by a given scaling factor.

Attributes:

Source code in vsscale/onnx.py
def __init__(
    self,
    scale: Literal[1, 2, 4] = 2,
    noise: Literal[-1, 0, 1, 2, 3] = -1,
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any,
) -> None:
    """
    Initializes the scaler with the specified parameters.

    Args:
        scale: Upscaling factor. 1 = no upscaling, 2 = 2x, 4 = 4x.
        noise: Noise reduction level. -1 = none, 0 = low, 1 = medium, 2 = high, 3 = highest.
        backend: The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will
            be automatically selected, prioritizing fp16 support.
        tiles: Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the
            model's behavior may vary when they are used.
        tilesize: The size of each tile when splitting the image (if tiles are enabled).
        overlap: The size of overlap between tiles.
        max_instances: Maximum instances to spawn when scaling a variable resolution clip.
        kernel: Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
        scaler: Scaler used for scaling operations. Defaults to kernel.
        shifter: Kernel used for shifting operations. Defaults to kernel.
        **kwargs: Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
    """
    self.scale_w2x = scale
    self.noise = noise
    super().__init__(
        None,
        backend,
        tiles,
        tilesize,
        overlap,
        1,
        max_instances,
        kernel=kernel,
        scaler=scaler,
        shifter=shifter,
        **kwargs,
    )
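
Tiling combines with the other options to bound VRAM use (a sketch; clip assumed to exist):

from vsscale import Waifu2x

doubled = Waifu2x.UpConv7AnimeStyleArt(noise=1, tiles=2).scale(clip, clip.width * 2, clip.height * 2)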

backend instance-attribute

backend = autoselect_backend(**(default_args | kwargs))

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(resolve())

multiple instance-attribute

multiple = multiple

noise instance-attribute

noise: Literal[-1, 0, 1, 2, 3] = noise

Noise reduction level.

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

pretty_string property

pretty_string: str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scale_w2x instance-attribute

scale_w2x: Literal[1, 2, 4] = scale

Upscaling factor.

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]

Reimplementation of vsmlrt.calc_tilesize helper function

Source code in vsscale/onnx.py
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    """
    Reimplementation of vsmlrt.calc_tilesize helper function
    """

    from vsmlrt import calc_tilesize

    kwargs = {
        "tiles": self.tiles,
        "tilesize": self.tilesize,
        "width": clip.width,
        "height": clip.height,
        "multiple": self.multiple,
        "overlap_w": self.overlap_w,
        "overlap_h": self.overlap_h,
    } | kwargs

    return calc_tilesize(**kwargs)

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code in vskernels/abstract/base.py
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code in vskernels/abstract/base.py
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width
    (int | None, default: None ) –

    Target width.

  • height
    (int | None, default: None ) –

    Target height.

  • **kwargs
    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.

Source code in vskernels/abstract/base.py
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    Args:
        clip: The source clip.
        shift: Subpixel shift (top, left).
        width: Target width.
        height: Target height.
        **kwargs: Extra parameters to merge.

    Returns:
        Final dictionary of keyword arguments for the scale function.
    """
    return {"width": width, "height": height, "src_top": shift[0], "src_left": shift[1]} | self.kwargs | kwargs

implemented_funcs classmethod

implemented_funcs() -> frozenset[str]

Returns a set of function names that are implemented in the current class and the parent classes.

These functions determine which keyword arguments will be extracted from the __init__ method.

Returns:

  • frozenset[str]

    Frozen set of function names.

Source code in vskernels/abstract/base.py
@classproperty.cached
@classmethod
def implemented_funcs(cls) -> frozenset[str]:
    """
    Returns a set of function names that are implemented in the current class and the parent classes.

    These functions determine which keyword arguments will be extracted from the __init__ method.

    Returns:
        Frozen set of function names.
    """
    return frozenset(func for klass in cls.mro() for func in getattr(klass, "_implemented_funcs", ()))

inference

inference(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
def inference(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    from vsmlrt import Waifu2x as mlrt_Waifu2x
    from vsmlrt import Waifu2xModel

    return mlrt_Waifu2x(
        clip,
        self.noise,
        self.scale_w2x,
        self.tiles,
        self.tilesize,
        self.overlap,
        Waifu2xModel(self._model),
        self.backend,
        **kwargs,
    )

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Returns:

  • int

    Kernel radius.

Source code in vskernels/abstract/base.py
@BaseScalerMeta.cachedproperty
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    Raises:
        CustomNotImplementedError: If no kernel radius is defined.

    Returns:
        Kernel radius.
    """
    ...

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode
Source code in vsscale/onnx.py
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    if get_video_format(clip) != get_video_format(input_clip):
        kwargs = (
            dict[str, Any](
                format=input_clip,
                matrix=Matrix.from_video(input_clip, func=self.__class__),
                range=ColorRange.from_video(input_clip, func=self.__class__),
                dither_type=DitherType.ORDERED,
            )
            | kwargs
        )
        clip = self.kernel.resample(clip, **kwargs)

    return clip

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    clip = self.kernel.resample(clip, self._pick_precision(vs.RGBH, vs.RGBS), Matrix.RGB, **kwargs)
    return limiter(clip, func=self.__class__)

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> VideoNode

Scale the given clip using the ONNX model.

Parameters:

  • clip
    (VideoNode) –

    The input clip to be scaled.

  • width
    (int | None, default: None ) –

    The target width for scaling. If None, the width of the input clip will be used.

  • height
    (int | None, default: None ) –

    The target height for scaling. If None, the height of the input clip will be used.

  • shift
    (tuple[float, float], default: (0, 0) ) –

    A tuple representing the shift values for the x and y axes.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _finish_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method. Use the prefix inference_ to pass an argument to the inference method.

Returns:

  • VideoNode

    The scaled clip.

Source code in vsscale/onnx.py
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any,
) -> vs.VideoNode:
    """
    Scale the given clip using the ONNX model.

    Args:
        clip: The input clip to be scaled.
        width: The target width for scaling. If None, the width of the input clip will be used.
        height: The target height for scaling. If None, the height of the input clip will be used.
        shift: A tuple representing the shift values for the x and y axes.
        **kwargs: Additional arguments to be passed to the `preprocess_clip`, `postprocess_clip`, `inference`, and
            `_finish_scale` methods. Use the prefix `preprocess_` or `postprocess_` to pass an argument to the
            respective method. Use the prefix `inference_` to pass an argument to the inference method.

    Returns:
        The scaled clip.
    """
    from vsmlrt import Backend

    width, height = self._wh_norm(clip, width, height)

    preprocess_kwargs = dict[str, Any]()
    postprocess_kwargs = dict[str, Any]()
    inference_kwargs = dict[str, Any]()

    for k in kwargs.copy():
        for prefix, ckwargs in zip(
            ("preprocess_", "postprocess_", "inference_"), (preprocess_kwargs, postprocess_kwargs, inference_kwargs)
        ):
            if k.startswith(prefix):
                ckwargs[k.removeprefix(prefix)] = kwargs.pop(k)
                break

    debug(f"{self}: Preprocess kwargs: {preprocess_kwargs}")
    debug(f"{self}: Postprocess kwargs: {postprocess_kwargs}")
    debug(f"{self}: Inference kwargs: {inference_kwargs}")

    wclip = self.preprocess_clip(clip, **preprocess_kwargs)

    if 0 not in {clip.width, clip.height}:
        scaled = self.inference(wclip, **inference_kwargs)
    else:
        debug(f"{self}: Variable resolution clip detected!")

        if not isinstance(self.backend, (Backend.TRT, Backend.TRT_RTX)):
            raise CustomValueError(
                "Variable resolution clips can only be processed with TRT Backend!", self.__class__, self.backend
            )

        warning(f"{self.__class__.__name__}: Variable resolution clip detected!")

        if self.backend.static_shape:
            warning("static_shape is True, setting it to False...")
            self.backend.static_shape = False

        if not self.backend.max_shapes:
            warning("max_shapes is None, setting it to (1936, 1088). You may want to adjust it...")
            self.backend.max_shapes = (1936, 1088)

        if not self.backend.opt_shapes:
            warning("opt_shapes is None, setting it to (64, 64). You may want to adjust it...")
            self.backend.opt_shapes = (64, 64)

        scaled = ProcessVariableResClip.from_func(
            wclip, lambda c: self.inference(c, **inference_kwargs), False, wclip.format, self.max_instances
        )

    scaled = self.postprocess_clip(scaled, clip, **postprocess_kwargs)

    return self._finish_scale(scaled, clip, width, height, shift, **kwargs)

supersample

supersample(
    clip: VideoNode,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNode

Supersample a clip by a given scaling factor.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • rfactor
    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Returns:

  • VideoNode

    The supersampled clip.

Source code in vskernels/abstract/base.py
def supersample(
    self, clip: vs.VideoNode, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> vs.VideoNode:
    """
    Supersample a clip by a given scaling factor.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        rfactor: Scaling factor for supersampling.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Raises:
        CustomValueError: If resulting resolution is non-positive.

    Returns:
        The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)

UpConv7Photo

UpConv7Photo(
    scale: Literal[1, 2, 4] = 2,
    noise: Literal[-1, 0, 1, 2, 3] = -1,
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: BaseWaifu2x, BaseOnnxScalerRGB

UpConv7 model variant optimized for photographic images.

Example usage:

from vsscale import Waifu2x

doubled = Waifu2x.UpConv7Photo().scale(clip, clip.width * 2, clip.height * 2)

Initializes the scaler with the specified parameters.

Parameters:

  • scale

    (Literal[1, 2, 4], default: 2 ) –

    Upscaling factor. 1 = no upscaling, 2 = 2x, 4 = 4x.

  • noise

    (Literal[-1, 0, 1, 2, 3], default: -1 ) –

    Noise reduction level. -1 = none, 0 = low, 1 = medium, 2 = high, 3 = highest.

  • backend

    (BackendLike | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

    Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • max_instances

    (int, default: 2 ) –

    Maximum instances to spawn when scaling a variable resolution clip.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.

Methods:

  • calc_tilesize

    Reimplementation of vsmlrt.calc_tilesize helper function

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args

    Generate the keyword arguments used for scaling.

  • implemented_funcs

    Returns a set of function names that are implemented in the current class and the parent classes.

  • inference
  • kernel_radius

    Return the effective kernel radius for the scaler.

  • postprocess_clip
  • preprocess_clip
  • scale

    Scale the given clip using the ONNX model.

  • supersample

    Supersample a clip by a given scaling factor.

Attributes:

Source code in vsscale/onnx.py
def __init__(
    self,
    scale: Literal[1, 2, 4] = 2,
    noise: Literal[-1, 0, 1, 2, 3] = -1,
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any,
) -> None:
    """
    Initializes the scaler with the specified parameters.

    Args:
        scale: Upscaling factor. 1 = no upscaling, 2 = 2x, 4 = 4x.
        noise: Noise reduction level. -1 = none, 0 = low, 1 = medium, 2 = high, 3 = highest.
        backend: The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will
            be automatically selected, prioritizing fp16 support.
        tiles: Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the
            model's behavior may vary when they are used.
        tilesize: The size of each tile when splitting the image (if tiles are enabled).
        overlap: The size of overlap between tiles.
        max_instances: Maximum instances to spawn when scaling a variable resolution clip.
        kernel: Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
        scaler: Scaler used for scaling operations. Defaults to kernel.
        shifter: Kernel used for shifting operations. Defaults to kernel.
        **kwargs: Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
    """
    self.scale_w2x = scale
    self.noise = noise
    super().__init__(
        None,
        backend,
        tiles,
        tilesize,
        overlap,
        1,
        max_instances,
        kernel=kernel,
        scaler=scaler,
        shifter=shifter,
        **kwargs,
    )

backend instance-attribute

backend = autoselect_backend(**(default_args | kwargs))

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(resolve())

multiple instance-attribute

multiple = multiple

noise instance-attribute

noise: Literal[-1, 0, 1, 2, 3] = noise

Noise reduction level.

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

pretty_string property

pretty_string: str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scale_w2x instance-attribute

scale_w2x: Literal[1, 2, 4] = scale

Upscaling factor.

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]

Reimplementation of vsmlrt.calc_tilesize helper function

Source code in vsscale/onnx.py
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    """
    Reimplementation of vsmlrt.calc_tilesize helper function
    """

    from vsmlrt import calc_tilesize

    kwargs = {
        "tiles": self.tiles,
        "tilesize": self.tilesize,
        "width": clip.width,
        "height": clip.height,
        "multiple": self.multiple,
        "overlap_w": self.overlap_w,
        "overlap_h": self.overlap_h,
    } | kwargs

    return calc_tilesize(**kwargs)
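
Tiling is driven by the tiles, tilesize and overlap values given at construction time; a sketch of a 2x2 split to reduce peak VRAM usage (the exact tile dimensions are computed by vsmlrt.calc_tilesize):

from vsscale import Waifu2x

# Split each frame into a 2x2 tile grid; neighbouring tiles share the default 8-pixel overlap.
scaler = Waifu2x.UpConv7Photo(tiles=2)
doubled = scaler.scale(clip, clip.width * 2, clip.height * 2)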

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code in vskernels/abstract/base.py
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code in vskernels/abstract/base.py
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)
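
Both helpers accept a string, a class, or an instance; a minimal sketch, assuming the Catrom kernel registered by vskernels:

from vskernels import Scaler

CatromT = Scaler.from_param("Catrom")  # resolve the type from its registered name
catrom = Scaler.ensure_obj("Catrom")   # or build a ready-to-use instance directly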

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width
    (int | None, default: None ) –

    Target width.

  • height
    (int | None, default: None ) –

    Target height.

  • **kwargs
    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.

Source code in vskernels/abstract/base.py
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    Args:
        clip: The source clip.
        shift: Subpixel shift (top, left).
        width: Target width.
        height: Target height.
        **kwargs: Extra parameters to merge.

    Returns:
        Final dictionary of keyword arguments for the scale function.
    """
    return {"width": width, "height": height, "src_top": shift[0], "src_left": shift[1]} | self.kwargs | kwargs

implemented_funcs classmethod

implemented_funcs() -> frozenset[str]

Returns a set of function names that are implemented in the current class and the parent classes.

These functions determine which keyword arguments will be extracted from the init method.

Returns:

  • frozenset[str]

    Frozen set of function names.

Source code in vskernels/abstract/base.py
@classproperty.cached
@classmethod
def implemented_funcs(cls) -> frozenset[str]:
    """
    Returns a set of function names that are implemented in the current class and the parent classes.

    These functions determine which keyword arguments will be extracted from the __init__ method.

    Returns:
        Frozen set of function names.
    """
    return frozenset(func for klass in cls.mro() for func in getattr(klass, "_implemented_funcs", ()))

inference

inference(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
def inference(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    from vsmlrt import Waifu2x as mlrt_Waifu2x
    from vsmlrt import Waifu2xModel

    return mlrt_Waifu2x(
        clip,
        self.noise,
        self.scale_w2x,
        self.tiles,
        self.tilesize,
        self.overlap,
        Waifu2xModel(self._model),
        self.backend,
        **kwargs,
    )

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Returns:

  • int

    Kernel radius.

Source code in vskernels/abstract/base.py
@BaseScalerMeta.cachedproperty
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    Raises:
        CustomNotImplementedError: If no kernel radius is defined.

    Returns:
        Kernel radius.
    """
    ...

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode
Source code in vsscale/onnx.py
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    if get_video_format(clip) != get_video_format(input_clip):
        kwargs = (
            dict[str, Any](
                format=input_clip,
                matrix=Matrix.from_video(input_clip, func=self.__class__),
                range=ColorRange.from_video(input_clip, func=self.__class__),
                dither_type=DitherType.ORDERED,
            )
            | kwargs
        )
        clip = self.kernel.resample(clip, **kwargs)

    return clip

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    clip = self.kernel.resample(clip, self._pick_precision(vs.RGBH, vs.RGBS), Matrix.RGB, **kwargs)
    return limiter(clip, func=self.__class__)

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> VideoNode

Scale the given clip using the ONNX model.

Parameters:

  • clip
    (VideoNode) –

    The input clip to be scaled.

  • width
    (int | None, default: None ) –

    The target width for scaling. If None, the width of the input clip will be used.

  • height
    (int | None, default: None ) –

    The target height for scaling. If None, the height of the input clip will be used.

  • shift
    (tuple[float, float], default: (0, 0) ) –

    A tuple representing the shift values for the x and y axes.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _finish_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method. Use the prefix inference_ to pass an argument to the inference method.

Returns:

  • VideoNode

    The scaled clip.
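
For example, a dithering override can be routed to postprocess_clip by prefixing it; a sketch, assuming vstools' DitherType (unprefixed extras fall through to _finish_scale):

from vstools import DitherType
from vsscale import Waifu2x

doubled = Waifu2x.UpConv7Photo().scale(
    clip,
    clip.width * 2,
    clip.height * 2,
    # "postprocess_" is stripped; dither_type lands in postprocess_clip()
    postprocess_dither_type=DitherType.ERROR_DIFFUSION,
)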

Source code in vsscale/onnx.py
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any,
) -> vs.VideoNode:
    """
    Scale the given clip using the ONNX model.

    Args:
        clip: The input clip to be scaled.
        width: The target width for scaling. If None, the width of the input clip will be used.
        height: The target height for scaling. If None, the height of the input clip will be used.
        shift: A tuple representing the shift values for the x and y axes.
        **kwargs: Additional arguments to be passed to the `preprocess_clip`, `postprocess_clip`, `inference`, and
            `_finish_scale` methods. Use the prefix `preprocess_` or `postprocess_` to pass an argument to the
            respective method. Use the prefix `inference_` to pass an argument to the inference method.

    Returns:
        The scaled clip.
    """
    from vsmlrt import Backend

    width, height = self._wh_norm(clip, width, height)

    preprocess_kwargs = dict[str, Any]()
    postprocess_kwargs = dict[str, Any]()
    inference_kwargs = dict[str, Any]()

    for k in kwargs.copy():
        for prefix, ckwargs in zip(
            ("preprocess_", "postprocess_", "inference_"), (preprocess_kwargs, postprocess_kwargs, inference_kwargs)
        ):
            if k.startswith(prefix):
                ckwargs[k.removeprefix(prefix)] = kwargs.pop(k)
                break

    debug(f"{self}: Preprocess kwargs: {preprocess_kwargs}")
    debug(f"{self}: Postprocess kwargs: {postprocess_kwargs}")
    debug(f"{self}: Inference kwargs: {inference_kwargs}")

    wclip = self.preprocess_clip(clip, **preprocess_kwargs)

    if 0 not in {clip.width, clip.height}:
        scaled = self.inference(wclip, **inference_kwargs)
    else:
        debug(f"{self}: Variable resolution clip detected!")

        if not isinstance(self.backend, (Backend.TRT, Backend.TRT_RTX)):
            raise CustomValueError(
                "Variable resolution clips can only be processed with TRT Backend!", self.__class__, self.backend
            )

        warning(f"{self.__class__.__name__}: Variable resolution clip detected!")

        if self.backend.static_shape:
            warning("static_shape is True, setting it to False...")
            self.backend.static_shape = False

        if not self.backend.max_shapes:
            warning("max_shapes is None, setting it to (1936, 1088). You may want to adjust it...")
            self.backend.max_shapes = (1936, 1088)

        if not self.backend.opt_shapes:
            warning("opt_shapes is None, setting it to (64, 64). You may want to adjust it...")
            self.backend.opt_shapes = (64, 64)

        scaled = ProcessVariableResClip.from_func(
            wclip, lambda c: self.inference(c, **inference_kwargs), False, wclip.format, self.max_instances
        )

    scaled = self.postprocess_clip(scaled, clip, **postprocess_kwargs)

    return self._finish_scale(scaled, clip, width, height, shift, **kwargs)
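
For variable resolution input it can be preferable to configure a dynamic-shape TRT backend up front instead of relying on the fallback warnings above; a sketch (the shape values are placeholders to adjust to your source):

from vsmlrt import Backend
from vsscale import Waifu2x

backend = Backend.TRT(static_shape=False, max_shapes=(1936, 1088), opt_shapes=(1920, 1080))

doubled = Waifu2x.UpConv7Photo(backend=backend).scale(clip)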

supersample

supersample(
    clip: VideoNode,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNode

Supersample a clip by a given scaling factor.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • rfactor
    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Returns:

  • VideoNode

    The supersampled clip.

Source code in vskernels/abstract/base.py
def supersample(
    self, clip: vs.VideoNode, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> vs.VideoNode:
    """
    Supersample a clip by a given scaling factor.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        rfactor: Scaling factor for supersampling.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Raises:
        CustomValueError: If resulting resolution is non-positive.

    Returns:
        The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)
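
supersample is a thin convenience over scale: for any scaler instance from this module, the two calls below are equivalent (up to the ceil rounding of non-integral factors):

from math import ceil

ss = scaler.supersample(clip, 1.5)
# ...is the same as:
ss = scaler.scale(clip, ceil(clip.width * 1.5), ceil(clip.height * 1.5))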

UpResNet10

UpResNet10(
    scale: Literal[1, 2, 4] = 2,
    noise: Literal[-1, 0, 1, 2, 3] = -1,
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: BaseWaifu2x, BaseOnnxScalerRGB

UpResNet10 model offering a balance of speed and quality.

Example usage:

from vsscale import Waifu2x

doubled = Waifu2x.UpResNet10().scale(clip, clip.width * 2, clip.height * 2)

Initializes the scaler with the specified parameters.

Parameters:

  • scale

    (Literal[1, 2, 4], default: 2 ) –

    Upscaling factor. 1 = no upscaling, 2 = 2x, 4 = 4x.

  • noise

    (Literal[-1, 0, 1, 2, 3], default: -1 ) –

    Noise reduction level. -1 = none, 0 = low, 1 = medium, 2 = high, 3 = highest.

  • backend

    (BackendLike | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

    Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • max_instances

    (int, default: 2 ) –

    Maximum instances to spawn when scaling a variable resolution clip.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.

Methods:

  • calc_tilesize

    Reimplementation of vsmlrt.calc_tilesize helper function

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args

    Generate the keyword arguments used for scaling.

  • implemented_funcs

    Returns a set of function names that are implemented in the current class and the parent classes.

  • inference
  • kernel_radius

    Return the effective kernel radius for the scaler.

  • postprocess_clip
  • preprocess_clip
  • scale

    Scale the given clip using the ONNX model.

  • supersample

    Supersample a clip by a given scaling factor.

Attributes:

Source code in vsscale/onnx.py
def __init__(
    self,
    scale: Literal[1, 2, 4] = 2,
    noise: Literal[-1, 0, 1, 2, 3] = -1,
    backend: BackendLike | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any,
) -> None:
    """
    Initializes the scaler with the specified parameters.

    Args:
        scale: Upscaling factor. 1 = no upscaling, 2 = 2x, 4 = 4x.
        noise: Noise reduction level. -1 = none, 0 = low, 1 = medium, 2 = high, 3 = highest.
        backend: The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will
            be automatically selected, prioritizing fp16 support.
        tiles: Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the
            model's behavior may vary when they are used.
        tilesize: The size of each tile when splitting the image (if tiles are enabled).
        overlap: The size of overlap between tiles.
        max_instances: Maximum instances to spawn when scaling a variable resolution clip.
        kernel: Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
        scaler: Scaler used for scaling operations. Defaults to kernel.
        shifter: Kernel used for shifting operations. Defaults to kernel.
        **kwargs: Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
    """
    self.scale_w2x = scale
    self.noise = noise
    super().__init__(
        None,
        backend,
        tiles,
        tilesize,
        overlap,
        1,
        max_instances,
        kernel=kernel,
        scaler=scaler,
        shifter=shifter,
        **kwargs,
    )

backend instance-attribute

backend = autoselect_backend(**(default_args | kwargs))

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(resolve())

multiple instance-attribute

multiple = multiple

noise instance-attribute

noise: Literal[-1, 0, 1, 2, 3] = noise

Noise reduction level.

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

pretty_string property

pretty_string: str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scale_w2x instance-attribute

scale_w2x: Literal[1, 2, 4] = scale

Upscaling factor.

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]

Reimplementation of vsmlrt.calc_tilesize helper function

Source code in vsscale/onnx.py
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    """
    Reimplementation of vsmlrt.calc_tilesize helper function
    """

    from vsmlrt import calc_tilesize

    kwargs = {
        "tiles": self.tiles,
        "tilesize": self.tilesize,
        "width": clip.width,
        "height": clip.height,
        "multiple": self.multiple,
        "overlap_w": self.overlap_w,
        "overlap_h": self.overlap_h,
    } | kwargs

    return calc_tilesize(**kwargs)

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code in vskernels/abstract/base.py
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code in vskernels/abstract/base.py
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width
    (int | None, default: None ) –

    Target width.

  • height
    (int | None, default: None ) –

    Target height.

  • **kwargs
    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.

Source code in vskernels/abstract/base.py
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    Args:
        clip: The source clip.
        shift: Subpixel shift (top, left).
        width: Target width.
        height: Target height.
        **kwargs: Extra parameters to merge.

    Returns:
        Final dictionary of keyword arguments for the scale function.
    """
    return {"width": width, "height": height, "src_top": shift[0], "src_left": shift[1]} | self.kwargs | kwargs

implemented_funcs classmethod

implemented_funcs() -> frozenset[str]

Returns a set of function names that are implemented in the current class and the parent classes.

These functions determine which keyword arguments will be extracted from the init method.

Returns:

  • frozenset[str]

    Frozen set of function names.

Source code in vskernels/abstract/base.py
@classproperty.cached
@classmethod
def implemented_funcs(cls) -> frozenset[str]:
    """
    Returns a set of function names that are implemented in the current class and the parent classes.

    These functions determine which keyword arguments will be extracted from the __init__ method.

    Returns:
        Frozen set of function names.
    """
    return frozenset(func for klass in cls.mro() for func in getattr(klass, "_implemented_funcs", ()))

inference

inference(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
def inference(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    from vsmlrt import Waifu2x as mlrt_Waifu2x
    from vsmlrt import Waifu2xModel

    return mlrt_Waifu2x(
        clip,
        self.noise,
        self.scale_w2x,
        self.tiles,
        self.tilesize,
        self.overlap,
        Waifu2xModel(self._model),
        self.backend,
        **kwargs,
    )

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Returns:

  • int

    Kernel radius.

Source code in vskernels/abstract/base.py
@BaseScalerMeta.cachedproperty
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    Raises:
        CustomNotImplementedError: If no kernel radius is defined.

    Returns:
        Kernel radius.
    """
    ...

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode
Source code in vsscale/onnx.py
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    if get_video_format(clip) != get_video_format(input_clip):
        kwargs = (
            dict[str, Any](
                format=input_clip,
                matrix=Matrix.from_video(input_clip, func=self.__class__),
                range=ColorRange.from_video(input_clip, func=self.__class__),
                dither_type=DitherType.ORDERED,
            )
            | kwargs
        )
        clip = self.kernel.resample(clip, **kwargs)

    return clip

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    clip = self.kernel.resample(clip, self._pick_precision(vs.RGBH, vs.RGBS), Matrix.RGB, **kwargs)
    return limiter(clip, func=self.__class__)

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> VideoNode

Scale the given clip using the ONNX model.

Parameters:

  • clip
    (VideoNode) –

    The input clip to be scaled.

  • width
    (int | None, default: None ) –

    The target width for scaling. If None, the width of the input clip will be used.

  • height
    (int | None, default: None ) –

    The target height for scaling. If None, the height of the input clip will be used.

  • shift
    (tuple[float, float], default: (0, 0) ) –

    A tuple representing the shift values for the x and y axes.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _finish_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method. Use the prefix inference_ to pass an argument to the inference method.

Returns:

  • VideoNode

    The scaled clip.

Source code in vsscale/onnx.py
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any,
) -> vs.VideoNode:
    """
    Scale the given clip using the ONNX model.

    Args:
        clip: The input clip to be scaled.
        width: The target width for scaling. If None, the width of the input clip will be used.
        height: The target height for scaling. If None, the height of the input clip will be used.
        shift: A tuple representing the shift values for the x and y axes.
        **kwargs: Additional arguments to be passed to the `preprocess_clip`, `postprocess_clip`, `inference`, and
            `_finish_scale` methods. Use the prefix `preprocess_` or `postprocess_` to pass an argument to the
            respective method. Use the prefix `inference_` to pass an argument to the inference method.

    Returns:
        The scaled clip.
    """
    from vsmlrt import Backend

    width, height = self._wh_norm(clip, width, height)

    preprocess_kwargs = dict[str, Any]()
    postprocess_kwargs = dict[str, Any]()
    inference_kwargs = dict[str, Any]()

    for k in kwargs.copy():
        for prefix, ckwargs in zip(
            ("preprocess_", "postprocess_", "inference_"), (preprocess_kwargs, postprocess_kwargs, inference_kwargs)
        ):
            if k.startswith(prefix):
                ckwargs[k.removeprefix(prefix)] = kwargs.pop(k)
                break

    debug(f"{self}: Preprocess kwargs: {preprocess_kwargs}")
    debug(f"{self}: Postprocess kwargs: {postprocess_kwargs}")
    debug(f"{self}: Inference kwargs: {inference_kwargs}")

    wclip = self.preprocess_clip(clip, **preprocess_kwargs)

    if 0 not in {clip.width, clip.height}:
        scaled = self.inference(wclip, **inference_kwargs)
    else:
        debug(f"{self}: Variable resolution clip detected!")

        if not isinstance(self.backend, (Backend.TRT, Backend.TRT_RTX)):
            raise CustomValueError(
                "Variable resolution clips can only be processed with TRT Backend!", self.__class__, self.backend
            )

        warning(f"{self.__class__.__name__}: Variable resolution clip detected!")

        if self.backend.static_shape:
            warning("static_shape is True, setting it to False...")
            self.backend.static_shape = False

        if not self.backend.max_shapes:
            warning("max_shapes is None, setting it to (1936, 1088). You may want to adjust it...")
            self.backend.max_shapes = (1936, 1088)

        if not self.backend.opt_shapes:
            warning("opt_shapes is None, setting it to (64, 64). You may want to adjust it...")
            self.backend.opt_shapes = (64, 64)

        scaled = ProcessVariableResClip.from_func(
            wclip, lambda c: self.inference(c, **inference_kwargs), False, wclip.format, self.max_instances
        )

    scaled = self.postprocess_clip(scaled, clip, **postprocess_kwargs)

    return self._finish_scale(scaled, clip, width, height, shift, **kwargs)

supersample

supersample(
    clip: VideoNode,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNode

Supersample a clip by a given scaling factor.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • rfactor
    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Returns:

  • VideoNode

    The supersampled clip.

Source code in vskernels/abstract/base.py
def supersample(
    self, clip: vs.VideoNode, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> vs.VideoNode:
    """
    Supersample a clip by a given scaling factor.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        rfactor: Scaling factor for supersampling.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Raises:
        CustomValueError: If resulting resolution is non-positive.

    Returns:
        The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)

Cunet

Cunet model variant. Mirror-pads image borders before inference and applies a tint fix by default; see scale for details.

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]

Reimplementation of vsmlrt.calc_tilesize helper function

Source code in vsscale/onnx.py
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    """
    Reimplementation of vsmlrt.calc_tilesize helper function
    """

    from vsmlrt import calc_tilesize

    kwargs = {
        "tiles": self.tiles,
        "tilesize": self.tilesize,
        "width": clip.width,
        "height": clip.height,
        "multiple": self.multiple,
        "overlap_w": self.overlap_w,
        "overlap_h": self.overlap_h,
    } | kwargs

    return calc_tilesize(**kwargs)

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code in vskernels/abstract/base.py
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code in vskernels/abstract/base.py
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width

    (int | None, default: None ) –

    Target width.

  • height

    (int | None, default: None ) –

    Target height.

  • **kwargs

    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.

Source code in vskernels/abstract/base.py
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    Args:
        clip: The source clip.
        shift: Subpixel shift (top, left).
        width: Target width.
        height: Target height.
        **kwargs: Extra parameters to merge.

    Returns:
        Final dictionary of keyword arguments for the scale function.
    """
    return {"width": width, "height": height, "src_top": shift[0], "src_left": shift[1]} | self.kwargs | kwargs

implemented_funcs classmethod

implemented_funcs() -> frozenset[str]

Returns a set of function names that are implemented in the current class and the parent classes.

These functions determine which keyword arguments will be extracted from the init method.

Returns:

  • frozenset[str]

    Frozen set of function names.

Source code in vskernels/abstract/base.py
@classproperty.cached
@classmethod
def implemented_funcs(cls) -> frozenset[str]:
    """
    Returns a set of function names that are implemented in the current class and the parent classes.

    These functions determine which keyword arguments will be extracted from the __init__ method.

    Returns:
        Frozen set of function names.
    """
    return frozenset(func for klass in cls.mro() for func in getattr(klass, "_implemented_funcs", ()))

inference

inference(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
def inference(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    # Cunet model ruins image borders, so we need to pad it before upscale and crop it after.
    if kwargs.pop("no_pad", False):
        return super().inference(clip, **kwargs)

    with padder.ctx(16, 4) as pad:
        padded = pad.MIRROR(clip)
        scaled = super().inference(padded, **kwargs)
        cropped = pad.CROP(scaled)

    return cropped

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Returns:

  • int

    Kernel radius.

Source code in vskernels/abstract/base.py
@BaseScalerMeta.cachedproperty
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    Raises:
        CustomNotImplementedError: If no kernel radius is defined.

    Returns:
        Kernel radius.
    """
    ...

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode
Source code in vsscale/onnx.py
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    # Cunet model also has a tint issue but it is not constant
    # It leaves flat areas alone but tints detailed areas.
    # Since most people will use Cunet to rescale details, the tint fix is enabled by default.
    if kwargs.pop("no_tint_fix", False):
        return super().postprocess_clip(clip, input_clip, **kwargs)

    tint_fix = norm_expr(
        clip,
        "x 0.5 255 / + 0 1 clamp",
        planes=0 if get_video_format(input_clip).color_family is vs.GRAY else None,
        func="Waifu2x." + self.__class__.__name__,
    )
    return super().postprocess_clip(tint_fix, input_clip, **kwargs)
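
For reference, the RPN expression above adds half an 8-bit step to every sample and clamps the result; a plain-Python equivalent over normalized float samples:

def tint_fix(x: float) -> float:
    # "x 0.5 255 / + 0 1 clamp"  ->  clamp(x + 0.5 / 255, 0.0, 1.0)
    return min(max(x + 0.5 / 255, 0.0), 1.0)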

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> vs.VideoNode:
    clip = self.kernel.resample(clip, self._pick_precision(vs.RGBH, vs.RGBS), Matrix.RGB, **kwargs)
    return limiter(clip, func=self.__class__)

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> VideoNode

Scale the given clip using the ONNX model.

Parameters:

  • clip

    (VideoNode) –

    The input clip to be scaled.

  • width

    (int | None, default: None ) –

    The target width for scaling. If None, the width of the input clip will be used.

  • height

    (int | None, default: None ) –

    The target height for scaling. If None, the height of the input clip will be used.

  • shift

    (tuple[float, float], default: (0, 0) ) –

    A tuple representing the shift values for the x and y axes.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _finish_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method. Use the prefix inference_ to pass an argument to the inference method.

    Additional Notes for the Cunet model:

    • The model can cause artifacts around the image edges. To mitigate this, mirrored padding is applied to the image before inference. This behavior can be disabled by setting inference_no_pad=True.
    • A tint issue is also present, but it is not constant: it leaves flat areas alone and tints detailed areas. Since most people will use Cunet to rescale details, the tint fix is enabled by default. This behavior can be disabled with postprocess_no_tint_fix=True (see the sketch below).

Returns:

  • VideoNode

    The scaled clip.
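
Both workarounds can be toggled through these prefixed kwargs; a sketch, assuming the variant is exposed as Waifu2x.Cunet:

from vsscale import Waifu2x

doubled = Waifu2x.Cunet().scale(
    clip,
    clip.width * 2,
    clip.height * 2,
    inference_no_pad=True,         # skip the mirrored edge padding
    postprocess_no_tint_fix=True,  # keep the raw model output, tint included
)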

Source code in vsscale/onnx.py
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any,
) -> vs.VideoNode:
    """
    Scale the given clip using the ONNX model.

    Args:
        clip: The input clip to be scaled.
        width: The target width for scaling. If None, the width of the input clip will be used.
        height: The target height for scaling. If None, the height of the input clip will be used.
        shift: A tuple representing the shift values for the x and y axes.
        **kwargs: Additional arguments to be passed to the `preprocess_clip`, `postprocess_clip`, `inference`,
            and `_finish_scale` methods. Use the prefix `preprocess_` or `postprocess_` to pass an argument to
            the respective method. Use the prefix `inference_` to pass an argument to the inference method.

            Additional Notes for the Cunet model:

               - The model can cause artifacts around the image edges.
               To mitigate this, mirrored padding is applied to the image before inference.
               This behavior can be disabled by setting `inference_no_pad=True`.
               - A tint issue is also present but it is not constant. It leaves flat areas alone but tints
               detailed areas.
               Since most people will use Cunet to rescale details, the tint fix is enabled by default.
               This behavior can be disabled with `postprocess_no_tint_fix=True`.

    Returns:
        The scaled clip.
    """
    ...

supersample

supersample(
    clip: VideoNode,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNode

Supersample a clip by a given scaling factor.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • rfactor

    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Returns:

  • VideoNode

    The supersampled clip.

Source code in vskernels/abstract/base.py
def supersample(
    self, clip: vs.VideoNode, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> vs.VideoNode:
    """
    Supersample a clip by a given scaling factor.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        rfactor: Scaling factor for supersampling.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Raises:
        CustomValueError: If resulting resolution is non-positive.

    Returns:
        The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)

autoselect_backend

autoselect_backend(**kwargs: Any) -> backendT

Try to select the best backend for the current system.

If the system has an NVIDIA GPU: TRT > TRT_RTX > DirectML (D3D12) > NCNN (Vulkan) > CUDA (ORT) > OpenVINO GPU. Otherwise: DirectML (D3D12) > MIGraphX > NCNN (Vulkan) > CPU (ORT) > OpenVINO CPU.

Parameters:

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend.

Returns:

  • backendT

    The selected backend.

Source code in vsscale/onnx.py
def autoselect_backend(**kwargs: Any) -> Backend:
    """
    Try to select the best backend for the current system.

    If the system has an NVIDIA GPU: TRT > TRT_RTX > DirectML (D3D12) > NCNN (Vulkan) > CUDA (ORT) > OpenVINO GPU.
    Else: DirectML (D3D12) > MIGraphX > NCNN (Vulkan) > CPU (ORT) > OpenVINO CPU.

    Args:
        **kwargs: Additional arguments to pass to the backend.

    Returns:
        The selected backend.
    """
    from os import name

    from vsmlrt import Backend

    backend: Any

    if get_nvidia_version():
        if hasattr(core, "trt"):
            backend = Backend.TRT
        elif hasattr(core, "trt_rtx"):
            backend = Backend.TRT_RTX
        elif hasattr(core, "ort") and name == "nt":
            backend = Backend.ORT_DML
        elif hasattr(core, "ncnn"):
            backend = Backend.NCNN_VK
        elif hasattr(core, "ort"):
            backend = Backend.ORT_CUDA
        else:
            backend = Backend.OV_GPU
    else:
        if hasattr(core, "ort") and name == "nt":
            backend = Backend.ORT_DML
        elif hasattr(core, "migx"):
            backend = Backend.MIGX
        elif hasattr(core, "ncnn"):
            backend = Backend.NCNN_VK
        elif hasattr(core, "ort"):
            backend = Backend.ORT_CPU
        else:
            backend = Backend.OV_CPU

    return backend(**_clean_keywords(kwargs, backend))
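
Passing an explicit backend to any scaler in this module bypasses this selection entirely; a minimal sketch:

from vsmlrt import Backend
from vsscale import Waifu2x

auto = Waifu2x.UpConv7Photo()  # picks TRT first on NVIDIA systems
pinned = Waifu2x.UpConv7Photo(backend=Backend.TRT(fp16=True))  # pin TRT with explicit options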