onnx

This module implements scalers for ONNX models.

Classes:

- ArtCNN – Super-Resolution Convolutional Neural Networks optimised for anime.
- BaseOnnxScaler – Abstract generic scaler class for an ONNX model.
- DPIR – Deep Plug-and-Play Image Restoration.
- GenericOnnxScaler – Generic scaler class for an ONNX model.
- Waifu2x – Well-known Image Super-Resolution for Anime-Style Art.

Functions:

- autoselect_backend – Try to select the best backend for the current system.

Attributes:

- BackendLike – Type alias for anything that can resolve to a Backend from vs-mlrt.

BackendLike module-attribute

Type alias for anything that can resolve to a Backend from vs-mlrt. This includes:

- A string identifier.
- A class type subclassing Backend.
- An instance of a Backend.
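For illustration, a minimal sketch of the accepted forms, using ArtCNN as the scaler. The Backend class comes from vsmlrt; whether a particular string identifier or backend is available depends on your installation, so treat the specific values as assumptions:

from vsmlrt import Backend
from vsscale import ArtCNN

scaler = ArtCNN()                                # backend=None: autoselect_backend picks one, preferring fp16
scaler = ArtCNN(backend="trt")                   # a string identifier (assumed to be recognised by vs-mlrt)
scaler = ArtCNN(backend=Backend.TRT)             # a class type subclassing Backend
scaler = ArtCNN(backend=Backend.TRT(fp16=True))  # a Backend instance with explicit options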
ArtCNN
ArtCNN(
backend: BackendLike | None = None,
tiles: int | tuple[int, int] | None = None,
tilesize: int | tuple[int, int] | None = None,
overlap: int | tuple[int, int] | None = None,
max_instances: int = 2,
*,
kernel: KernelLike = Catrom,
scaler: ScalerLike | None = None,
shifter: KernelLike | None = None,
**kwargs: Any
)
Bases: BaseArtCNNLuma
Super-Resolution Convolutional Neural Networks optimised for anime.
A quick reminder that vs-mlrt does not ship these models in the base package. You will have to grab the extended models pack or get them from the repository itself, and create an "ArtCNN" folder in your models folder yourself.
https://github.com/Artoriuz/ArtCNN/releases/latest
Defaults to R8F64.
Example usage:
from vsscale import ArtCNN
doubled = ArtCNN().scale(clip, clip.width * 2, clip.height * 2)
Initializes the scaler with the specified parameters.
Parameters:

- backend (BackendLike | None, default: None) – The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
- tiles (int | tuple[int, int] | None, default: None) – Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.
- tilesize (int | tuple[int, int] | None, default: None) – The size of each tile when splitting the image (if tiles are enabled).
- overlap (int | tuple[int, int] | None, default: None) – The size of overlap between tiles.
- max_instances (int, default: 2) – Maximum instances to spawn when scaling a variable resolution clip.
- kernel (KernelLike, default: Catrom) – Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
- scaler (ScalerLike | None, default: None) – Scaler used for scaling operations. Defaults to kernel.
- shifter (KernelLike | None, default: None) – Kernel used for shifting operations. Defaults to kernel.
- **kwargs (Any, default: {}) – Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
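As a rough sketch of how these parameters fit together (the values below are illustrative, not recommendations), tiling options are given at construction time and extra keyword arguments are forwarded to the chosen vs-mlrt backend:

from vsscale import ArtCNN

# Split inference into a 2x2 grid of tiles with a 16-pixel overlap to reduce VRAM usage.
# num_streams is assumed to be supported by the selected backend; see the vsmlrt docstring.
scaler = ArtCNN(tiles=(2, 2), overlap=16, num_streams=2)
doubled = scaler.scale(clip, clip.width * 2, clip.height * 2)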
Classes:

- C16F64 – Very fast and good enough for AA purposes but the onnx variant is officially deprecated.
- C16F64_Chroma – The bigger of the old chroma models.
- C16F64_DS – The same as C16F64 but intended to also denoise and sharpen.
- C4F16 – This has 4 internal convolution layers with 16 filters each.
- C4F16_DN – The same as C4F16 but intended to also denoise. Works well on noisy sources when you don't want any sharpening.
- C4F16_DS – The same as C4F16 but intended to also denoise and sharpen.
- C4F32 – This has 4 internal convolution layers with 32 filters each.
- C4F32_Chroma – The smaller of the chroma models.
- C4F32_DN – The same as C4F32 but intended to also denoise. Works well on noisy sources when you don't want any sharpening.
- C4F32_DS – The same as C4F32 but intended to also denoise and sharpen.
- R16F96 – The biggest model. Can compete with or outperform Waifu2x Cunet.
- R16F96_Chroma – The biggest and fancy chroma model. Shows almost biblical results on the right sources.
- R8F64 – A smaller and faster version of R16F96 but very competitive.
- R8F64_Chroma – The new and fancy big chroma model.
- R8F64_DS – The same as R8F64 but intended to also denoise and sharpen.
- R8F64_JPEG420 – 1x RGB model meant to clean JPEG artifacts and to fix chroma subsampling.
- R8F64_JPEG444 – 1x RGB model meant to clean JPEG artifacts.
Methods:

- calc_tilesize – Reimplementation of vsmlrt.calc_tilesize helper function.
- ensure_obj – Ensure that the input is a scaler instance, resolving it if necessary.
- from_param – Resolve and return a scaler type from a given input (string, type, or instance).
- get_scale_args – Generate the keyword arguments used for scaling.
- implemented_funcs – Returns a set of function names that are implemented in the current class and the parent classes.
- inference – Runs inference on the given video clip using the configured model and backend.
- kernel_radius – Return the effective kernel radius for the scaler.
- postprocess_clip – Handles postprocessing of the model's output after inference.
- preprocess_clip
- scale – Scale the given clip using the ONNX model.
- supersample – Supersample a clip by a given scaling factor.

Attributes:

- backend
- kernel
- kwargs (dict[str, Any]) – Arguments passed to the implemented funcs or internal scale function.
- max_instances
- model
- multiple
- overlap
- overlap_h
- overlap_w
- pretty_string (str) – Cached property returning a user-friendly string representation.
- scale_function (Callable[..., VideoNode]) – Scale function called internally when performing scaling operations.
- scaler
- shifter
- tiles
- tilesize

Source code in vsscale/onnx.py, lines 522-565.

kwargs instance-attribute

Arguments passed to the implemented funcs or internal scale function.

pretty_string property

pretty_string: str

Cached property returning a user-friendly string representation.

Returns:

- str – Pretty-printed string with arguments.

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.
C16F64
C16F64(
backend: BackendLike | None = None,
tiles: int | tuple[int, int] | None = None,
tilesize: int | tuple[int, int] | None = None,
overlap: int | tuple[int, int] | None = None,
max_instances: int = 2,
*,
kernel: KernelLike = Catrom,
scaler: ScalerLike | None = None,
shifter: KernelLike | None = None,
**kwargs: Any
)
Bases: BaseArtCNNLuma
Very fast and good enough for AA purposes, but the ONNX variant is officially deprecated.
This has 16 internal convolution layers with 64 filters each.
ONNX files available at https://github.com/Artoriuz/ArtCNN/tree/388b91797ff2e675fd03065953cc1147d6f972c2/ONNX
Example usage:
from vsscale import ArtCNN
doubled = ArtCNN.C16F64().scale(clip, clip.width * 2, clip.height * 2)
Initializes the scaler with the specified parameters.
Parameters:

- backend (BackendLike | None, default: None) – The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
- tiles (int | tuple[int, int] | None, default: None) – Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.
- tilesize (int | tuple[int, int] | None, default: None) – The size of each tile when splitting the image (if tiles are enabled).
- overlap (int | tuple[int, int] | None, default: None) – The size of overlap between tiles.
- max_instances (int, default: 2) – Maximum instances to spawn when scaling a variable resolution clip.
- kernel (KernelLike, default: Catrom) – Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
- scaler (ScalerLike | None, default: None) – Scaler used for scaling operations. Defaults to kernel.
- shifter (KernelLike | None, default: None) – Kernel used for shifting operations. Defaults to kernel.
- **kwargs (Any, default: {}) – Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
Methods:

- calc_tilesize – Reimplementation of vsmlrt.calc_tilesize helper function.
- ensure_obj – Ensure that the input is a scaler instance, resolving it if necessary.
- from_param – Resolve and return a scaler type from a given input (string, type, or instance).
- get_scale_args – Generate the keyword arguments used for scaling.
- implemented_funcs – Returns a set of function names that are implemented in the current class and the parent classes.
- inference – Runs inference on the given video clip using the configured model and backend.
- kernel_radius – Return the effective kernel radius for the scaler.
- postprocess_clip – Handles postprocessing of the model's output after inference.
- preprocess_clip
- scale – Scale the given clip using the ONNX model.
- supersample – Supersample a clip by a given scaling factor.

Attributes:

- backend
- kernel
- kwargs (dict[str, Any]) – Arguments passed to the implemented funcs or internal scale function.
- max_instances
- model
- multiple
- overlap
- overlap_h
- overlap_w
- pretty_string (str) – Cached property returning a user-friendly string representation.
- scale_function (Callable[..., VideoNode]) – Scale function called internally when performing scaling operations.
- scaler
- shifter
- tiles
- tilesize

Source code in vsscale/onnx.py, lines 522-565.

kwargs instance-attribute

Arguments passed to the implemented funcs or internal scale function.

pretty_string property

pretty_string: str

Cached property returning a user-friendly string representation.

Returns:

- str – Pretty-printed string with arguments.

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.
calc_tilesize

Reimplementation of vsmlrt.calc_tilesize helper function.

Source code in vsscale/onnx.py, lines 386-403.

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

- scaler (str | type[Self] | Self | None, default: None) – Scaler identifier (string, class, or instance).
- func_except (FuncExcept | None, default: None) – Function returned for custom error handling.

Returns:

- Self – Scaler instance.

Source code in vskernels/abstract/base.py, lines 385-402.

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

- scaler (str | type[Self] | Self | None, default: None) – Scaler identifier (string, class, or instance).
- func_except (FuncExcept | None, default: None) – Function returned for custom error handling.

Returns:

- type[Self] – The resolved scaler type.

Source code in vskernels/abstract/base.py, lines 366-383.
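A brief sketch of how these resolution helpers might be used; the string form assumes the scaler resolves from its class name, which is an assumption about the vskernels resolution rules rather than something documented here:

from vsscale import ArtCNN

scaler_cls = ArtCNN.from_param("ArtCNN")  # resolve a scaler type from a string, type, or instance
scaler = ArtCNN.ensure_obj(scaler_cls)    # instantiate it if needed; existing instances pass through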
get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

- clip (VideoNode) – The source clip.
- shift (tuple[TopShift, LeftShift], default: (0, 0)) – Subpixel shift (top, left).
- width (int | None, default: None) – Target width.
- height (int | None, default: None) – Target height.
- **kwargs (Any, default: {}) – Extra parameters to merge.

Returns:

- dict[str, Any] – The keyword arguments.

Source code in vskernels/abstract/base.py, lines 536-557.

implemented_funcs classmethod

Returns a set of function names that are implemented in the current class and the parent classes.

These functions determine which keyword arguments will be extracted from the init method.

Source code in vskernels/abstract/base.py, lines 443-454.

inference

inference(clip: VideoNode, **kwargs: Any) -> VideoNode

Runs inference on the given video clip using the configured model and backend.

Source code in vsscale/onnx.py, lines 434-449.

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Raises:

- CustomNotImplementedError – If no kernel radius is defined.

Returns:

- int – Kernel radius.

Source code in vskernels/abstract/base.py, lines 406-417.

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode

Handles postprocessing of the model's output after inference.

Source code in vsscale/onnx.py, lines 417-432.

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode

Source code in vsscale/onnx.py, lines 569-570.
scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> VideoNode

Scale the given clip using the ONNX model.

Parameters:

- clip (VideoNode) – The input clip to be scaled.
- width (int | None, default: None) – The target width for scaling. If None, the width of the input clip will be used.
- height (int | None, default: None) – The target height for scaling. If None, the height of the input clip will be used.
- shift (tuple[float, float], default: (0, 0)) – A tuple representing the shift values for the x and y axes.
- **kwargs (Any, default: {}) – Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _final_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method. Use the prefix inference_ to pass an argument to the inference method.

Returns:

- VideoNode – The scaled clip.

Source code in vsscale/onnx.py, lines 309-384.
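For example, the prefix routing might look like the following sketch; "some_arg" is purely a hypothetical placeholder standing in for whatever keyword the respective stage actually accepts:

from vsscale import ArtCNN

# "inference_" routes the argument to inference(); "preprocess_"/"postprocess_" work the same way
# for preprocess_clip()/postprocess_clip(). "some_arg" is not a real parameter name.
doubled = ArtCNN.C16F64().scale(
    clip, clip.width * 2, clip.height * 2,
    inference_some_arg=True,
)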
supersample

supersample(
    clip: VideoNode,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNode

Supersample a clip by a given scaling factor.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

- clip (VideoNode) – The source clip.
- rfactor (float, default: 2.0) – Scaling factor for supersampling.
- shift (tuple[TopShift, LeftShift], default: (0, 0)) – Subpixel shift (top, left) applied during scaling.
- **kwargs (Any, default: {}) – Additional arguments forwarded to the scale function.

Raises:

- CustomValueError – If resulting resolution is non-positive.

Returns:

- VideoNode – The supersampled clip.

Source code in vskernels/abstract/base.py, lines 501-534.
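For instance, a minimal sketch of doubling a clip via supersample (equivalent to calling scale with twice the input dimensions):

from vsscale import ArtCNN

doubled = ArtCNN.C16F64().supersample(clip, rfactor=2)  # scales to 2x the input width and height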
C16F64_Chroma
C16F64_Chroma(
backend: BackendLike | None = None,
tiles: int | tuple[int, int] | None = None,
tilesize: int | tuple[int, int] | None = None,
overlap: int | tuple[int, int] | None = None,
max_instances: int = 2,
*,
kernel: KernelLike = Catrom,
scaler: ScalerLike | None = None,
shifter: KernelLike | None = None,
**kwargs: Any
)
Bases: BaseArtCNNChroma
The bigger of the old chroma models.
These models don't double the input clip; instead, they just try to enhance the chroma using luma information.
Example usage:
from vsscale import ArtCNN
chroma_upscaled = ArtCNN.C16F64_Chroma().scale(clip)
Initializes the scaler with the specified parameters.
Parameters:

- backend (BackendLike | None, default: None) – The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
- tiles (int | tuple[int, int] | None, default: None) – Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.
- tilesize (int | tuple[int, int] | None, default: None) – The size of each tile when splitting the image (if tiles are enabled).
- overlap (int | tuple[int, int] | None, default: None) – The size of overlap between tiles.
- max_instances (int, default: 2) – Maximum instances to spawn when scaling a variable resolution clip.
- kernel (KernelLike, default: Catrom) – Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
- scaler (ScalerLike | None, default: None) – Scaler used for scaling operations. Defaults to kernel.
- shifter (KernelLike | None, default: None) – Kernel used for shifting operations. Defaults to kernel.
- **kwargs (Any, default: {}) – Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
Methods:

- calc_tilesize – Reimplementation of vsmlrt.calc_tilesize helper function.
- ensure_obj – Ensure that the input is a scaler instance, resolving it if necessary.
- from_param – Resolve and return a scaler type from a given input (string, type, or instance).
- get_scale_args – Generate the keyword arguments used for scaling.
- implemented_funcs – Returns a set of function names that are implemented in the current class and the parent classes.
- inference
- kernel_radius – Return the effective kernel radius for the scaler.
- postprocess_clip
- preprocess_clip
- scale – Scale the given clip using the ONNX model.
- supersample – Supersample a clip by a given scaling factor.

Attributes:

- backend
- kernel
- kwargs (dict[str, Any]) – Arguments passed to the implemented funcs or internal scale function.
- max_instances
- model
- multiple
- overlap
- overlap_h
- overlap_w
- pretty_string (str) – Cached property returning a user-friendly string representation.
- scale_function (Callable[..., VideoNode]) – Scale function called internally when performing scaling operations.
- scaler
- shifter
- tiles
- tilesize

Source code in vsscale/onnx.py, lines 522-565.

kwargs instance-attribute

Arguments passed to the implemented funcs or internal scale function.

pretty_string property

pretty_string: str

Cached property returning a user-friendly string representation.

Returns:

- str – Pretty-printed string with arguments.

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

calc_tilesize

Reimplementation of vsmlrt.calc_tilesize helper function.

Source code in vsscale/onnx.py, lines 386-403.

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

- scaler (str | type[Self] | Self | None, default: None) – Scaler identifier (string, class, or instance).
- func_except (FuncExcept | None, default: None) – Function returned for custom error handling.

Returns:

- Self – Scaler instance.

Source code in vskernels/abstract/base.py, lines 385-402.

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

- scaler (str | type[Self] | Self | None, default: None) – Scaler identifier (string, class, or instance).
- func_except (FuncExcept | None, default: None) – Function returned for custom error handling.

Returns:

- type[Self] – The resolved scaler type.

Source code in vskernels/abstract/base.py, lines 366-383.

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

- clip (VideoNode) – The source clip.
- shift (tuple[TopShift, LeftShift], default: (0, 0)) – Subpixel shift (top, left).
- width (int | None, default: None) – Target width.
- height (int | None, default: None) – Target height.
- **kwargs (Any, default: {}) – Extra parameters to merge.

Returns:

- dict[str, Any] – The keyword arguments.

Source code in vskernels/abstract/base.py, lines 536-557.

implemented_funcs classmethod

Returns a set of function names that are implemented in the current class and the parent classes.

These functions determine which keyword arguments will be extracted from the init method.

Source code in vskernels/abstract/base.py, lines 443-454.

inference

inference(clip: VideoNode, **kwargs: Any) -> VideoNode

Source code in vsscale/onnx.py, lines 638-654.

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Raises:

- CustomNotImplementedError – If no kernel radius is defined.

Returns:

- int – Kernel radius.

Source code in vskernels/abstract/base.py, lines 406-417.

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode

Source code in vsscale/onnx.py, lines 634-636.

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode

Source code in vsscale/onnx.py, lines 607-632.

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> VideoNode

Scale the given clip using the ONNX model.

Parameters:

- clip (VideoNode) – The input clip to be scaled.
- width (int | None, default: None) – The target width for scaling. If None, the width of the input clip will be used.
- height (int | None, default: None) – The target height for scaling. If None, the height of the input clip will be used.
- shift (tuple[float, float], default: (0, 0)) – A tuple representing the shift values for the x and y axes.
- **kwargs (Any, default: {}) – Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _final_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method. Use the prefix inference_ to pass an argument to the inference method.

Returns:

- VideoNode – The scaled clip.

Source code in vsscale/onnx.py, lines 309-384.

supersample

supersample(
    clip: VideoNode,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNode

Supersample a clip by a given scaling factor.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

- clip (VideoNode) – The source clip.
- rfactor (float, default: 2.0) – Scaling factor for supersampling.
- shift (tuple[TopShift, LeftShift], default: (0, 0)) – Subpixel shift (top, left) applied during scaling.
- **kwargs (Any, default: {}) – Additional arguments forwarded to the scale function.

Raises:

- CustomValueError – If resulting resolution is non-positive.

Returns:

- VideoNode – The supersampled clip.

Source code in vskernels/abstract/base.py, lines 501-534.
C16F64_DS
C16F64_DS(
backend: BackendLike | None = None,
tiles: int | tuple[int, int] | None = None,
tilesize: int | tuple[int, int] | None = None,
overlap: int | tuple[int, int] | None = None,
max_instances: int = 2,
*,
kernel: KernelLike = Catrom,
scaler: ScalerLike | None = None,
shifter: KernelLike | None = None,
**kwargs: Any
)
Bases: BaseArtCNNLuma
The same as C16F64 but intended to also denoise and sharpen.
Example usage:
from vsscale import ArtCNN
doubled = ArtCNN.C16F64_DS().scale(clip, clip.width * 2, clip.height * 2)
Initializes the scaler with the specified parameters.
Parameters:

- backend (BackendLike | None, default: None) – The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
- tiles (int | tuple[int, int] | None, default: None) – Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.
- tilesize (int | tuple[int, int] | None, default: None) – The size of each tile when splitting the image (if tiles are enabled).
- overlap (int | tuple[int, int] | None, default: None) – The size of overlap between tiles.
- max_instances (int, default: 2) – Maximum instances to spawn when scaling a variable resolution clip.
- kernel (KernelLike, default: Catrom) – Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
- scaler (ScalerLike | None, default: None) – Scaler used for scaling operations. Defaults to kernel.
- shifter (KernelLike | None, default: None) – Kernel used for shifting operations. Defaults to kernel.
- **kwargs (Any, default: {}) – Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
Methods:

- calc_tilesize – Reimplementation of vsmlrt.calc_tilesize helper function.
- ensure_obj – Ensure that the input is a scaler instance, resolving it if necessary.
- from_param – Resolve and return a scaler type from a given input (string, type, or instance).
- get_scale_args – Generate the keyword arguments used for scaling.
- implemented_funcs – Returns a set of function names that are implemented in the current class and the parent classes.
- inference – Runs inference on the given video clip using the configured model and backend.
- kernel_radius – Return the effective kernel radius for the scaler.
- postprocess_clip – Handles postprocessing of the model's output after inference.
- preprocess_clip
- scale – Scale the given clip using the ONNX model.
- supersample – Supersample a clip by a given scaling factor.

Attributes:

- backend
- kernel
- kwargs (dict[str, Any]) – Arguments passed to the implemented funcs or internal scale function.
- max_instances
- model
- multiple
- overlap
- overlap_h
- overlap_w
- pretty_string (str) – Cached property returning a user-friendly string representation.
- scale_function (Callable[..., VideoNode]) – Scale function called internally when performing scaling operations.
- scaler
- shifter
- tiles
- tilesize

Source code in vsscale/onnx.py, lines 522-565.

kwargs instance-attribute

Arguments passed to the implemented funcs or internal scale function.

pretty_string property

pretty_string: str

Cached property returning a user-friendly string representation.

Returns:

- str – Pretty-printed string with arguments.

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

calc_tilesize

Reimplementation of vsmlrt.calc_tilesize helper function.

Source code in vsscale/onnx.py, lines 386-403.

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

- scaler (str | type[Self] | Self | None, default: None) – Scaler identifier (string, class, or instance).
- func_except (FuncExcept | None, default: None) – Function returned for custom error handling.

Returns:

- Self – Scaler instance.

Source code in vskernels/abstract/base.py, lines 385-402.

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

- scaler (str | type[Self] | Self | None, default: None) – Scaler identifier (string, class, or instance).
- func_except (FuncExcept | None, default: None) – Function returned for custom error handling.

Returns:

- type[Self] – The resolved scaler type.

Source code in vskernels/abstract/base.py, lines 366-383.

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

- clip (VideoNode) – The source clip.
- shift (tuple[TopShift, LeftShift], default: (0, 0)) – Subpixel shift (top, left).
- width (int | None, default: None) – Target width.
- height (int | None, default: None) – Target height.
- **kwargs (Any, default: {}) – Extra parameters to merge.

Returns:

- dict[str, Any] – The keyword arguments.

Source code in vskernels/abstract/base.py, lines 536-557.

implemented_funcs classmethod

Returns a set of function names that are implemented in the current class and the parent classes.

These functions determine which keyword arguments will be extracted from the init method.

Source code in vskernels/abstract/base.py, lines 443-454.

inference

inference(clip: VideoNode, **kwargs: Any) -> VideoNode

Runs inference on the given video clip using the configured model and backend.

Source code in vsscale/onnx.py, lines 434-449.

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Raises:

- CustomNotImplementedError – If no kernel radius is defined.

Returns:

- int – Kernel radius.

Source code in vskernels/abstract/base.py, lines 406-417.

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode

Handles postprocessing of the model's output after inference.

Source code in vsscale/onnx.py, lines 417-432.

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode

Source code in vsscale/onnx.py, lines 569-570.

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> VideoNode

Scale the given clip using the ONNX model.

Parameters:

- clip (VideoNode) – The input clip to be scaled.
- width (int | None, default: None) – The target width for scaling. If None, the width of the input clip will be used.
- height (int | None, default: None) – The target height for scaling. If None, the height of the input clip will be used.
- shift (tuple[float, float], default: (0, 0)) – A tuple representing the shift values for the x and y axes.
- **kwargs (Any, default: {}) – Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _final_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method. Use the prefix inference_ to pass an argument to the inference method.

Returns:

- VideoNode – The scaled clip.

Source code in vsscale/onnx.py, lines 309-384.

supersample

supersample(
    clip: VideoNode,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNode

Supersample a clip by a given scaling factor.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

- clip (VideoNode) – The source clip.
- rfactor (float, default: 2.0) – Scaling factor for supersampling.
- shift (tuple[TopShift, LeftShift], default: (0, 0)) – Subpixel shift (top, left) applied during scaling.
- **kwargs (Any, default: {}) – Additional arguments forwarded to the scale function.

Raises:

- CustomValueError – If resulting resolution is non-positive.

Returns:

- VideoNode – The supersampled clip.

Source code in vskernels/abstract/base.py, lines 501-534.
C4F16
C4F16(
backend: BackendLike | None = None,
tiles: int | tuple[int, int] | None = None,
tilesize: int | tuple[int, int] | None = None,
overlap: int | tuple[int, int] | None = None,
max_instances: int = 2,
*,
kernel: KernelLike = Catrom,
scaler: ScalerLike | None = None,
shifter: KernelLike | None = None,
**kwargs: Any
)
Bases: BaseArtCNNLuma
This has 4 internal convolution layers with 16 filters each.
The currently fastest variant. Not really recommended for any filtering; it should strictly be used for real-time applications, and even then the other non-R models should be fast enough...
Example usage:
from vsscale import ArtCNN
doubled = ArtCNN.C4F16().scale(clip, clip.width * 2, clip.height * 2)
Initializes the scaler with the specified parameters.
Parameters:

- backend (BackendLike | None, default: None) – The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
- tiles (int | tuple[int, int] | None, default: None) – Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.
- tilesize (int | tuple[int, int] | None, default: None) – The size of each tile when splitting the image (if tiles are enabled).
- overlap (int | tuple[int, int] | None, default: None) – The size of overlap between tiles.
- max_instances (int, default: 2) – Maximum instances to spawn when scaling a variable resolution clip.
- kernel (KernelLike, default: Catrom) – Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
- scaler (ScalerLike | None, default: None) – Scaler used for scaling operations. Defaults to kernel.
- shifter (KernelLike | None, default: None) – Kernel used for shifting operations. Defaults to kernel.
- **kwargs (Any, default: {}) – Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
Methods:

- calc_tilesize – Reimplementation of vsmlrt.calc_tilesize helper function.
- ensure_obj – Ensure that the input is a scaler instance, resolving it if necessary.
- from_param – Resolve and return a scaler type from a given input (string, type, or instance).
- get_scale_args – Generate the keyword arguments used for scaling.
- implemented_funcs – Returns a set of function names that are implemented in the current class and the parent classes.
- inference – Runs inference on the given video clip using the configured model and backend.
- kernel_radius – Return the effective kernel radius for the scaler.
- postprocess_clip – Handles postprocessing of the model's output after inference.
- preprocess_clip
- scale – Scale the given clip using the ONNX model.
- supersample – Supersample a clip by a given scaling factor.

Attributes:

- backend
- kernel
- kwargs (dict[str, Any]) – Arguments passed to the implemented funcs or internal scale function.
- max_instances
- model
- multiple
- overlap
- overlap_h
- overlap_w
- pretty_string (str) – Cached property returning a user-friendly string representation.
- scale_function (Callable[..., VideoNode]) – Scale function called internally when performing scaling operations.
- scaler
- shifter
- tiles
- tilesize

Source code in vsscale/onnx.py, lines 522-565.

kwargs instance-attribute

Arguments passed to the implemented funcs or internal scale function.

pretty_string property

pretty_string: str

Cached property returning a user-friendly string representation.

Returns:

- str – Pretty-printed string with arguments.

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

calc_tilesize

Reimplementation of vsmlrt.calc_tilesize helper function.

Source code in vsscale/onnx.py, lines 386-403.

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

- scaler (str | type[Self] | Self | None, default: None) – Scaler identifier (string, class, or instance).
- func_except (FuncExcept | None, default: None) – Function returned for custom error handling.

Returns:

- Self – Scaler instance.

Source code in vskernels/abstract/base.py, lines 385-402.

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

- scaler (str | type[Self] | Self | None, default: None) – Scaler identifier (string, class, or instance).
- func_except (FuncExcept | None, default: None) – Function returned for custom error handling.

Returns:

- type[Self] – The resolved scaler type.

Source code in vskernels/abstract/base.py, lines 366-383.

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

- clip (VideoNode) – The source clip.
- shift (tuple[TopShift, LeftShift], default: (0, 0)) – Subpixel shift (top, left).
- width (int | None, default: None) – Target width.
- height (int | None, default: None) – Target height.
- **kwargs (Any, default: {}) – Extra parameters to merge.

Returns:

- dict[str, Any] – The keyword arguments.

Source code in vskernels/abstract/base.py, lines 536-557.

implemented_funcs classmethod

Returns a set of function names that are implemented in the current class and the parent classes.

These functions determine which keyword arguments will be extracted from the init method.

Source code in vskernels/abstract/base.py, lines 443-454.

inference

inference(clip: VideoNode, **kwargs: Any) -> VideoNode

Runs inference on the given video clip using the configured model and backend.

Source code in vsscale/onnx.py, lines 434-449.

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Raises:

- CustomNotImplementedError – If no kernel radius is defined.

Returns:

- int – Kernel radius.

Source code in vskernels/abstract/base.py, lines 406-417.

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode

Handles postprocessing of the model's output after inference.

Source code in vsscale/onnx.py, lines 417-432.

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode

Source code in vsscale/onnx.py, lines 569-570.

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> VideoNode

Scale the given clip using the ONNX model.

Parameters:

- clip (VideoNode) – The input clip to be scaled.
- width (int | None, default: None) – The target width for scaling. If None, the width of the input clip will be used.
- height (int | None, default: None) – The target height for scaling. If None, the height of the input clip will be used.
- shift (tuple[float, float], default: (0, 0)) – A tuple representing the shift values for the x and y axes.
- **kwargs (Any, default: {}) – Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _final_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method. Use the prefix inference_ to pass an argument to the inference method.

Returns:

- VideoNode – The scaled clip.

Source code in vsscale/onnx.py, lines 309-384.

supersample

supersample(
    clip: VideoNode,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNode

Supersample a clip by a given scaling factor.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

- clip (VideoNode) – The source clip.
- rfactor (float, default: 2.0) – Scaling factor for supersampling.
- shift (tuple[TopShift, LeftShift], default: (0, 0)) – Subpixel shift (top, left) applied during scaling.
- **kwargs (Any, default: {}) – Additional arguments forwarded to the scale function.

Raises:

- CustomValueError – If resulting resolution is non-positive.

Returns:

- VideoNode – The supersampled clip.

Source code in vskernels/abstract/base.py, lines 501-534.
C4F16_DN
C4F16_DN(
backend: BackendLike | None = None,
tiles: int | tuple[int, int] | None = None,
tilesize: int | tuple[int, int] | None = None,
overlap: int | tuple[int, int] | None = None,
max_instances: int = 2,
*,
kernel: KernelLike = Catrom,
scaler: ScalerLike | None = None,
shifter: KernelLike | None = None,
**kwargs: Any
)
Bases: BaseArtCNNLuma
The same as C4F16 but intended to also denoise. Works well on noisy sources when you don't want any sharpening.
Example usage:
from vsscale import ArtCNN
doubled = ArtCNN.C4F16_DN().scale(clip, clip.width * 2, clip.height * 2)
Initializes the scaler with the specified parameters.
Parameters:

- backend (BackendLike | None, default: None) – The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
- tiles (int | tuple[int, int] | None, default: None) – Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.
- tilesize (int | tuple[int, int] | None, default: None) – The size of each tile when splitting the image (if tiles are enabled).
- overlap (int | tuple[int, int] | None, default: None) – The size of overlap between tiles.
- max_instances (int, default: 2) – Maximum instances to spawn when scaling a variable resolution clip.
- kernel (KernelLike, default: Catrom) – Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
- scaler (ScalerLike | None, default: None) – Scaler used for scaling operations. Defaults to kernel.
- shifter (KernelLike | None, default: None) – Kernel used for shifting operations. Defaults to kernel.
- **kwargs (Any, default: {}) – Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
Methods:

- calc_tilesize – Reimplementation of vsmlrt.calc_tilesize helper function.
- ensure_obj – Ensure that the input is a scaler instance, resolving it if necessary.
- from_param – Resolve and return a scaler type from a given input (string, type, or instance).
- get_scale_args – Generate the keyword arguments used for scaling.
- implemented_funcs – Returns a set of function names that are implemented in the current class and the parent classes.
- inference – Runs inference on the given video clip using the configured model and backend.
- kernel_radius – Return the effective kernel radius for the scaler.
- postprocess_clip – Handles postprocessing of the model's output after inference.
- preprocess_clip
- scale – Scale the given clip using the ONNX model.
- supersample – Supersample a clip by a given scaling factor.

Attributes:

- backend
- kernel
- kwargs (dict[str, Any]) – Arguments passed to the implemented funcs or internal scale function.
- max_instances
- model
- multiple
- overlap
- overlap_h
- overlap_w
- pretty_string (str) – Cached property returning a user-friendly string representation.
- scale_function (Callable[..., VideoNode]) – Scale function called internally when performing scaling operations.
- scaler
- shifter
- tiles
- tilesize

Source code in vsscale/onnx.py, lines 522-565.

kwargs instance-attribute

Arguments passed to the implemented funcs or internal scale function.

pretty_string property

pretty_string: str

Cached property returning a user-friendly string representation.

Returns:

- str – Pretty-printed string with arguments.

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

calc_tilesize

Reimplementation of vsmlrt.calc_tilesize helper function.

Source code in vsscale/onnx.py, lines 386-403.

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

- scaler (str | type[Self] | Self | None, default: None) – Scaler identifier (string, class, or instance).
- func_except (FuncExcept | None, default: None) – Function returned for custom error handling.

Returns:

- Self – Scaler instance.

Source code in vskernels/abstract/base.py, lines 385-402.

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

- scaler (str | type[Self] | Self | None, default: None) – Scaler identifier (string, class, or instance).
- func_except (FuncExcept | None, default: None) – Function returned for custom error handling.

Returns:

- type[Self] – The resolved scaler type.

Source code in vskernels/abstract/base.py, lines 366-383.

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

- clip (VideoNode) – The source clip.
- shift (tuple[TopShift, LeftShift], default: (0, 0)) – Subpixel shift (top, left).
- width (int | None, default: None) – Target width.
- height (int | None, default: None) – Target height.
- **kwargs (Any, default: {}) – Extra parameters to merge.

Returns:

- dict[str, Any] – The keyword arguments.

Source code in vskernels/abstract/base.py, lines 536-557.

implemented_funcs classmethod

Returns a set of function names that are implemented in the current class and the parent classes.

These functions determine which keyword arguments will be extracted from the init method.

Source code in vskernels/abstract/base.py, lines 443-454.

inference

inference(clip: VideoNode, **kwargs: Any) -> VideoNode

Runs inference on the given video clip using the configured model and backend.

Source code in vsscale/onnx.py, lines 434-449.

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Raises:

- CustomNotImplementedError – If no kernel radius is defined.

Returns:

- int – Kernel radius.

Source code in vskernels/abstract/base.py, lines 406-417.

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode

Handles postprocessing of the model's output after inference.

Source code in vsscale/onnx.py, lines 417-432.

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode

Source code in vsscale/onnx.py, lines 569-570.

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> VideoNode

Scale the given clip using the ONNX model.

Parameters:

- clip (VideoNode) – The input clip to be scaled.
- width (int | None, default: None) – The target width for scaling. If None, the width of the input clip will be used.
- height (int | None, default: None) – The target height for scaling. If None, the height of the input clip will be used.
- shift (tuple[float, float], default: (0, 0)) – A tuple representing the shift values for the x and y axes.
- **kwargs (Any, default: {}) – Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _final_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method. Use the prefix inference_ to pass an argument to the inference method.

Returns:

- VideoNode – The scaled clip.

Source code in vsscale/onnx.py, lines 309-384.

supersample

supersample(
    clip: VideoNode,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNode

Supersample a clip by a given scaling factor.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

- clip (VideoNode) – The source clip.
- rfactor (float, default: 2.0) – Scaling factor for supersampling.
- shift (tuple[TopShift, LeftShift], default: (0, 0)) – Subpixel shift (top, left) applied during scaling.
- **kwargs (Any, default: {}) – Additional arguments forwarded to the scale function.

Raises:

- CustomValueError – If resulting resolution is non-positive.

Returns:

- VideoNode – The supersampled clip.

Source code in vskernels/abstract/base.py, lines 501-534.
C4F16_DS
C4F16_DS(
backend: BackendLike | None = None,
tiles: int | tuple[int, int] | None = None,
tilesize: int | tuple[int, int] | None = None,
overlap: int | tuple[int, int] | None = None,
max_instances: int = 2,
*,
kernel: KernelLike = Catrom,
scaler: ScalerLike | None = None,
shifter: KernelLike | None = None,
**kwargs: Any
)
Bases: BaseArtCNNLuma
The same as C4F16 but intended to also denoise and sharpen.
Example usage:
from vsscale import ArtCNN
doubled = ArtCNN.C4F16_DS().scale(clip, clip.width * 2, clip.height * 2)
Initializes the scaler with the specified parameters.
Parameters:

- backend (BackendLike | None, default: None) – The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
- tiles (int | tuple[int, int] | None, default: None) – Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.
- tilesize (int | tuple[int, int] | None, default: None) – The size of each tile when splitting the image (if tiles are enabled).
- overlap (int | tuple[int, int] | None, default: None) – The size of overlap between tiles.
- max_instances (int, default: 2) – Maximum instances to spawn when scaling a variable resolution clip.
- kernel (KernelLike, default: Catrom) – Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
- scaler (ScalerLike | None, default: None) – Scaler used for scaling operations. Defaults to kernel.
- shifter (KernelLike | None, default: None) – Kernel used for shifting operations. Defaults to kernel.
- **kwargs (Any, default: {}) – Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
Methods:
-
calc_tilesize
–Reimplementation of vsmlrt.calc_tilesize helper function
-
ensure_obj
–Ensure that the input is a scaler instance, resolving it if necessary.
-
from_param
–Resolve and return a scaler type from a given input (string, type, or instance).
-
get_scale_args
–Generate the keyword arguments used for scaling.
-
implemented_funcs
–Returns a set of function names that are implemented in the current class and the parent classes.
-
inference
–Runs inference on the given video clip using the configured model and backend.
-
kernel_radius
–Return the effective kernel radius for the scaler.
-
postprocess_clip
–Handles postprocessing of the model's output after inference.
-
preprocess_clip
– -
scale
–Scale the given clip using the ONNX model.
-
supersample
–Supersample a clip by a given scaling factor.
Attributes:
-
backend
– -
kernel
– -
kwargs
(dict[str, Any]
) –Arguments passed to the implemented funcs or internal scale function.
-
max_instances
– -
model
– -
multiple
– -
overlap
– -
overlap_h
– -
overlap_w
– -
pretty_string
(str
) –Cached property returning a user-friendly string representation.
-
scale_function
(Callable[..., VideoNode]
) –Scale function called internally when performing scaling operations.
-
scaler
– -
shifter
– -
tiles
– -
tilesize
–
Source code in vsscale/onnx.py
522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 |
|
kwargs instance-attribute
¶
Arguments passed to the implemented funcs or internal scale function.
pretty_string property
¶
pretty_string: str
Cached property returning a user-friendly string representation.
Returns:
-
str
–Pretty-printed string with arguments.
scale_function instance-attribute
¶
scale_function: Callable[..., VideoNode]
Scale function called internally when performing scaling operations.
calc_tilesize ¶
Reimplementation of vsmlrt.calc_tilesize helper function
Source code in vsscale/onnx.py
386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 |
|
ensure_obj classmethod
¶
ensure_obj(
scaler: str | type[Self] | Self | None = None,
/,
func_except: FuncExcept | None = None,
) -> Self
Ensure that the input is a scaler instance, resolving it if necessary.
Parameters:
-
scaler
¶str | type[Self] | Self | None
, default:None
) –Scaler identifier (string, class, or instance).
-
func_except
¶FuncExcept | None
, default:None
) –Function returned for custom error handling.
Returns:
-
Self
–Scaler instance.
Source code in vskernels/abstract/base.py, lines 385-402
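A hedged illustration of what ensure_obj resolves; the exact resolution rules live in vskernels and may accept more input forms than shown here:
from vsscale import ArtCNN
scaler = ArtCNN.ensure_obj(None)      # no input: assumed to fall back to an ArtCNN instance
scaler = ArtCNN.ensure_obj(ArtCNN())  # an existing instance is returned unchanged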
from_param classmethod
¶
from_param(
scaler: str | type[Self] | Self | None = None,
/,
func_except: FuncExcept | None = None,
) -> type[Self]
Resolve and return a scaler type from a given input (string, type, or instance).
Parameters:
-
scaler
¶str | type[Self] | Self | None
, default:None
) –Scaler identifier (string, class, or instance).
-
func_except
¶FuncExcept | None
, default:None
) –Function returned for custom error handling.
Returns:
Source code in vskernels/abstract/base.py, lines 366-383
get_scale_args ¶
get_scale_args(
clip: VideoNode,
shift: tuple[TopShift, LeftShift] = (0, 0),
width: int | None = None,
height: int | None = None,
**kwargs: Any
) -> dict[str, Any]
Generate the keyword arguments used for scaling.
Parameters:
-
clip
¶VideoNode
) –The source clip.
-
shift
¶tuple[TopShift, LeftShift]
, default:(0, 0)
) –Subpixel shift (top, left).
-
width
¶int | None
, default:None
) –Target width.
-
height
¶int | None
, default:None
) –Target height.
-
**kwargs
¶Any
, default:{}
) –Extra parameters to merge.
Returns:
Source code in vskernels/abstract/base.py, lines 536-557
implemented_funcs classmethod
¶
Returns a set of function names that are implemented in the current class and the parent classes.
These functions determine which keyword arguments will be extracted from the init method.
Returns:
Source code in vskernels/abstract/base.py, lines 443-454
inference ¶
inference(clip: VideoNode, **kwargs: Any) -> VideoNode
Runs inference on the given video clip using the configured model and backend.
Source code in vsscale/onnx.py, lines 434-449
kernel_radius ¶
kernel_radius() -> int
Return the effective kernel radius for the scaler.
Raises:
-
CustomNotImplementedError
–If no kernel radius is defined.
Returns:
-
int
–Kernel radius.
Source code in vskernels/abstract/base.py, lines 406-417
postprocess_clip ¶
postprocess_clip(
clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode
Handles postprocessing of the model's output after inference.
Source code in vsscale/onnx.py, lines 417-432
preprocess_clip ¶
preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py, lines 569-570
scale ¶
scale(
clip: VideoNode,
width: int | None = None,
height: int | None = None,
shift: tuple[float, float] = (0, 0),
**kwargs: Any
) -> VideoNode
Scale the given clip using the ONNX model.
Parameters:
-
clip
¶VideoNode
) –The input clip to be scaled.
-
width
¶int | None
, default:None
) –The target width for scaling. If None, the width of the input clip will be used.
-
height
¶int | None
, default:None
) –The target height for scaling. If None, the height of the input clip will be used.
-
shift
¶tuple[float, float]
, default:(0, 0)
) –A tuple representing the shift values for the x and y axes.
-
**kwargs
¶Any
, default:{}
) –Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _final_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method. Use the prefix inference_ to pass an argument to the inference method.
Returns:
-
VideoNode
–The scaled clip.
Source code in vsscale/onnx.py, lines 309-384
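For example, building on the defaults documented above (the shift values are illustrative only):
from vsscale import ArtCNN
# width and height default to the input clip's dimensions when omitted.
processed = ArtCNN().scale(clip)
# A half-pixel subpixel shift applied alongside the doubling.
doubled = ArtCNN().scale(clip, clip.width * 2, clip.height * 2, shift=(0.5, 0.5))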
supersample ¶
supersample(
clip: VideoNode,
rfactor: float = 2.0,
shift: tuple[TopShift, LeftShift] = (0, 0),
**kwargs: Any
) -> VideoNode
Supersample a clip by a given scaling factor.
Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.
Parameters:
-
clip
¶VideoNode
) –The source clip.
-
rfactor
¶float
, default:2.0
) –Scaling factor for supersampling.
-
shift
¶tuple[TopShift, LeftShift]
, default:(0, 0)
) –Subpixel shift (top, left) applied during scaling.
-
**kwargs
¶Any
, default:{}
) –Additional arguments forwarded to the scale function.
Raises:
-
CustomValueError
–If resulting resolution is non-positive.
Returns:
-
VideoNode
–The supersampled clip.
Source code in vskernels/abstract/base.py, lines 501-534
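For example, doubling via supersample instead of spelling out the target dimensions; by the rfactor default this is a sketch equivalent of the scale call shown earlier:
from vsscale import ArtCNN
doubled = ArtCNN().supersample(clip, 2)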
C4F32 ¶
C4F32(
backend: BackendLike | None = None,
tiles: int | tuple[int, int] | None = None,
tilesize: int | tuple[int, int] | None = None,
overlap: int | tuple[int, int] | None = None,
max_instances: int = 2,
*,
kernel: KernelLike = Catrom,
scaler: ScalerLike | None = None,
shifter: KernelLike | None = None,
**kwargs: Any
)
Bases: BaseArtCNNLuma
This has 4 internal convolution layers with 32 filters each.
Use this if you need an even faster model.
Example usage:
from vsscale import ArtCNN
doubled = ArtCNN.C4F32().scale(clip, clip.width * 2, clip.height * 2)
Initializes the scaler with the specified parameters.
Parameters:
-
backend
¶BackendLike | None
, default:None
) –The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
-
tiles
¶int | tuple[int, int] | None
, default:None
) –Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when tiling is used.
-
tilesize
¶int | tuple[int, int] | None
, default:None
) –The size of each tile when splitting the image (if tiles are enabled).
-
overlap
¶int | tuple[int, int] | None
, default:None
) –The size of overlap between tiles.
-
max_instances
¶int
, default:2
) –Maximum instances to spawn when scaling a variable resolution clip.
-
kernel
¶KernelLike
, default:Catrom
) –Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
-
scaler
¶ScalerLike | None
, default:None
) –Scaler used for scaling operations. Defaults to kernel.
-
shifter
¶KernelLike | None
, default:None
) –Kernel used for shifting operations. Defaults to kernel.
-
**kwargs
¶Any
, default:{}
) –Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
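Leaving backend as None lets the most suitable backend be picked automatically; a hedged sketch of passing one explicitly instead, assuming your vs-mlrt install provides Backend.TRT and its fp16 flag:
from vsmlrt import Backend
from vsscale import ArtCNN
doubled = ArtCNN.C4F32(backend=Backend.TRT(fp16=True)).scale(clip, clip.width * 2, clip.height * 2)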
Methods:
-
calc_tilesize
–Reimplementation of vsmlrt.calc_tilesize helper function
-
ensure_obj
–Ensure that the input is a scaler instance, resolving it if necessary.
-
from_param
–Resolve and return a scaler type from a given input (string, type, or instance).
-
get_scale_args
–Generate the keyword arguments used for scaling.
-
implemented_funcs
–Returns a set of function names that are implemented in the current class and the parent classes.
-
inference
–Runs inference on the given video clip using the configured model and backend.
-
kernel_radius
–Return the effective kernel radius for the scaler.
-
postprocess_clip
–Handles postprocessing of the model's output after inference.
-
preprocess_clip
– -
scale
–Scale the given clip using the ONNX model.
-
supersample
–Supersample a clip by a given scaling factor.
Attributes:
-
backend
– -
kernel
– -
kwargs
(dict[str, Any]
) –Arguments passed to the implemented funcs or internal scale function.
-
max_instances
– -
model
– -
multiple
– -
overlap
– -
overlap_h
– -
overlap_w
– -
pretty_string
(str
) –Cached property returning a user-friendly string representation.
-
scale_function
(Callable[..., VideoNode]
) –Scale function called internally when performing scaling operations.
-
scaler
– -
shifter
– -
tiles
– -
tilesize
–
Source code in vsscale/onnx.py, lines 522-565
kwargs instance-attribute
¶
Arguments passed to the implemented funcs or internal scale function.
pretty_string property
¶
pretty_string: str
Cached property returning a user-friendly string representation.
Returns:
-
str
–Pretty-printed string with arguments.
scale_function instance-attribute
¶
scale_function: Callable[..., VideoNode]
Scale function called internally when performing scaling operations.
calc_tilesize ¶
Reimplementation of vsmlrt.calc_tilesize helper function
Source code in vsscale/onnx.py, lines 386-403
ensure_obj classmethod
¶
ensure_obj(
scaler: str | type[Self] | Self | None = None,
/,
func_except: FuncExcept | None = None,
) -> Self
Ensure that the input is a scaler instance, resolving it if necessary.
Parameters:
-
scaler
¶str | type[Self] | Self | None
, default:None
) –Scaler identifier (string, class, or instance).
-
func_except
¶FuncExcept | None
, default:None
) –Function returned for custom error handling.
Returns:
-
Self
–Scaler instance.
Source code in vskernels/abstract/base.py, lines 385-402
from_param classmethod
¶
from_param(
scaler: str | type[Self] | Self | None = None,
/,
func_except: FuncExcept | None = None,
) -> type[Self]
Resolve and return a scaler type from a given input (string, type, or instance).
Parameters:
-
scaler
¶str | type[Self] | Self | None
, default:None
) –Scaler identifier (string, class, or instance).
-
func_except
¶FuncExcept | None
, default:None
) –Function returned for custom error handling.
Returns:
Source code in vskernels/abstract/base.py, lines 366-383
get_scale_args ¶
get_scale_args(
clip: VideoNode,
shift: tuple[TopShift, LeftShift] = (0, 0),
width: int | None = None,
height: int | None = None,
**kwargs: Any
) -> dict[str, Any]
Generate the keyword arguments used for scaling.
Parameters:
-
clip
¶VideoNode
) –The source clip.
-
shift
¶tuple[TopShift, LeftShift]
, default:(0, 0)
) –Subpixel shift (top, left).
-
width
¶int | None
, default:None
) –Target width.
-
height
¶int | None
, default:None
) –Target height.
-
**kwargs
¶Any
, default:{}
) –Extra parameters to merge.
Returns:
Source code in vskernels/abstract/base.py, lines 536-557
implemented_funcs classmethod
¶
Returns a set of function names that are implemented in the current class and the parent classes.
These functions determine which keyword arguments will be extracted from the init method.
Returns:
Source code in vskernels/abstract/base.py, lines 443-454
inference ¶
inference(clip: VideoNode, **kwargs: Any) -> VideoNode
Runs inference on the given video clip using the configured model and backend.
Source code in vsscale/onnx.py, lines 434-449
kernel_radius ¶
kernel_radius() -> int
Return the effective kernel radius for the scaler.
Raises:
-
CustomNotImplementedError
–If no kernel radius is defined.
Returns:
-
int
–Kernel radius.
Source code in vskernels/abstract/base.py, lines 406-417
postprocess_clip ¶
postprocess_clip(
clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode
Handles postprocessing of the model's output after inference.
Source code in vsscale/onnx.py, lines 417-432
preprocess_clip ¶
preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py, lines 569-570
scale ¶
scale(
clip: VideoNode,
width: int | None = None,
height: int | None = None,
shift: tuple[float, float] = (0, 0),
**kwargs: Any
) -> VideoNode
Scale the given clip using the ONNX model.
Parameters:
-
clip
¶VideoNode
) –The input clip to be scaled.
-
width
¶int | None
, default:None
) –The target width for scaling. If None, the width of the input clip will be used.
-
height
¶int | None
, default:None
) –The target height for scaling. If None, the height of the input clip will be used.
-
shift
¶tuple[float, float]
, default:(0, 0)
) –A tuple representing the shift values for the x and y axes.
-
**kwargs
¶Any
, default:{}
) –Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _final_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method. Use the prefix inference_ to pass an argument to the inference method.
Returns:
-
VideoNode
–The scaled clip.
Source code in vsscale/onnx.py, lines 309-384
supersample ¶
supersample(
clip: VideoNode,
rfactor: float = 2.0,
shift: tuple[TopShift, LeftShift] = (0, 0),
**kwargs: Any
) -> VideoNode
Supersample a clip by a given scaling factor.
Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.
Parameters:
-
clip
¶VideoNode
) –The source clip.
-
rfactor
¶float
, default:2.0
) –Scaling factor for supersampling.
-
shift
¶tuple[TopShift, LeftShift]
, default:(0, 0)
) –Subpixel shift (top, left) applied during scaling.
-
**kwargs
¶Any
, default:{}
) –Additional arguments forwarded to the scale function.
Raises:
-
CustomValueError
–If resulting resolution is non-positive.
Returns:
-
VideoNode
–The supersampled clip.
Source code in vskernels/abstract/base.py, lines 501-534
C4F32_Chroma ¶
C4F32_Chroma(
backend: BackendLike | None = None,
tiles: int | tuple[int, int] | None = None,
tilesize: int | tuple[int, int] | None = None,
overlap: int | tuple[int, int] | None = None,
max_instances: int = 2,
*,
kernel: KernelLike = Catrom,
scaler: ScalerLike | None = None,
shifter: KernelLike | None = None,
**kwargs: Any
)
Bases: BaseArtCNNChroma
The smaller of the chroma models.
These models don't double the input clip; they just try to enhance the chroma using luma information.
Example usage:
from vsscale import ArtCNN
chroma_upscaled = ArtCNN.C4F32_Chroma().scale(clip)
Initializes the scaler with the specified parameters.
Parameters:
-
backend
¶BackendLike | None
, default:None
) –The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
-
tiles
¶int | tuple[int, int] | None
, default:None
) –Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when tiling is used.
-
tilesize
¶int | tuple[int, int] | None
, default:None
) –The size of each tile when splitting the image (if tiles are enabled).
-
overlap
¶int | tuple[int, int] | None
, default:None
) –The size of overlap between tiles.
-
max_instances
¶int
, default:2
) –Maximum instances to spawn when scaling a variable resolution clip.
-
kernel
¶KernelLike
, default:Catrom
) –Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
-
scaler
¶ScalerLike | None
, default:None
) –Scaler used for scaling operations. Defaults to kernel.
-
shifter
¶KernelLike | None
, default:None
) –Kernel used for shifting operations. Defaults to kernel.
-
**kwargs
¶Any
, default:{}
) –Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
Methods:
-
calc_tilesize
–Reimplementation of vsmlrt.calc_tilesize helper function
-
ensure_obj
–Ensure that the input is a scaler instance, resolving it if necessary.
-
from_param
–Resolve and return a scaler type from a given input (string, type, or instance).
-
get_scale_args
–Generate the keyword arguments used for scaling.
-
implemented_funcs
–Returns a set of function names that are implemented in the current class and the parent classes.
-
inference
– -
kernel_radius
–Return the effective kernel radius for the scaler.
-
postprocess_clip
– -
preprocess_clip
– -
scale
–Scale the given clip using the ONNX model.
-
supersample
–Supersample a clip by a given scaling factor.
Attributes:
-
backend
– -
kernel
– -
kwargs
(dict[str, Any]
) –Arguments passed to the implemented funcs or internal scale function.
-
max_instances
– -
model
– -
multiple
– -
overlap
– -
overlap_h
– -
overlap_w
– -
pretty_string
(str
) –Cached property returning a user-friendly string representation.
-
scale_function
(Callable[..., VideoNode]
) –Scale function called internally when performing scaling operations.
-
scaler
– -
shifter
– -
tiles
– -
tilesize
–
Source code in vsscale/onnx.py, lines 522-565
kwargs instance-attribute
¶
Arguments passed to the implemented funcs or internal scale function.
pretty_string property
¶
pretty_string: str
Cached property returning a user-friendly string representation.
Returns:
-
str
–Pretty-printed string with arguments.
scale_function instance-attribute
¶
scale_function: Callable[..., VideoNode]
Scale function called internally when performing scaling operations.
calc_tilesize ¶
Reimplementation of vsmlrt.calc_tilesize helper function
Source code in vsscale/onnx.py, lines 386-403
ensure_obj classmethod
¶
ensure_obj(
scaler: str | type[Self] | Self | None = None,
/,
func_except: FuncExcept | None = None,
) -> Self
Ensure that the input is a scaler instance, resolving it if necessary.
Parameters:
-
scaler
¶str | type[Self] | Self | None
, default:None
) –Scaler identifier (string, class, or instance).
-
func_except
¶FuncExcept | None
, default:None
) –Function returned for custom error handling.
Returns:
-
Self
–Scaler instance.
Source code in vskernels/abstract/base.py, lines 385-402
from_param classmethod
¶
from_param(
scaler: str | type[Self] | Self | None = None,
/,
func_except: FuncExcept | None = None,
) -> type[Self]
Resolve and return a scaler type from a given input (string, type, or instance).
Parameters:
-
scaler
¶str | type[Self] | Self | None
, default:None
) –Scaler identifier (string, class, or instance).
-
func_except
¶FuncExcept | None
, default:None
) –Function returned for custom error handling.
Returns:
Source code in vskernels/abstract/base.py, lines 366-383
get_scale_args ¶
get_scale_args(
clip: VideoNode,
shift: tuple[TopShift, LeftShift] = (0, 0),
width: int | None = None,
height: int | None = None,
**kwargs: Any
) -> dict[str, Any]
Generate the keyword arguments used for scaling.
Parameters:
-
clip
¶VideoNode
) –The source clip.
-
shift
¶tuple[TopShift, LeftShift]
, default:(0, 0)
) –Subpixel shift (top, left).
-
width
¶int | None
, default:None
) –Target width.
-
height
¶int | None
, default:None
) –Target height.
-
**kwargs
¶Any
, default:{}
) –Extra parameters to merge.
Returns:
Source code in vskernels/abstract/base.py, lines 536-557
implemented_funcs classmethod
¶
Returns a set of function names that are implemented in the current class and the parent classes.
These functions determine which keyword arguments will be extracted from the init method.
Returns:
Source code in vskernels/abstract/base.py, lines 443-454
inference ¶
inference(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py, lines 638-654
kernel_radius ¶
kernel_radius() -> int
Return the effective kernel radius for the scaler.
Raises:
-
CustomNotImplementedError
–If no kernel radius is defined.
Returns:
-
int
–Kernel radius.
Source code in vskernels/abstract/base.py, lines 406-417
postprocess_clip ¶
postprocess_clip(
clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode
Source code in vsscale/onnx.py, lines 634-636
preprocess_clip ¶
preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py, lines 607-632
scale ¶
scale(
clip: VideoNode,
width: int | None = None,
height: int | None = None,
shift: tuple[float, float] = (0, 0),
**kwargs: Any
) -> VideoNode
Scale the given clip using the ONNX model.
Parameters:
-
clip
¶VideoNode
) –The input clip to be scaled.
-
width
¶int | None
, default:None
) –The target width for scaling. If None, the width of the input clip will be used.
-
height
¶int | None
, default:None
) –The target height for scaling. If None, the height of the input clip will be used.
-
shift
¶tuple[float, float]
, default:(0, 0)
) –A tuple representing the shift values for the x and y axes.
-
**kwargs
¶Any
, default:{}
) –Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _final_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method. Use the prefix inference_ to pass an argument to the inference method.
Returns:
-
VideoNode
–The scaled clip.
Source code in vsscale/onnx.py, lines 309-384
supersample ¶
supersample(
clip: VideoNode,
rfactor: float = 2.0,
shift: tuple[TopShift, LeftShift] = (0, 0),
**kwargs: Any
) -> VideoNode
Supersample a clip by a given scaling factor.
Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.
Parameters:
-
clip
¶VideoNode
) –The source clip.
-
rfactor
¶float
, default:2.0
) –Scaling factor for supersampling.
-
shift
¶tuple[TopShift, LeftShift]
, default:(0, 0)
) –Subpixel shift (top, left) applied during scaling.
-
**kwargs
¶Any
, default:{}
) –Additional arguments forwarded to the scale function.
Raises:
-
CustomValueError
–If resulting resolution is non-positive.
Returns:
-
VideoNode
–The supersampled clip.
Source code in vskernels/abstract/base.py, lines 501-534
C4F32_DN ¶
C4F32_DN(
backend: BackendLike | None = None,
tiles: int | tuple[int, int] | None = None,
tilesize: int | tuple[int, int] | None = None,
overlap: int | tuple[int, int] | None = None,
max_instances: int = 2,
*,
kernel: KernelLike = Catrom,
scaler: ScalerLike | None = None,
shifter: KernelLike | None = None,
**kwargs: Any
)
Bases: BaseArtCNNLuma
The same as C4F32 but intended to also denoise. Works well on noisy sources when you don't want any sharpening.
Example usage:
from vsscale import ArtCNN
doubled = ArtCNN.C4F32_DN().scale(clip, clip.width * 2, clip.height * 2)
Initializes the scaler with the specified parameters.
Parameters:
-
backend
¶BackendLike | None
, default:None
) –The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
-
tiles
¶int | tuple[int, int] | None
, default:None
) –Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when tiling is used.
-
tilesize
¶int | tuple[int, int] | None
, default:None
) –The size of each tile when splitting the image (if tiles are enabled).
-
overlap
¶int | tuple[int, int] | None
, default:None
) –The size of overlap between tiles.
-
max_instances
¶int
, default:2
) –Maximum instances to spawn when scaling a variable resolution clip.
-
kernel
¶KernelLike
, default:Catrom
) –Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
-
scaler
¶ScalerLike | None
, default:None
) –Scaler used for scaling operations. Defaults to kernel.
-
shifter
¶KernelLike | None
, default:None
) –Kernel used for shifting operations. Defaults to kernel.
-
**kwargs
¶Any
, default:{}
) –Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
Methods:
-
calc_tilesize
–Reimplementation of vsmlrt.calc_tilesize helper function
-
ensure_obj
–Ensure that the input is a scaler instance, resolving it if necessary.
-
from_param
–Resolve and return a scaler type from a given input (string, type, or instance).
-
get_scale_args
–Generate the keyword arguments used for scaling.
-
implemented_funcs
–Returns a set of function names that are implemented in the current class and the parent classes.
-
inference
–Runs inference on the given video clip using the configured model and backend.
-
kernel_radius
–Return the effective kernel radius for the scaler.
-
postprocess_clip
–Handles postprocessing of the model's output after inference.
-
preprocess_clip
– -
scale
–Scale the given clip using the ONNX model.
-
supersample
–Supersample a clip by a given scaling factor.
Attributes:
-
backend
– -
kernel
– -
kwargs
(dict[str, Any]
) –Arguments passed to the implemented funcs or internal scale function.
-
max_instances
– -
model
– -
multiple
– -
overlap
– -
overlap_h
– -
overlap_w
– -
pretty_string
(str
) –Cached property returning a user-friendly string representation.
-
scale_function
(Callable[..., VideoNode]
) –Scale function called internally when performing scaling operations.
-
scaler
– -
shifter
– -
tiles
– -
tilesize
–
Source code in vsscale/onnx.py, lines 522-565
kwargs instance-attribute
¶
Arguments passed to the implemented funcs or internal scale function.
pretty_string property
¶
pretty_string: str
Cached property returning a user-friendly string representation.
Returns:
-
str
–Pretty-printed string with arguments.
scale_function instance-attribute
¶
scale_function: Callable[..., VideoNode]
Scale function called internally when performing scaling operations.
calc_tilesize ¶
Reimplementation of vsmlrt.calc_tilesize helper function
Source code in vsscale/onnx.py, lines 386-403
ensure_obj classmethod
¶
ensure_obj(
scaler: str | type[Self] | Self | None = None,
/,
func_except: FuncExcept | None = None,
) -> Self
Ensure that the input is a scaler instance, resolving it if necessary.
Parameters:
-
scaler
¶str | type[Self] | Self | None
, default:None
) –Scaler identifier (string, class, or instance).
-
func_except
¶FuncExcept | None
, default:None
) –Function returned for custom error handling.
Returns:
-
Self
–Scaler instance.
Source code in vskernels/abstract/base.py, lines 385-402
from_param classmethod
¶
from_param(
scaler: str | type[Self] | Self | None = None,
/,
func_except: FuncExcept | None = None,
) -> type[Self]
Resolve and return a scaler type from a given input (string, type, or instance).
Parameters:
-
scaler
¶str | type[Self] | Self | None
, default:None
) –Scaler identifier (string, class, or instance).
-
func_except
¶FuncExcept | None
, default:None
) –Function returned for custom error handling.
Returns:
Source code in vskernels/abstract/base.py, lines 366-383
get_scale_args ¶
get_scale_args(
clip: VideoNode,
shift: tuple[TopShift, LeftShift] = (0, 0),
width: int | None = None,
height: int | None = None,
**kwargs: Any
) -> dict[str, Any]
Generate the keyword arguments used for scaling.
Parameters:
-
clip
¶VideoNode
) –The source clip.
-
shift
¶tuple[TopShift, LeftShift]
, default:(0, 0)
) –Subpixel shift (top, left).
-
width
¶int | None
, default:None
) –Target width.
-
height
¶int | None
, default:None
) –Target height.
-
**kwargs
¶Any
, default:{}
) –Extra parameters to merge.
Returns:
Source code in vskernels/abstract/base.py, lines 536-557
implemented_funcs classmethod
¶
Returns a set of function names that are implemented in the current class and the parent classes.
These functions determine which keyword arguments will be extracted from the init method.
Returns:
Source code in vskernels/abstract/base.py, lines 443-454
inference ¶
inference(clip: VideoNode, **kwargs: Any) -> VideoNode
Runs inference on the given video clip using the configured model and backend.
Source code in vsscale/onnx.py, lines 434-449
kernel_radius ¶
kernel_radius() -> int
Return the effective kernel radius for the scaler.
Raises:
-
CustomNotImplementedError
–If no kernel radius is defined.
Returns:
-
int
–Kernel radius.
Source code in vskernels/abstract/base.py, lines 406-417
postprocess_clip ¶
postprocess_clip(
clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode
Handles postprocessing of the model's output after inference.
Source code in vsscale/onnx.py, lines 417-432
preprocess_clip ¶
preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py, lines 569-570
scale ¶
scale(
clip: VideoNode,
width: int | None = None,
height: int | None = None,
shift: tuple[float, float] = (0, 0),
**kwargs: Any
) -> VideoNode
Scale the given clip using the ONNX model.
Parameters:
-
clip
¶VideoNode
) –The input clip to be scaled.
-
width
¶int | None
, default:None
) –The target width for scaling. If None, the width of the input clip will be used.
-
height
¶int | None
, default:None
) –The target height for scaling. If None, the height of the input clip will be used.
-
shift
¶tuple[float, float]
, default:(0, 0)
) –A tuple representing the shift values for the x and y axes.
-
**kwargs
¶Any
, default:{}
) –Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _final_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method. Use the prefix inference_ to pass an argument to the inference method.
Returns:
-
VideoNode
–The scaled clip.
Source code in vsscale/onnx.py, lines 309-384
supersample ¶
supersample(
clip: VideoNode,
rfactor: float = 2.0,
shift: tuple[TopShift, LeftShift] = (0, 0),
**kwargs: Any
) -> VideoNode
Supersample a clip by a given scaling factor.
Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.
Parameters:
-
clip
¶VideoNode
) –The source clip.
-
rfactor
¶float
, default:2.0
) –Scaling factor for supersampling.
-
shift
¶tuple[TopShift, LeftShift]
, default:(0, 0)
) –Subpixel shift (top, left) applied during scaling.
-
**kwargs
¶Any
, default:{}
) –Additional arguments forwarded to the scale function.
Raises:
-
CustomValueError
–If resulting resolution is non-positive.
Returns:
-
VideoNode
–The supersampled clip.
Source code in vskernels/abstract/base.py, lines 501-534
C4F32_DS ¶
C4F32_DS(
backend: BackendLike | None = None,
tiles: int | tuple[int, int] | None = None,
tilesize: int | tuple[int, int] | None = None,
overlap: int | tuple[int, int] | None = None,
max_instances: int = 2,
*,
kernel: KernelLike = Catrom,
scaler: ScalerLike | None = None,
shifter: KernelLike | None = None,
**kwargs: Any
)
Bases: BaseArtCNNLuma
The same as C4F32 but intended to also denoise and sharpen.
Example usage:
from vsscale import ArtCNN
doubled = ArtCNN.C4F32_DS().scale(clip, clip.width * 2, clip.height * 2)
Initializes the scaler with the specified parameters.
Parameters:
-
backend
¶BackendLike | None
, default:None
) –The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
-
tiles
¶int | tuple[int, int] | None
, default:None
) –Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when tiling is used.
-
tilesize
¶int | tuple[int, int] | None
, default:None
) –The size of each tile when splitting the image (if tiles are enabled).
-
overlap
¶int | tuple[int, int] | None
, default:None
) –The size of overlap between tiles.
-
max_instances
¶int
, default:2
) –Maximum instances to spawn when scaling a variable resolution clip.
-
kernel
¶KernelLike
, default:Catrom
) –Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
-
scaler
¶ScalerLike | None
, default:None
) –Scaler used for scaling operations. Defaults to kernel.
-
shifter
¶KernelLike | None
, default:None
) –Kernel used for shifting operations. Defaults to kernel.
-
**kwargs
¶Any
, default:{}
) –Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
Methods:
-
calc_tilesize
–Reimplementation of vsmlrt.calc_tilesize helper function
-
ensure_obj
–Ensure that the input is a scaler instance, resolving it if necessary.
-
from_param
–Resolve and return a scaler type from a given input (string, type, or instance).
-
get_scale_args
–Generate the keyword arguments used for scaling.
-
implemented_funcs
–Returns a set of function names that are implemented in the current class and the parent classes.
-
inference
–Runs inference on the given video clip using the configured model and backend.
-
kernel_radius
–Return the effective kernel radius for the scaler.
-
postprocess_clip
–Handles postprocessing of the model's output after inference.
-
preprocess_clip
– -
scale
–Scale the given clip using the ONNX model.
-
supersample
–Supersample a clip by a given scaling factor.
Attributes:
-
backend
– -
kernel
– -
kwargs
(dict[str, Any]
) –Arguments passed to the implemented funcs or internal scale function.
-
max_instances
– -
model
– -
multiple
– -
overlap
– -
overlap_h
– -
overlap_w
– -
pretty_string
(str
) –Cached property returning a user-friendly string representation.
-
scale_function
(Callable[..., VideoNode]
) –Scale function called internally when performing scaling operations.
-
scaler
– -
shifter
– -
tiles
– -
tilesize
–
Source code in vsscale/onnx.py, lines 522-565
kwargs instance-attribute
¶
Arguments passed to the implemented funcs or internal scale function.
pretty_string property
¶
pretty_string: str
Cached property returning a user-friendly string representation.
Returns:
-
str
–Pretty-printed string with arguments.
scale_function instance-attribute
¶
scale_function: Callable[..., VideoNode]
Scale function called internally when performing scaling operations.
calc_tilesize ¶
Reimplementation of vsmlrt.calc_tilesize helper function
Source code in vsscale/onnx.py, lines 386-403
ensure_obj classmethod
¶
ensure_obj(
scaler: str | type[Self] | Self | None = None,
/,
func_except: FuncExcept | None = None,
) -> Self
Ensure that the input is a scaler instance, resolving it if necessary.
Parameters:
-
scaler
¶str | type[Self] | Self | None
, default:None
) –Scaler identifier (string, class, or instance).
-
func_except
¶FuncExcept | None
, default:None
) –Function returned for custom error handling.
Returns:
-
Self
–Scaler instance.
Source code in vskernels/abstract/base.py, lines 385-402
from_param classmethod
¶
from_param(
scaler: str | type[Self] | Self | None = None,
/,
func_except: FuncExcept | None = None,
) -> type[Self]
Resolve and return a scaler type from a given input (string, type, or instance).
Parameters:
-
scaler
¶str | type[Self] | Self | None
, default:None
) –Scaler identifier (string, class, or instance).
-
func_except
¶FuncExcept | None
, default:None
) –Function returned for custom error handling.
Returns:
Source code in vskernels/abstract/base.py, lines 366-383
get_scale_args ¶
get_scale_args(
clip: VideoNode,
shift: tuple[TopShift, LeftShift] = (0, 0),
width: int | None = None,
height: int | None = None,
**kwargs: Any
) -> dict[str, Any]
Generate the keyword arguments used for scaling.
Parameters:
-
clip
¶VideoNode
) –The source clip.
-
shift
¶tuple[TopShift, LeftShift]
, default:(0, 0)
) –Subpixel shift (top, left).
-
width
¶int | None
, default:None
) –Target width.
-
height
¶int | None
, default:None
) –Target height.
-
**kwargs
¶Any
, default:{}
) –Extra parameters to merge.
Returns:
Source code in vskernels/abstract/base.py, lines 536-557
implemented_funcs classmethod
¶
Returns a set of function names that are implemented in the current class and the parent classes.
These functions determine which keyword arguments will be extracted from the init method.
Returns:
Source code in vskernels/abstract/base.py, lines 443-454
inference ¶
inference(clip: VideoNode, **kwargs: Any) -> VideoNode
Runs inference on the given video clip using the configured model and backend.
Source code in vsscale/onnx.py, lines 434-449
kernel_radius ¶
kernel_radius() -> int
Return the effective kernel radius for the scaler.
Raises:
-
CustomNotImplementedError
–If no kernel radius is defined.
Returns:
-
int
–Kernel radius.
Source code in vskernels/abstract/base.py, lines 406-417
postprocess_clip ¶
postprocess_clip(
clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode
Handles postprocessing of the model's output after inference.
Source code in vsscale/onnx.py, lines 417-432
preprocess_clip ¶
preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py, lines 569-570
scale ¶
scale(
clip: VideoNode,
width: int | None = None,
height: int | None = None,
shift: tuple[float, float] = (0, 0),
**kwargs: Any
) -> VideoNode
Scale the given clip using the ONNX model.
Parameters:
-
clip
¶VideoNode
) –The input clip to be scaled.
-
width
¶int | None
, default:None
) –The target width for scaling. If None, the width of the input clip will be used.
-
height
¶int | None
, default:None
) –The target height for scaling. If None, the height of the input clip will be used.
-
shift
¶tuple[float, float]
, default:(0, 0)
) –A tuple representing the shift values for the x and y axes.
-
**kwargs
¶Any
, default:{}
) –Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _final_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method. Use the prefix inference_ to pass an argument to the inference method.
Returns:
-
VideoNode
–The scaled clip.
Source code in vsscale/onnx.py, lines 309-384
supersample ¶
supersample(
clip: VideoNode,
rfactor: float = 2.0,
shift: tuple[TopShift, LeftShift] = (0, 0),
**kwargs: Any
) -> VideoNode
Supersample a clip by a given scaling factor.
Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.
Parameters:
-
clip
¶VideoNode
) –The source clip.
-
rfactor
¶float
, default:2.0
) –Scaling factor for supersampling.
-
shift
¶tuple[TopShift, LeftShift]
, default:(0, 0)
) –Subpixel shift (top, left) applied during scaling.
-
**kwargs
¶Any
, default:{}
) –Additional arguments forwarded to the scale function.
Raises:
-
CustomValueError
–If resulting resolution is non-positive.
Returns:
-
VideoNode
–The supersampled clip.
Source code in vskernels/abstract/base.py, lines 501-534
R16F96 ¶
R16F96(
backend: BackendLike | None = None,
tiles: int | tuple[int, int] | None = None,
tilesize: int | tuple[int, int] | None = None,
overlap: int | tuple[int, int] | None = None,
max_instances: int = 2,
*,
kernel: KernelLike = Catrom,
scaler: ScalerLike | None = None,
shifter: KernelLike | None = None,
**kwargs: Any
)
Bases: BaseArtCNNLuma
The biggest model. Can compete with or outperform Waifu2x Cunet.
It is also quite a bit slower, but less heavy on VRAM.
Example usage:
from vsscale import ArtCNN
doubled = ArtCNN.R16F96().scale(clip, clip.width * 2, clip.height * 2)
Initializes the scaler with the specified parameters.
Parameters:
-
backend
¶BackendLike | None
, default:None
) –The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
-
tiles
¶int | tuple[int, int] | None
, default:None
) –Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when tiling is used.
-
tilesize
¶int | tuple[int, int] | None
, default:None
) –The size of each tile when splitting the image (if tiles are enabled).
-
overlap
¶int | tuple[int, int] | None
, default:None
) –The size of overlap between tiles.
-
max_instances
¶int
, default:2
) –Maximum instances to spawn when scaling a variable resolution clip.
-
kernel
¶KernelLike
, default:Catrom
) –Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
-
scaler
¶ScalerLike | None
, default:None
) –Scaler used for scaling operations. Defaults to kernel.
-
shifter
¶KernelLike | None
, default:None
) –Kernel used for shifting operations. Defaults to kernel.
-
**kwargs
¶Any
, default:{}
) –Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
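Since this model is noticeably slower than the smaller ones, a hedged sketch of combining it with tiling to keep VRAM usage in check; the tile and overlap values are illustrative only:
from vsscale import ArtCNN
doubled = ArtCNN.R16F96(tiles=2, overlap=16).scale(clip, clip.width * 2, clip.height * 2)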
Methods:
-
calc_tilesize
–Reimplementation of vsmlrt.calc_tilesize helper function
-
ensure_obj
–Ensure that the input is a scaler instance, resolving it if necessary.
-
from_param
–Resolve and return a scaler type from a given input (string, type, or instance).
-
get_scale_args
–Generate the keyword arguments used for scaling.
-
implemented_funcs
–Returns a set of function names that are implemented in the current class and the parent classes.
-
inference
–Runs inference on the given video clip using the configured model and backend.
-
kernel_radius
–Return the effective kernel radius for the scaler.
-
postprocess_clip
–Handles postprocessing of the model's output after inference.
-
preprocess_clip
– -
scale
–Scale the given clip using the ONNX model.
-
supersample
–Supersample a clip by a given scaling factor.
Attributes:
-
backend
– -
kernel
– -
kwargs
(dict[str, Any]
) –Arguments passed to the implemented funcs or internal scale function.
-
max_instances
– -
model
– -
multiple
– -
overlap
– -
overlap_h
– -
overlap_w
– -
pretty_string
(str
) –Cached property returning a user-friendly string representation.
-
scale_function
(Callable[..., VideoNode]
) –Scale function called internally when performing scaling operations.
-
scaler
– -
shifter
– -
tiles
– -
tilesize
–
Source code in vsscale/onnx.py, lines 522-565
kwargs instance-attribute
¶
Arguments passed to the implemented funcs or internal scale function.
pretty_string property
¶
pretty_string: str
Cached property returning a user-friendly string representation.
Returns:
-
str
–Pretty-printed string with arguments.
scale_function instance-attribute
¶
scale_function: Callable[..., VideoNode]
Scale function called internally when performing scaling operations.
calc_tilesize ¶
Reimplementation of vsmlrt.calc_tilesize helper function
Source code in vsscale/onnx.py, lines 386-403
ensure_obj classmethod
¶
ensure_obj(
scaler: str | type[Self] | Self | None = None,
/,
func_except: FuncExcept | None = None,
) -> Self
Ensure that the input is a scaler instance, resolving it if necessary.
Parameters:
-
scaler
¶str | type[Self] | Self | None
, default:None
) –Scaler identifier (string, class, or instance).
-
func_except
¶FuncExcept | None
, default:None
) –Function returned for custom error handling.
Returns:
-
Self
–Scaler instance.
Source code in vskernels/abstract/base.py, lines 385-402
from_param classmethod
¶
from_param(
scaler: str | type[Self] | Self | None = None,
/,
func_except: FuncExcept | None = None,
) -> type[Self]
Resolve and return a scaler type from a given input (string, type, or instance).
Parameters:
-
scaler
¶str | type[Self] | Self | None
, default:None
) –Scaler identifier (string, class, or instance).
-
func_except
¶FuncExcept | None
, default:None
) –Function returned for custom error handling.
Returns:
Source code in vskernels/abstract/base.py, lines 366-383
get_scale_args ¶
get_scale_args(
clip: VideoNode,
shift: tuple[TopShift, LeftShift] = (0, 0),
width: int | None = None,
height: int | None = None,
**kwargs: Any
) -> dict[str, Any]
Generate the keyword arguments used for scaling.
Parameters:
-
clip
¶VideoNode
) –The source clip.
-
shift
¶tuple[TopShift, LeftShift]
, default:(0, 0)
) –Subpixel shift (top, left).
-
width
¶int | None
, default:None
) –Target width.
-
height
¶int | None
, default:None
) –Target height.
-
**kwargs
¶Any
, default:{}
) –Extra parameters to merge.
Returns:
Source code in vskernels/abstract/base.py, lines 536-557
implemented_funcs classmethod
¶
Returns a set of function names that are implemented in the current class and the parent classes.
These functions determine which keyword arguments will be extracted from the init method.
Returns:
Source code in vskernels/abstract/base.py, lines 443-454
inference ¶
inference(clip: VideoNode, **kwargs: Any) -> VideoNode
Runs inference on the given video clip using the configured model and backend.
Source code in vsscale/onnx.py, lines 434-449
kernel_radius ¶
kernel_radius() -> int
Return the effective kernel radius for the scaler.
Raises:
-
CustomNotImplementedError
–If no kernel radius is defined.
Returns:
-
int
–Kernel radius.
Source code in vskernels/abstract/base.py, lines 406-417
postprocess_clip ¶
postprocess_clip(
clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode
Handles postprocessing of the model's output after inference.
Source code in vsscale/onnx.py, lines 417-432
preprocess_clip ¶
preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py, lines 569-570
scale ¶
scale(
clip: VideoNode,
width: int | None = None,
height: int | None = None,
shift: tuple[float, float] = (0, 0),
**kwargs: Any
) -> VideoNode
Scale the given clip using the ONNX model.
Parameters:
-
clip
¶VideoNode
) –The input clip to be scaled.
-
width
¶int | None
, default:None
) –The target width for scaling. If None, the width of the input clip will be used.
-
height
¶int | None
, default:None
) –The target height for scaling. If None, the height of the input clip will be used.
-
shift
¶tuple[float, float]
, default:(0, 0)
) –A tuple representing the shift values for the x and y axes.
-
**kwargs
¶Any
, default:{}
) –Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _final_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method. Use the prefix inference_ to pass an argument to the inference method.
Returns:
-
VideoNode
–The scaled clip.
Source code in vsscale/onnx.py, lines 309-384
supersample ¶
supersample(
clip: VideoNode,
rfactor: float = 2.0,
shift: tuple[TopShift, LeftShift] = (0, 0),
**kwargs: Any
) -> VideoNode
Supersample a clip by a given scaling factor.
Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.
Parameters:
-
clip
¶VideoNode
) –The source clip.
-
rfactor
¶float
, default:2.0
) –Scaling factor for supersampling.
-
shift
¶tuple[TopShift, LeftShift]
, default:(0, 0)
) –Subpixel shift (top, left) applied during scaling.
-
**kwargs
¶Any
, default:{}
) –Additional arguments forwarded to the scale function.
Raises:
-
CustomValueError
–If resulting resolution is non-positive.
Returns:
-
VideoNode
–The supersampled clip.
Source code in vskernels/abstract/base.py, lines 501-534
R16F96_Chroma ¶
R16F96_Chroma(
backend: BackendLike | None = None,
tiles: int | tuple[int, int] | None = None,
tilesize: int | tuple[int, int] | None = None,
overlap: int | tuple[int, int] | None = None,
max_instances: int = 2,
*,
kernel: KernelLike = Catrom,
scaler: ScalerLike | None = None,
shifter: KernelLike | None = None,
**kwargs: Any
)
Bases: BaseArtCNNChroma
The biggest and fanciest chroma model. Shows almost biblical results on the right sources.
These models don't double the input clip; they just try to enhance the chroma using luma information.
Example usage:
from vsscale import ArtCNN
chroma_upscaled = ArtCNN.R16F96_Chroma().scale(clip)
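One possible way to pair the chroma model with its luma counterpart, as a hedged sketch only; it assumes a YUV input clip, and whether the chroma pass should run before or after the doubling depends on your workflow:
from vsscale import ArtCNN
chroma_enhanced = ArtCNN.R16F96_Chroma().scale(clip)
doubled = ArtCNN.R16F96().scale(chroma_enhanced, clip.width * 2, clip.height * 2)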
Initializes the scaler with the specified parameters.
Parameters:
-
backend
¶BackendLike | None
, default:None
) –The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
-
tiles
¶int | tuple[int, int] | None
, default:None
) –Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when tiling is used.
-
tilesize
¶int | tuple[int, int] | None
, default:None
) –The size of each tile when splitting the image (if tiles are enabled).
-
overlap
¶int | tuple[int, int] | None
, default:None
) –The size of overlap between tiles.
-
max_instances
¶int
, default:2
) –Maximum instances to spawn when scaling a variable resolution clip.
-
kernel
¶KernelLike
, default:Catrom
) –Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
-
scaler
¶ScalerLike | None
, default:None
) –Scaler used for scaling operations. Defaults to kernel.
-
shifter
¶KernelLike | None
, default:None
) –Kernel used for shifting operations. Defaults to kernel.
-
**kwargs
¶Any
, default:{}
) –Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
Methods:
-
calc_tilesize
–Reimplementation of vsmlrt.calc_tilesize helper function
-
ensure_obj
–Ensure that the input is a scaler instance, resolving it if necessary.
-
from_param
–Resolve and return a scaler type from a given input (string, type, or instance).
-
get_scale_args
–Generate the keyword arguments used for scaling.
-
implemented_funcs
–Returns a set of function names that are implemented in the current class and the parent classes.
-
inference
– -
kernel_radius
–Return the effective kernel radius for the scaler.
-
postprocess_clip
– -
preprocess_clip
– -
scale
–Scale the given clip using the ONNX model.
-
supersample
–Supersample a clip by a given scaling factor.
Attributes:
-
backend
– -
kernel
– -
kwargs
(dict[str, Any]
) –Arguments passed to the implemented funcs or internal scale function.
-
max_instances
– -
model
– -
multiple
– -
overlap
– -
overlap_h
– -
overlap_w
– -
pretty_string
(str
) –Cached property returning a user-friendly string representation.
-
scale_function
(Callable[..., VideoNode]
) –Scale function called internally when performing scaling operations.
-
scaler
– -
shifter
– -
tiles
– -
tilesize
–
Source code in vsscale/onnx.py, lines 522-565
kwargs instance-attribute
¶
Arguments passed to the implemented funcs or internal scale function.
pretty_string property
¶
pretty_string: str
Cached property returning a user-friendly string representation.
Returns:
-
str
–Pretty-printed string with arguments.
scale_function instance-attribute
¶
scale_function: Callable[..., VideoNode]
Scale function called internally when performing scaling operations.
calc_tilesize ¶
Reimplementation of vsmlrt.calc_tilesize helper function
Source code in vsscale/onnx.py, lines 386-403
ensure_obj classmethod
¶
ensure_obj(
scaler: str | type[Self] | Self | None = None,
/,
func_except: FuncExcept | None = None,
) -> Self
Ensure that the input is a scaler instance, resolving it if necessary.
Parameters:
-
scaler
¶str | type[Self] | Self | None
, default:None
) –Scaler identifier (string, class, or instance).
-
func_except
¶FuncExcept | None
, default:None
) –Function returned for custom error handling.
Returns:
-
Self
–Scaler instance.
Source code in vskernels/abstract/base.py, lines 385-402
from_param classmethod
¶
from_param(
scaler: str | type[Self] | Self | None = None,
/,
func_except: FuncExcept | None = None,
) -> type[Self]
Resolve and return a scaler type from a given input (string, type, or instance).
Parameters:
-
scaler
¶str | type[Self] | Self | None
, default:None
) –Scaler identifier (string, class, or instance).
-
func_except
¶FuncExcept | None
, default:None
) –Function returned for custom error handling.
Returns:
Source code in vskernels/abstract/base.py, lines 366-383
get_scale_args ¶
get_scale_args(
clip: VideoNode,
shift: tuple[TopShift, LeftShift] = (0, 0),
width: int | None = None,
height: int | None = None,
**kwargs: Any
) -> dict[str, Any]
Generate the keyword arguments used for scaling.
Parameters:
-
clip
¶VideoNode
) –The source clip.
-
shift
¶tuple[TopShift, LeftShift]
, default:(0, 0)
) –Subpixel shift (top, left).
-
width
¶int | None
, default:None
) –Target width.
-
height
¶int | None
, default:None
) –Target height.
-
**kwargs
¶Any
, default:{}
) –Extra parameters to merge.
Returns:
Source code in vskernels/abstract/base.py, lines 536-557
implemented_funcs classmethod
¶
Returns a set of function names that are implemented in the current class and the parent classes.
These functions determine which keyword arguments will be extracted from the init method.
Returns:
Source code in vskernels/abstract/base.py
inference ¶
inference(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
kernel_radius ¶
kernel_radius() -> int
Return the effective kernel radius for the scaler.
Raises:
-
CustomNotImplementedError
–If no kernel radius is defined.
Returns:
-
int
–Kernel radius.
Source code in vskernels/abstract/base.py
postprocess_clip ¶
postprocess_clip(
clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode
Source code in vsscale/onnx.py
preprocess_clip ¶
preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
scale ¶
scale(
clip: VideoNode,
width: int | None = None,
height: int | None = None,
shift: tuple[float, float] = (0, 0),
**kwargs: Any
) -> VideoNode
Scale the given clip using the ONNX model.
Parameters:
-
clip
¶VideoNode
) –The input clip to be scaled.
-
width
¶int | None
, default:None
) –The target width for scaling. If None, the width of the input clip will be used.
-
height
¶int | None
, default:None
) –The target height for scaling. If None, the height of the input clip will be used.
-
shift
¶tuple[float, float]
, default:(0, 0)
) –A tuple representing the shift values for the x and y axes.
-
**kwargs
¶Any
, default:{}
) –Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _final_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method. Use the prefix inference_ to pass an argument to the inference method. (See the example below.)
Returns:
-
VideoNode
–The scaled clip.
Source code in vsscale/onnx.py
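As an illustration of the prefix routing described above (a sketch only: some_arg is a placeholder name, not an actual parameter of either method):
from vsscale import ArtCNN
scaled = ArtCNN().scale(
    clip,
    clip.width * 2,
    clip.height * 2,
    preprocess_some_arg=...,  # stripped of its prefix and forwarded to preprocess_clip
    inference_some_arg=...,   # stripped of its prefix and forwarded to inference
)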
supersample ¶
supersample(
clip: VideoNode,
rfactor: float = 2.0,
shift: tuple[TopShift, LeftShift] = (0, 0),
**kwargs: Any
) -> VideoNode
Supersample a clip by a given scaling factor.
Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.
Parameters:
-
clip
¶VideoNode
) –The source clip.
-
rfactor
¶float
, default:2.0
) –Scaling factor for supersampling.
-
shift
¶tuple[TopShift, LeftShift]
, default:(0, 0)
) –Subpixel shift (top, left) applied during scaling.
-
**kwargs
¶Any
, default:{}
) –Additional arguments forwarded to the scale function.
Raises:
-
CustomValueError
–If resulting resolution is non-positive.
Returns:
-
VideoNode
–The supersampled clip.
Source code in vskernels/abstract/base.py
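A minimal sketch relating supersample to scale (the equivalence below is an assumption drawn from the descriptions above; rfactor simply multiplies the input dimensions):
from vsscale import ArtCNN
sr = ArtCNN()
doubled = sr.supersample(clip, rfactor=2)
# Assumed to behave like a direct call with explicit dimensions:
doubled = sr.scale(clip, clip.width * 2, clip.height * 2)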
R8F64 ¶
R8F64(
backend: BackendLike | None = None,
tiles: int | tuple[int, int] | None = None,
tilesize: int | tuple[int, int] | None = None,
overlap: int | tuple[int, int] | None = None,
max_instances: int = 2,
*,
kernel: KernelLike = Catrom,
scaler: ScalerLike | None = None,
shifter: KernelLike | None = None,
**kwargs: Any
)
Bases: BaseArtCNNLuma
A smaller and faster version of R16F96 that remains very competitive.
Example usage:
from vsscale import ArtCNN
doubled = ArtCNN.R8F64().scale(clip, clip.width * 2, clip.height * 2)
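For finer control, the constructor parameters documented below can be set explicitly. A minimal sketch, assuming vs-mlrt is installed and exposes Backend.TRT with an fp16 flag:
from vsmlrt import Backend
from vsscale import ArtCNN
# Explicit TensorRT backend, plus tiling with a 16-pixel overlap to reduce VRAM usage.
upscaler = ArtCNN.R8F64(backend=Backend.TRT(fp16=True), tiles=2, overlap=16)
doubled = upscaler.scale(clip, clip.width * 2, clip.height * 2)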
Initializes the scaler with the specified parameters.
Parameters:
-
backend
¶BackendLike | None
, default:None
) –The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
-
tiles
¶int | tuple[int, int] | None
, default:None
) –Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.
-
tilesize
¶int | tuple[int, int] | None
, default:None
) –The size of each tile when splitting the image (if tiles are enabled).
-
overlap
¶int | tuple[int, int] | None
, default:None
) –The size of overlap between tiles.
-
max_instances
¶int
, default:2
) –Maximum instances to spawn when scaling a variable resolution clip.
-
kernel
¶KernelLike
, default:Catrom
) –Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
-
scaler
¶ScalerLike | None
, default:None
) –Scaler used for scaling operations. Defaults to kernel.
-
shifter
¶KernelLike | None
, default:None
) –Kernel used for shifting operations. Defaults to kernel.
-
**kwargs
¶Any
, default:{}
) –Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
Methods:
-
calc_tilesize
–Reimplementation of vsmlrt.calc_tilesize helper function
-
ensure_obj
–Ensure that the input is a scaler instance, resolving it if necessary.
-
from_param
–Resolve and return a scaler type from a given input (string, type, or instance).
-
get_scale_args
–Generate the keyword arguments used for scaling.
-
implemented_funcs
–Returns a set of function names that are implemented in the current class and the parent classes.
-
inference
–Runs inference on the given video clip using the configured model and backend.
-
kernel_radius
–Return the effective kernel radius for the scaler.
-
postprocess_clip
–Handles postprocessing of the model's output after inference.
-
preprocess_clip
– -
scale
–Scale the given clip using the ONNX model.
-
supersample
–Supersample a clip by a given scaling factor.
Attributes:
-
backend
– -
kernel
– -
kwargs
(dict[str, Any]
) –Arguments passed to the implemented funcs or internal scale function.
-
max_instances
– -
model
– -
multiple
– -
overlap
– -
overlap_h
– -
overlap_w
– -
pretty_string
(str
) –Cached property returning a user-friendly string representation.
-
scale_function
(Callable[..., VideoNode]
) –Scale function called internally when performing scaling operations.
-
scaler
– -
shifter
– -
tiles
– -
tilesize
–
Source code in vsscale/onnx.py
kwargs instance-attribute
¶
Arguments passed to the implemented funcs or internal scale function.
pretty_string property
¶
pretty_string: str
Cached property returning a user-friendly string representation.
Returns:
-
str
–Pretty-printed string with arguments.
scale_function instance-attribute
¶
scale_function: Callable[..., VideoNode]
Scale function called internally when performing scaling operations.
calc_tilesize ¶
Reimplementation of vsmlrt.calc_tilesize helper function
Source code in vsscale/onnx.py
ensure_obj classmethod
¶
ensure_obj(
scaler: str | type[Self] | Self | None = None,
/,
func_except: FuncExcept | None = None,
) -> Self
Ensure that the input is a scaler instance, resolving it if necessary.
Parameters:
-
scaler
¶str | type[Self] | Self | None
, default:None
) –Scaler identifier (string, class, or instance).
-
func_except
¶FuncExcept | None
, default:None
) –Function returned for custom error handling.
Returns:
-
Self
–Scaler instance.
Source code in vskernels/abstract/base.py
from_param classmethod
¶
from_param(
scaler: str | type[Self] | Self | None = None,
/,
func_except: FuncExcept | None = None,
) -> type[Self]
Resolve and return a scaler type from a given input (string, type, or instance).
Parameters:
-
scaler
¶str | type[Self] | Self | None
, default:None
) –Scaler identifier (string, class, or instance).
-
func_except
¶FuncExcept | None
, default:None
) –Function returned for custom error handling.
Returns:
Source code in vskernels/abstract/base.py
get_scale_args ¶
get_scale_args(
clip: VideoNode,
shift: tuple[TopShift, LeftShift] = (0, 0),
width: int | None = None,
height: int | None = None,
**kwargs: Any
) -> dict[str, Any]
Generate the keyword arguments used for scaling.
Parameters:
-
clip
¶VideoNode
) –The source clip.
-
shift
¶tuple[TopShift, LeftShift]
, default:(0, 0)
) –Subpixel shift (top, left).
-
width
¶int | None
, default:None
) –Target width.
-
height
¶int | None
, default:None
) –Target height.
-
**kwargs
¶Any
, default:{}
) –Extra parameters to merge.
Returns:
Source code in vskernels/abstract/base.py
implemented_funcs classmethod
¶
Returns a set of function names that are implemented in the current class and the parent classes.
These functions determine which keyword arguments will be extracted from the init method.
Returns:
Source code in vskernels/abstract/base.py
inference ¶
inference(clip: VideoNode, **kwargs: Any) -> VideoNode
Runs inference on the given video clip using the configured model and backend.
Source code in vsscale/onnx.py
kernel_radius ¶
kernel_radius() -> int
Return the effective kernel radius for the scaler.
Raises:
-
CustomNotImplementedError
–If no kernel radius is defined.
Returns:
-
int
–Kernel radius.
Source code in vskernels/abstract/base.py
postprocess_clip ¶
postprocess_clip(
clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode
Handles postprocessing of the model's output after inference.
Source code in vsscale/onnx.py
preprocess_clip ¶
preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
scale ¶
scale(
clip: VideoNode,
width: int | None = None,
height: int | None = None,
shift: tuple[float, float] = (0, 0),
**kwargs: Any
) -> VideoNode
Scale the given clip using the ONNX model.
Parameters:
-
clip
¶VideoNode
) –The input clip to be scaled.
-
width
¶int | None
, default:None
) –The target width for scaling. If None, the width of the input clip will be used.
-
height
¶int | None
, default:None
) –The target height for scaling. If None, the height of the input clip will be used.
-
shift
¶tuple[float, float]
, default:(0, 0)
) –A tuple representing the shift values for the x and y axes.
-
**kwargs
¶Any
, default:{}
) –Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _final_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method. Use the prefix inference_ to pass an argument to the inference method.
Returns:
-
VideoNode
–The scaled clip.
Source code in vsscale/onnx.py
supersample ¶
supersample(
clip: VideoNode,
rfactor: float = 2.0,
shift: tuple[TopShift, LeftShift] = (0, 0),
**kwargs: Any
) -> VideoNode
Supersample a clip by a given scaling factor.
Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.
Parameters:
-
clip
¶VideoNode
) –The source clip.
-
rfactor
¶float
, default:2.0
) –Scaling factor for supersampling.
-
shift
¶tuple[TopShift, LeftShift]
, default:(0, 0)
) –Subpixel shift (top, left) applied during scaling.
-
**kwargs
¶Any
, default:{}
) –Additional arguments forwarded to the scale function.
Raises:
-
CustomValueError
–If resulting resolution is non-positive.
Returns:
-
VideoNode
–The supersampled clip.
Source code in vskernels/abstract/base.py
R8F64_Chroma ¶
R8F64_Chroma(
backend: BackendLike | None = None,
tiles: int | tuple[int, int] | None = None,
tilesize: int | tuple[int, int] | None = None,
overlap: int | tuple[int, int] | None = None,
max_instances: int = 2,
*,
kernel: KernelLike = Catrom,
scaler: ScalerLike | None = None,
shifter: KernelLike | None = None,
**kwargs: Any
)
Bases: BaseArtCNNChroma
The new and fancy big chroma model.
These models do not double the input clip; instead, they enhance the chroma using the luma information.
Example usage:
from vsscale import ArtCNN
chroma_upscaled = ArtCNN.R8F64_Chroma().scale(clip)
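The frame size is expected to be unchanged by this call, since only the chroma is rebuilt; a small sanity-check sketch (this property is an assumption drawn from the description above):
assert (chroma_upscaled.width, chroma_upscaled.height) == (clip.width, clip.height)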
Initializes the scaler with the specified parameters.
Parameters:
-
backend
¶BackendLike | None
, default:None
) –The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
-
tiles
¶int | tuple[int, int] | None
, default:None
) –Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.
-
tilesize
¶int | tuple[int, int] | None
, default:None
) –The size of each tile when splitting the image (if tiles are enabled).
-
overlap
¶int | tuple[int, int] | None
, default:None
) –The size of overlap between tiles.
-
max_instances
¶int
, default:2
) –Maximum instances to spawn when scaling a variable resolution clip.
-
kernel
¶KernelLike
, default:Catrom
) –Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
-
scaler
¶ScalerLike | None
, default:None
) –Scaler used for scaling operations. Defaults to kernel.
-
shifter
¶KernelLike | None
, default:None
) –Kernel used for shifting operations. Defaults to kernel.
-
**kwargs
¶Any
, default:{}
) –Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
Methods:
-
calc_tilesize
–Reimplementation of vsmlrt.calc_tilesize helper function
-
ensure_obj
–Ensure that the input is a scaler instance, resolving it if necessary.
-
from_param
–Resolve and return a scaler type from a given input (string, type, or instance).
-
get_scale_args
–Generate the keyword arguments used for scaling.
-
implemented_funcs
–Returns a set of function names that are implemented in the current class and the parent classes.
-
inference
– -
kernel_radius
–Return the effective kernel radius for the scaler.
-
postprocess_clip
– -
preprocess_clip
– -
scale
–Scale the given clip using the ONNX model.
-
supersample
–Supersample a clip by a given scaling factor.
Attributes:
-
backend
– -
kernel
– -
kwargs
(dict[str, Any]
) –Arguments passed to the implemented funcs or internal scale function.
-
max_instances
– -
model
– -
multiple
– -
overlap
– -
overlap_h
– -
overlap_w
– -
pretty_string
(str
) –Cached property returning a user-friendly string representation.
-
scale_function
(Callable[..., VideoNode]
) –Scale function called internally when performing scaling operations.
-
scaler
– -
shifter
– -
tiles
– -
tilesize
–
Source code in vsscale/onnx.py
kwargs instance-attribute
¶
Arguments passed to the implemented funcs or internal scale function.
pretty_string property
¶
pretty_string: str
Cached property returning a user-friendly string representation.
Returns:
-
str
–Pretty-printed string with arguments.
scale_function instance-attribute
¶
scale_function: Callable[..., VideoNode]
Scale function called internally when performing scaling operations.
calc_tilesize ¶
Reimplementation of vsmlrt.calc_tilesize helper function
Source code in vsscale/onnx.py
ensure_obj classmethod
¶
ensure_obj(
scaler: str | type[Self] | Self | None = None,
/,
func_except: FuncExcept | None = None,
) -> Self
Ensure that the input is a scaler instance, resolving it if necessary.
Parameters:
-
scaler
¶str | type[Self] | Self | None
, default:None
) –Scaler identifier (string, class, or instance).
-
func_except
¶FuncExcept | None
, default:None
) –Function returned for custom error handling.
Returns:
-
Self
–Scaler instance.
Source code in vskernels/abstract/base.py
from_param classmethod
¶
from_param(
scaler: str | type[Self] | Self | None = None,
/,
func_except: FuncExcept | None = None,
) -> type[Self]
Resolve and return a scaler type from a given input (string, type, or instance).
Parameters:
-
scaler
¶str | type[Self] | Self | None
, default:None
) –Scaler identifier (string, class, or instance).
-
func_except
¶FuncExcept | None
, default:None
) –Function returned for custom error handling.
Returns:
Source code in vskernels/abstract/base.py
get_scale_args ¶
get_scale_args(
clip: VideoNode,
shift: tuple[TopShift, LeftShift] = (0, 0),
width: int | None = None,
height: int | None = None,
**kwargs: Any
) -> dict[str, Any]
Generate the keyword arguments used for scaling.
Parameters:
-
clip
¶VideoNode
) –The source clip.
-
shift
¶tuple[TopShift, LeftShift]
, default:(0, 0)
) –Subpixel shift (top, left).
-
width
¶int | None
, default:None
) –Target width.
-
height
¶int | None
, default:None
) –Target height.
-
**kwargs
¶Any
, default:{}
) –Extra parameters to merge.
Returns:
Source code in vskernels/abstract/base.py
implemented_funcs classmethod
¶
Returns a set of function names that are implemented in the current class and the parent classes.
These functions determine which keyword arguments will be extracted from the init method.
Returns:
Source code in vskernels/abstract/base.py
inference ¶
inference(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
kernel_radius ¶
kernel_radius() -> int
Return the effective kernel radius for the scaler.
Raises:
-
CustomNotImplementedError
–If no kernel radius is defined.
Returns:
-
int
–Kernel radius.
Source code in vskernels/abstract/base.py
postprocess_clip ¶
postprocess_clip(
clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode
Source code in vsscale/onnx.py
preprocess_clip ¶
preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
scale ¶
scale(
clip: VideoNode,
width: int | None = None,
height: int | None = None,
shift: tuple[float, float] = (0, 0),
**kwargs: Any
) -> VideoNode
Scale the given clip using the ONNX model.
Parameters:
-
clip
¶VideoNode
) –The input clip to be scaled.
-
width
¶int | None
, default:None
) –The target width for scaling. If None, the width of the input clip will be used.
-
height
¶int | None
, default:None
) –The target height for scaling. If None, the height of the input clip will be used.
-
shift
¶tuple[float, float]
, default:(0, 0)
) –A tuple representing the shift values for the x and y axes.
-
**kwargs
¶Any
, default:{}
) –Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _final_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method. Use the prefix inference_ to pass an argument to the inference method.
Returns:
-
VideoNode
–The scaled clip.
Source code in vsscale/onnx.py
supersample ¶
supersample(
clip: VideoNode,
rfactor: float = 2.0,
shift: tuple[TopShift, LeftShift] = (0, 0),
**kwargs: Any
) -> VideoNode
Supersample a clip by a given scaling factor.
Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.
Parameters:
-
clip
¶VideoNode
) –The source clip.
-
rfactor
¶float
, default:2.0
) –Scaling factor for supersampling.
-
shift
¶tuple[TopShift, LeftShift]
, default:(0, 0)
) –Subpixel shift (top, left) applied during scaling.
-
**kwargs
¶Any
, default:{}
) –Additional arguments forwarded to the scale function.
Raises:
-
CustomValueError
–If resulting resolution is non-positive.
Returns:
-
VideoNode
–The supersampled clip.
Source code in vskernels/abstract/base.py
R8F64_DS ¶
R8F64_DS(
backend: BackendLike | None = None,
tiles: int | tuple[int, int] | None = None,
tilesize: int | tuple[int, int] | None = None,
overlap: int | tuple[int, int] | None = None,
max_instances: int = 2,
*,
kernel: KernelLike = Catrom,
scaler: ScalerLike | None = None,
shifter: KernelLike | None = None,
**kwargs: Any
)
Bases: BaseArtCNNLuma
The same as R8F64 but intended to also denoise and sharpen.
Example usage:
from vsscale import ArtCNN
doubled = ArtCNN.R8F64_DS().scale(clip, clip.width * 2, clip.height * 2)
Initializes the scaler with the specified parameters.
Parameters:
-
backend
¶BackendLike | None
, default:None
) –The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
-
tiles
¶int | tuple[int, int] | None
, default:None
) –Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.
-
tilesize
¶int | tuple[int, int] | None
, default:None
) –The size of each tile when splitting the image (if tiles are enabled).
-
overlap
¶int | tuple[int, int] | None
, default:None
) –The size of overlap between tiles.
-
max_instances
¶int
, default:2
) –Maximum instances to spawn when scaling a variable resolution clip.
-
kernel
¶KernelLike
, default:Catrom
) –Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
-
scaler
¶ScalerLike | None
, default:None
) –Scaler used for scaling operations. Defaults to kernel.
-
shifter
¶KernelLike | None
, default:None
) –Kernel used for shifting operations. Defaults to kernel.
-
**kwargs
¶Any
, default:{}
) –Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
Methods:
-
calc_tilesize
–Reimplementation of vsmlrt.calc_tilesize helper function
-
ensure_obj
–Ensure that the input is a scaler instance, resolving it if necessary.
-
from_param
–Resolve and return a scaler type from a given input (string, type, or instance).
-
get_scale_args
–Generate the keyword arguments used for scaling.
-
implemented_funcs
–Returns a set of function names that are implemented in the current class and the parent classes.
-
inference
–Runs inference on the given video clip using the configured model and backend.
-
kernel_radius
–Return the effective kernel radius for the scaler.
-
postprocess_clip
–Handles postprocessing of the model's output after inference.
-
preprocess_clip
– -
scale
–Scale the given clip using the ONNX model.
-
supersample
–Supersample a clip by a given scaling factor.
Attributes:
-
backend
– -
kernel
– -
kwargs
(dict[str, Any]
) –Arguments passed to the implemented funcs or internal scale function.
-
max_instances
– -
model
– -
multiple
– -
overlap
– -
overlap_h
– -
overlap_w
– -
pretty_string
(str
) –Cached property returning a user-friendly string representation.
-
scale_function
(Callable[..., VideoNode]
) –Scale function called internally when performing scaling operations.
-
scaler
– -
shifter
– -
tiles
– -
tilesize
–
Source code in vsscale/onnx.py
kwargs instance-attribute
¶
Arguments passed to the implemented funcs or internal scale function.
pretty_string property
¶
pretty_string: str
Cached property returning a user-friendly string representation.
Returns:
-
str
–Pretty-printed string with arguments.
scale_function instance-attribute
¶
scale_function: Callable[..., VideoNode]
Scale function called internally when performing scaling operations.
calc_tilesize ¶
Reimplementation of vsmlrt.calc_tilesize helper function
Source code in vsscale/onnx.py
ensure_obj classmethod
¶
ensure_obj(
scaler: str | type[Self] | Self | None = None,
/,
func_except: FuncExcept | None = None,
) -> Self
Ensure that the input is a scaler instance, resolving it if necessary.
Parameters:
-
scaler
¶str | type[Self] | Self | None
, default:None
) –Scaler identifier (string, class, or instance).
-
func_except
¶FuncExcept | None
, default:None
) –Function returned for custom error handling.
Returns:
-
Self
–Scaler instance.
Source code in vskernels/abstract/base.py
from_param classmethod
¶
from_param(
scaler: str | type[Self] | Self | None = None,
/,
func_except: FuncExcept | None = None,
) -> type[Self]
Resolve and return a scaler type from a given input (string, type, or instance).
Parameters:
-
scaler
¶str | type[Self] | Self | None
, default:None
) –Scaler identifier (string, class, or instance).
-
func_except
¶FuncExcept | None
, default:None
) –Function returned for custom error handling.
Returns:
Source code in vskernels/abstract/base.py
get_scale_args ¶
get_scale_args(
clip: VideoNode,
shift: tuple[TopShift, LeftShift] = (0, 0),
width: int | None = None,
height: int | None = None,
**kwargs: Any
) -> dict[str, Any]
Generate the keyword arguments used for scaling.
Parameters:
-
clip
¶VideoNode
) –The source clip.
-
shift
¶tuple[TopShift, LeftShift]
, default:(0, 0)
) –Subpixel shift (top, left).
-
width
¶int | None
, default:None
) –Target width.
-
height
¶int | None
, default:None
) –Target height.
-
**kwargs
¶Any
, default:{}
) –Extra parameters to merge.
Returns:
Source code in vskernels/abstract/base.py
implemented_funcs classmethod
¶
Returns a set of function names that are implemented in the current class and the parent classes.
These functions determine which keyword arguments will be extracted from the init method.
Returns:
Source code in vskernels/abstract/base.py
inference ¶
inference(clip: VideoNode, **kwargs: Any) -> VideoNode
Runs inference on the given video clip using the configured model and backend.
Source code in vsscale/onnx.py
kernel_radius ¶
kernel_radius() -> int
Return the effective kernel radius for the scaler.
Raises:
-
CustomNotImplementedError
–If no kernel radius is defined.
Returns:
-
int
–Kernel radius.
Source code in vskernels/abstract/base.py
postprocess_clip ¶
postprocess_clip(
clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode
Handles postprocessing of the model's output after inference.
Source code in vsscale/onnx.py
preprocess_clip ¶
preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
scale ¶
scale(
clip: VideoNode,
width: int | None = None,
height: int | None = None,
shift: tuple[float, float] = (0, 0),
**kwargs: Any
) -> VideoNode
Scale the given clip using the ONNX model.
Parameters:
-
clip
¶VideoNode
) –The input clip to be scaled.
-
width
¶int | None
, default:None
) –The target width for scaling. If None, the width of the input clip will be used.
-
height
¶int | None
, default:None
) –The target height for scaling. If None, the height of the input clip will be used.
-
shift
¶tuple[float, float]
, default:(0, 0)
) –A tuple representing the shift values for the x and y axes.
-
**kwargs
¶Any
, default:{}
) –Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _final_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method. Use the prefix inference_ to pass an argument to the inference method.
Returns:
-
VideoNode
–The scaled clip.
Source code in vsscale/onnx.py
supersample ¶
supersample(
clip: VideoNode,
rfactor: float = 2.0,
shift: tuple[TopShift, LeftShift] = (0, 0),
**kwargs: Any
) -> VideoNode
Supersample a clip by a given scaling factor.
Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.
Parameters:
-
clip
¶VideoNode
) –The source clip.
-
rfactor
¶float
, default:2.0
) –Scaling factor for supersampling.
-
shift
¶tuple[TopShift, LeftShift]
, default:(0, 0)
) –Subpixel shift (top, left) applied during scaling.
-
**kwargs
¶Any
, default:{}
) –Additional arguments forwarded to the scale function.
Raises:
-
CustomValueError
–If resulting resolution is non-positive.
Returns:
-
VideoNode
–The supersampled clip.
Source code in vskernels/abstract/base.py
R8F64_JPEG420 ¶
R8F64_JPEG420(
backend: BackendLike | None = None,
tiles: int | tuple[int, int] | None = None,
tilesize: int | tuple[int, int] | None = None,
overlap: int | tuple[int, int] | None = None,
max_instances: int = 2,
*,
kernel: KernelLike = Catrom,
scaler: ScalerLike | None = None,
shifter: KernelLike | None = None,
**kwargs: Any
)
Bases: BaseArtCNN, BaseOnnxScalerRGB
A 1x RGB model meant to clean JPEG artifacts and restore chroma lost to 4:2:0 subsampling.
Example usage:
from vsscale import ArtCNN
restored = ArtCNN.R8F64_JPEG420().scale(clip)
Initializes the scaler with the specified parameters.
Parameters:
-
backend
¶BackendLike | None
, default:None
) –The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
-
tiles
¶int | tuple[int, int] | None
, default:None
) –Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.
-
tilesize
¶int | tuple[int, int] | None
, default:None
) –The size of each tile when splitting the image (if tiles are enabled).
-
overlap
¶int | tuple[int, int] | None
, default:None
) –The size of overlap between tiles.
-
max_instances
¶int
, default:2
) –Maximum instances to spawn when scaling a variable resolution clip.
-
kernel
¶KernelLike
, default:Catrom
) –Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
-
scaler
¶ScalerLike | None
, default:None
) –Scaler used for scaling operations. Defaults to kernel.
-
shifter
¶KernelLike | None
, default:None
) –Kernel used for shifting operations. Defaults to kernel.
-
**kwargs
¶Any
, default:{}
) –Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
Methods:
-
calc_tilesize
–Reimplementation of vsmlrt.calc_tilesize helper function
-
ensure_obj
–Ensure that the input is a scaler instance, resolving it if necessary.
-
from_param
–Resolve and return a scaler type from a given input (string, type, or instance).
-
get_scale_args
–Generate the keyword arguments used for scaling.
-
implemented_funcs
–Returns a set of function names that are implemented in the current class and the parent classes.
-
inference
–Runs inference on the given video clip using the configured model and backend.
-
kernel_radius
–Return the effective kernel radius for the scaler.
-
postprocess_clip
– -
preprocess_clip
– -
scale
–Scale the given clip using the ONNX model.
-
supersample
–Supersample a clip by a given scaling factor.
Attributes:
-
backend
– -
kernel
– -
kwargs
(dict[str, Any]
) –Arguments passed to the implemented funcs or internal scale function.
-
max_instances
– -
model
– -
multiple
– -
overlap
– -
overlap_h
– -
overlap_w
– -
pretty_string
(str
) –Cached property returning a user-friendly string representation.
-
scale_function
(Callable[..., VideoNode]
) –Scale function called internally when performing scaling operations.
-
scaler
– -
shifter
– -
tiles
– -
tilesize
–
Source code in vsscale/onnx.py
kwargs instance-attribute
¶
Arguments passed to the implemented funcs or internal scale function.
pretty_string property
¶
pretty_string: str
Cached property returning a user-friendly string representation.
Returns:
-
str
–Pretty-printed string with arguments.
scale_function instance-attribute
¶
scale_function: Callable[..., VideoNode]
Scale function called internally when performing scaling operations.
calc_tilesize ¶
Reimplementation of vsmlrt.calc_tilesize helper function
Source code in vsscale/onnx.py
ensure_obj classmethod
¶
ensure_obj(
scaler: str | type[Self] | Self | None = None,
/,
func_except: FuncExcept | None = None,
) -> Self
Ensure that the input is a scaler instance, resolving it if necessary.
Parameters:
-
scaler
¶str | type[Self] | Self | None
, default:None
) –Scaler identifier (string, class, or instance).
-
func_except
¶FuncExcept | None
, default:None
) –Function returned for custom error handling.
Returns:
-
Self
–Scaler instance.
Source code in vskernels/abstract/base.py
from_param classmethod
¶
from_param(
scaler: str | type[Self] | Self | None = None,
/,
func_except: FuncExcept | None = None,
) -> type[Self]
Resolve and return a scaler type from a given input (string, type, or instance).
Parameters:
-
scaler
¶str | type[Self] | Self | None
, default:None
) –Scaler identifier (string, class, or instance).
-
func_except
¶FuncExcept | None
, default:None
) –Function returned for custom error handling.
Returns:
Source code in vskernels/abstract/base.py
get_scale_args ¶
get_scale_args(
clip: VideoNode,
shift: tuple[TopShift, LeftShift] = (0, 0),
width: int | None = None,
height: int | None = None,
**kwargs: Any
) -> dict[str, Any]
Generate the keyword arguments used for scaling.
Parameters:
-
clip
¶VideoNode
) –The source clip.
-
shift
¶tuple[TopShift, LeftShift]
, default:(0, 0)
) –Subpixel shift (top, left).
-
width
¶int | None
, default:None
) –Target width.
-
height
¶int | None
, default:None
) –Target height.
-
**kwargs
¶Any
, default:{}
) –Extra parameters to merge.
Returns:
Source code in vskernels/abstract/base.py
implemented_funcs classmethod
¶
Returns a set of function names that are implemented in the current class and the parent classes.
These functions determine which keyword arguments will be extracted from the init method.
Returns:
Source code in vskernels/abstract/base.py
inference ¶
inference(clip: VideoNode, **kwargs: Any) -> VideoNode
Runs inference on the given video clip using the configured model and backend.
Source code in vsscale/onnx.py
kernel_radius ¶
kernel_radius() -> int
Return the effective kernel radius for the scaler.
Raises:
-
CustomNotImplementedError
–If no kernel radius is defined.
Returns:
-
int
–Kernel radius.
Source code in vskernels/abstract/base.py
postprocess_clip ¶
postprocess_clip(
clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode
Source code in vsscale/onnx.py
preprocess_clip ¶
preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
scale ¶
scale(
clip: VideoNode,
width: int | None = None,
height: int | None = None,
shift: tuple[float, float] = (0, 0),
**kwargs: Any
) -> VideoNode
Scale the given clip using the ONNX model.
Parameters:
-
clip
¶VideoNode
) –The input clip to be scaled.
-
width
¶int | None
, default:None
) –The target width for scaling. If None, the width of the input clip will be used.
-
height
¶int | None
, default:None
) –The target height for scaling. If None, the height of the input clip will be used.
-
shift
¶tuple[float, float]
, default:(0, 0)
) –A tuple representing the shift values for the x and y axes.
-
**kwargs
¶Any
, default:{}
) –Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _final_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method. Use the prefix inference_ to pass an argument to the inference method.
Returns:
-
VideoNode
–The scaled clip.
Source code in vsscale/onnx.py
supersample ¶
supersample(
clip: VideoNode,
rfactor: float = 2.0,
shift: tuple[TopShift, LeftShift] = (0, 0),
**kwargs: Any
) -> VideoNode
Supersample a clip by a given scaling factor.
Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.
Parameters:
-
clip
¶VideoNode
) –The source clip.
-
rfactor
¶float
, default:2.0
) –Scaling factor for supersampling.
-
shift
¶tuple[TopShift, LeftShift]
, default:(0, 0)
) –Subpixel shift (top, left) applied during scaling.
-
**kwargs
¶Any
, default:{}
) –Additional arguments forwarded to the scale function.
Raises:
-
CustomValueError
–If resulting resolution is non-positive.
Returns:
-
VideoNode
–The supersampled clip.
Source code in vskernels/abstract/base.py
R8F64_JPEG444 ¶
R8F64_JPEG444(
backend: BackendLike | None = None,
tiles: int | tuple[int, int] | None = None,
tilesize: int | tuple[int, int] | None = None,
overlap: int | tuple[int, int] | None = None,
max_instances: int = 2,
*,
kernel: KernelLike = Catrom,
scaler: ScalerLike | None = None,
shifter: KernelLike | None = None,
**kwargs: Any
)
Bases: BaseArtCNN, BaseOnnxScalerRGB
A 1x RGB model meant to clean JPEG artifacts.
Example usage:
from vsscale import ArtCNN
restored = ArtCNN.R8F64_JPEG444().scale(clip)
Initializes the scaler with the specified parameters.
Parameters:
-
backend
¶BackendLike | None
, default:None
) –The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
-
tiles
¶int | tuple[int, int] | None
, default:None
) –Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.
-
tilesize
¶int | tuple[int, int] | None
, default:None
) –The size of each tile when splitting the image (if tiles are enabled).
-
overlap
¶int | tuple[int, int] | None
, default:None
) –The size of overlap between tiles.
-
max_instances
¶int
, default:2
) –Maximum instances to spawn when scaling a variable resolution clip.
-
kernel
¶KernelLike
, default:Catrom
) –Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
-
scaler
¶ScalerLike | None
, default:None
) –Scaler used for scaling operations. Defaults to kernel.
-
shifter
¶KernelLike | None
, default:None
) –Kernel used for shifting operations. Defaults to kernel.
-
**kwargs
¶Any
, default:{}
) –Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
Methods:
-
calc_tilesize
–Reimplementation of vsmlrt.calc_tilesize helper function
-
ensure_obj
–Ensure that the input is a scaler instance, resolving it if necessary.
-
from_param
–Resolve and return a scaler type from a given input (string, type, or instance).
-
get_scale_args
–Generate the keyword arguments used for scaling.
-
implemented_funcs
–Returns a set of function names that are implemented in the current class and the parent classes.
-
inference
–Runs inference on the given video clip using the configured model and backend.
-
kernel_radius
–Return the effective kernel radius for the scaler.
-
postprocess_clip
– -
preprocess_clip
– -
scale
–Scale the given clip using the ONNX model.
-
supersample
–Supersample a clip by a given scaling factor.
Attributes:
-
backend
– -
kernel
– -
kwargs
(dict[str, Any]
) –Arguments passed to the implemented funcs or internal scale function.
-
max_instances
– -
model
– -
multiple
– -
overlap
– -
overlap_h
– -
overlap_w
– -
pretty_string
(str
) –Cached property returning a user-friendly string representation.
-
scale_function
(Callable[..., VideoNode]
) –Scale function called internally when performing scaling operations.
-
scaler
– -
shifter
– -
tiles
– -
tilesize
–
Source code in vsscale/onnx.py
kwargs instance-attribute
¶
Arguments passed to the implemented funcs or internal scale function.
pretty_string property
¶
pretty_string: str
Cached property returning a user-friendly string representation.
Returns:
-
str
–Pretty-printed string with arguments.
scale_function instance-attribute
¶
scale_function: Callable[..., VideoNode]
Scale function called internally when performing scaling operations.
calc_tilesize ¶
Reimplementation of vsmlrt.calc_tilesize helper function
Source code in vsscale/onnx.py
ensure_obj classmethod
¶
ensure_obj(
scaler: str | type[Self] | Self | None = None,
/,
func_except: FuncExcept | None = None,
) -> Self
Ensure that the input is a scaler instance, resolving it if necessary.
Parameters:
-
scaler
¶str | type[Self] | Self | None
, default:None
) –Scaler identifier (string, class, or instance).
-
func_except
¶FuncExcept | None
, default:None
) –Function returned for custom error handling.
Returns:
-
Self
–Scaler instance.
Source code in vskernels/abstract/base.py
from_param classmethod
¶
from_param(
scaler: str | type[Self] | Self | None = None,
/,
func_except: FuncExcept | None = None,
) -> type[Self]
Resolve and return a scaler type from a given input (string, type, or instance).
Parameters:
-
scaler
¶str | type[Self] | Self | None
, default:None
) –Scaler identifier (string, class, or instance).
-
func_except
¶FuncExcept | None
, default:None
) –Function returned for custom error handling.
Returns:
Source code in vskernels/abstract/base.py
get_scale_args ¶
get_scale_args(
clip: VideoNode,
shift: tuple[TopShift, LeftShift] = (0, 0),
width: int | None = None,
height: int | None = None,
**kwargs: Any
) -> dict[str, Any]
Generate the keyword arguments used for scaling.
Parameters:
-
clip
¶VideoNode
) –The source clip.
-
shift
¶tuple[TopShift, LeftShift]
, default:(0, 0)
) –Subpixel shift (top, left).
-
width
¶int | None
, default:None
) –Target width.
-
height
¶int | None
, default:None
) –Target height.
-
**kwargs
¶Any
, default:{}
) –Extra parameters to merge.
Returns:
Source code in vskernels/abstract/base.py
implemented_funcs classmethod
¶
Returns a set of function names that are implemented in the current class and the parent classes.
These functions determine which keyword arguments will be extracted from the init method.
Returns:
Source code in vskernels/abstract/base.py
inference ¶
inference(clip: VideoNode, **kwargs: Any) -> VideoNode
Runs inference on the given video clip using the configured model and backend.
Source code in vsscale/onnx.py
kernel_radius ¶
kernel_radius() -> int
Return the effective kernel radius for the scaler.
Raises:
-
CustomNotImplementedError
–If no kernel radius is defined.
Returns:
-
int
–Kernel radius.
Source code in vskernels/abstract/base.py
postprocess_clip ¶
postprocess_clip(
clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode
Source code in vsscale/onnx.py
preprocess_clip ¶
preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
scale ¶
scale(
clip: VideoNode,
width: int | None = None,
height: int | None = None,
shift: tuple[float, float] = (0, 0),
**kwargs: Any
) -> VideoNode
Scale the given clip using the ONNX model.
Parameters:
-
clip
¶VideoNode
) –The input clip to be scaled.
-
width
¶int | None
, default:None
) –The target width for scaling. If None, the width of the input clip will be used.
-
height
¶int | None
, default:None
) –The target height for scaling. If None, the height of the input clip will be used.
-
shift
¶tuple[float, float]
, default:(0, 0)
) –A tuple representing the shift values for the x and y axes.
-
**kwargs
¶Any
, default:{}
) –Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _final_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method. Use the prefix inference_ to pass an argument to the inference method.
Returns:
-
VideoNode
–The scaled clip.
Source code in vsscale/onnx.py
supersample ¶
supersample(
clip: VideoNode,
rfactor: float = 2.0,
shift: tuple[TopShift, LeftShift] = (0, 0),
**kwargs: Any
) -> VideoNode
Supersample a clip by a given scaling factor.
Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.
Parameters:
-
clip
¶VideoNode
) –The source clip.
-
rfactor
¶float
, default:2.0
) –Scaling factor for supersampling.
-
shift
¶tuple[TopShift, LeftShift]
, default:(0, 0)
) –Subpixel shift (top, left) applied during scaling.
-
**kwargs
¶Any
, default:{}
) –Additional arguments forwarded to the scale function.
Raises:
-
CustomValueError
–If resulting resolution is non-positive.
Returns:
-
VideoNode
–The supersampled clip.
Source code in vskernels/abstract/base.py
calc_tilesize ¶
Reimplementation of vsmlrt.calc_tilesize helper function
Source code in vsscale/onnx.py
ensure_obj classmethod
¶
ensure_obj(
scaler: str | type[Self] | Self | None = None,
/,
func_except: FuncExcept | None = None,
) -> Self
Ensure that the input is a scaler instance, resolving it if necessary.
Parameters:
-
scaler
¶str | type[Self] | Self | None
, default:None
) –Scaler identifier (string, class, or instance).
-
func_except
¶FuncExcept | None
, default:None
) –Function returned for custom error handling.
Returns:
-
Self
–Scaler instance.
Source code in vskernels/abstract/base.py
from_param classmethod
¶
from_param(
scaler: str | type[Self] | Self | None = None,
/,
func_except: FuncExcept | None = None,
) -> type[Self]
Resolve and return a scaler type from a given input (string, type, or instance).
Parameters:
-
scaler
¶str | type[Self] | Self | None
, default:None
) –Scaler identifier (string, class, or instance).
-
func_except
¶FuncExcept | None
, default:None
) –Function returned for custom error handling.
Returns:
Source code in vskernels/abstract/base.py
get_scale_args ¶
get_scale_args(
clip: VideoNode,
shift: tuple[TopShift, LeftShift] = (0, 0),
width: int | None = None,
height: int | None = None,
**kwargs: Any
) -> dict[str, Any]
Generate the keyword arguments used for scaling.
Parameters:
-
clip
¶VideoNode
) –The source clip.
-
shift
¶tuple[TopShift, LeftShift]
, default:(0, 0)
) –Subpixel shift (top, left).
-
width
¶int | None
, default:None
) –Target width.
-
height
¶int | None
, default:None
) –Target height.
-
**kwargs
¶Any
, default:{}
) –Extra parameters to merge.
Returns:
Source code in vskernels/abstract/base.py
implemented_funcs classmethod
¶
Returns a set of function names that are implemented in the current class and the parent classes.
These functions determine which keyword arguments will be extracted from the init method.
Returns:
Source code in vskernels/abstract/base.py
inference ¶
inference(clip: VideoNode, **kwargs: Any) -> VideoNode
Runs inference on the given video clip using the configured model and backend.
Source code in vsscale/onnx.py
kernel_radius ¶
kernel_radius() -> int
Return the effective kernel radius for the scaler.
Raises:
-
CustomNotImplementedError
–If no kernel radius is defined.
Returns:
-
int
–Kernel radius.
Source code in vskernels/abstract/base.py
postprocess_clip ¶
postprocess_clip(
clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode
Handles postprocessing of the model's output after inference.
Source code in vsscale/onnx.py
preprocess_clip ¶
preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
scale ¶
scale(
clip: VideoNode,
width: int | None = None,
height: int | None = None,
shift: tuple[float, float] = (0, 0),
**kwargs: Any
) -> VideoNode
Scale the given clip using the ONNX model.
Parameters:
-
clip
¶VideoNode
) –The input clip to be scaled.
-
width
¶int | None
, default:None
) –The target width for scaling. If None, the width of the input clip will be used.
-
height
¶int | None
, default:None
) –The target height for scaling. If None, the height of the input clip will be used.
-
shift
¶tuple[float, float]
, default:(0, 0)
) –A tuple representing the shift values for the x and y axes.
-
**kwargs
¶Any
, default:{}
) –Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _final_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method. Use the prefix inference_ to pass an argument to the inference method.
Returns:
-
VideoNode
–The scaled clip.
Source code in vsscale/onnx.py
supersample ¶
supersample(
clip: VideoNode,
rfactor: float = 2.0,
shift: tuple[TopShift, LeftShift] = (0, 0),
**kwargs: Any
) -> VideoNode
Supersample a clip by a given scaling factor.
Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.
Parameters:
-
clip
¶VideoNode
) –The source clip.
-
rfactor
¶float
, default:2.0
) –Scaling factor for supersampling.
-
shift
¶tuple[TopShift, LeftShift]
, default:(0, 0)
) –Subpixel shift (top, left) applied during scaling.
-
**kwargs
¶Any
, default:{}
) –Additional arguments forwarded to the scale function.
Raises:
-
CustomValueError
–If resulting resolution is non-positive.
Returns:
-
VideoNode
–The supersampled clip.
Source code in vskernels/abstract/base.py
BaseArtCNN ¶
BaseArtCNN(
backend: BackendLike | None = None,
tiles: int | tuple[int, int] | None = None,
tilesize: int | tuple[int, int] | None = None,
overlap: int | tuple[int, int] | None = None,
max_instances: int = 2,
*,
kernel: KernelLike = Catrom,
scaler: ScalerLike | None = None,
shifter: KernelLike | None = None,
**kwargs: Any
)
Bases: BaseOnnxScaler
Initializes the scaler with the specified parameters.
Parameters:
-
backend
¶BackendLike | None
, default:None
) –The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
-
tiles
¶int | tuple[int, int] | None
, default:None
) –Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.
-
tilesize
¶int | tuple[int, int] | None
, default:None
) –The size of each tile when splitting the image (if tiles are enabled).
-
overlap
¶int | tuple[int, int] | None
, default:None
) –The size of overlap between tiles.
-
max_instances
¶int
, default:2
) –Maximum instances to spawn when scaling a variable resolution clip.
-
kernel
¶KernelLike
, default:Catrom
) –Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
-
scaler
¶ScalerLike | None
, default:None
) –Scaler used for scaling operations. Defaults to kernel.
-
shifter
¶KernelLike | None
, default:None
) –Kernel used for shifting operations. Defaults to kernel.
-
**kwargs
¶Any
, default:{}
) –Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
Methods:
-
calc_tilesize
–Reimplementation of vsmlrt.calc_tilesize helper function
-
ensure_obj
–Ensure that the input is a scaler instance, resolving it if necessary.
-
from_param
–Resolve and return a scaler type from a given input (string, type, or instance).
-
get_scale_args
–Generate the keyword arguments used for scaling.
-
implemented_funcs
–Returns a set of function names that are implemented in the current class and the parent classes.
-
inference
–Runs inference on the given video clip using the configured model and backend.
-
kernel_radius
–Return the effective kernel radius for the scaler.
-
postprocess_clip
–Handles postprocessing of the model's output after inference.
-
preprocess_clip
–Performs preprocessing on the clip prior to inference.
-
scale
–Scale the given clip using the ONNX model.
-
supersample
–Supersample a clip by a given scaling factor.
Attributes:
-
backend
– -
kernel
– -
kwargs
(dict[str, Any]
) –Arguments passed to the implemented funcs or internal scale function.
-
max_instances
– -
model
– -
multiple
– -
overlap
– -
overlap_h
– -
overlap_w
– -
pretty_string
(str
) –Cached property returning a user-friendly string representation.
-
scale_function
(Callable[..., VideoNode]
) –Scale function called internally when performing scaling operations.
-
scaler
– -
shifter
– -
tiles
– -
tilesize
–
Source code in vsscale/onnx.py
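Example usage (a minimal sketch of pinning an explicit vs-mlrt backend instead of relying on auto-selection; Backend.TRT and its fp16 option come from vsmlrt and need a working TensorRT runtime, and the public ArtCNN class is used as the entry point):
from vsmlrt import Backend
from vsscale import ArtCNN

# Request the TensorRT backend with fp16 enabled rather than
# letting the scaler pick a backend automatically.
doubled = ArtCNN(backend=Backend.TRT(fp16=True)).supersample(clip)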
kwargs instance-attribute
¶
Arguments passed to the implemented funcs or internal scale function.
pretty_string property
¶
pretty_string: str
Cached property returning a user-friendly string representation.
Returns:
-
str
–Pretty-printed string with arguments.
scale_function instance-attribute
¶
scale_function: Callable[..., VideoNode]
Scale function called internally when performing scaling operations.
calc_tilesize ¶
Reimplementation of vsmlrt.calc_tilesize helper function
Source code in vsscale/onnx.py
ensure_obj classmethod
¶
ensure_obj(
scaler: str | type[Self] | Self | None = None,
/,
func_except: FuncExcept | None = None,
) -> Self
Ensure that the input is a scaler instance, resolving it if necessary.
Parameters:
-
scaler
¶str | type[Self] | Self | None
, default:None
) –Scaler identifier (string, class, or instance).
-
func_except
¶FuncExcept | None
, default:None
) –Function returned for custom error handling.
Returns:
-
Self
–Scaler instance.
Source code in vskernels/abstract/base.py
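Example usage (a minimal sketch of the resolution behaviour described above, using ArtCNN as the concrete scaler):
from vsscale import ArtCNN

# A class is instantiated; an existing instance is returned as-is.
# Per the parameter description, a string identifier is accepted too.
from_class = ArtCNN.ensure_obj(ArtCNN)
from_instance = ArtCNN.ensure_obj(ArtCNN())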
from_param classmethod
¶
from_param(
scaler: str | type[Self] | Self | None = None,
/,
func_except: FuncExcept | None = None,
) -> type[Self]
Resolve and return a scaler type from a given input (string, type, or instance).
Parameters:
-
scaler
¶str | type[Self] | Self | None
, default:None
) –Scaler identifier (string, class, or instance).
-
func_except
¶FuncExcept | None
, default:None
) –Function returned for custom error handling.
Returns:
-
type[Self]
–Resolved scaler type.
Source code in vskernels/abstract/base.py
get_scale_args ¶
get_scale_args(
clip: VideoNode,
shift: tuple[TopShift, LeftShift] = (0, 0),
width: int | None = None,
height: int | None = None,
**kwargs: Any
) -> dict[str, Any]
Generate the keyword arguments used for scaling.
Parameters:
-
clip
¶VideoNode
) –The source clip.
-
shift
¶tuple[TopShift, LeftShift]
, default:(0, 0)
) –Subpixel shift (top, left).
-
width
¶int | None
, default:None
) –Target width.
-
height
¶int | None
, default:None
) –Target height.
-
**kwargs
¶Any
, default:{}
) –Extra parameters to merge.
Returns:
-
dict[str, Any]
–Dictionary of keyword arguments for the scale function.
Source code in vskernels/abstract/base.py
implemented_funcs classmethod
¶
Returns a set of function names that are implemented in the current class and the parent classes.
These functions determine which keyword arguments will be extracted from the init method.
Returns:
Source code in vskernels/abstract/base.py
inference ¶
inference(clip: VideoNode, **kwargs: Any) -> VideoNode
Runs inference on the given video clip using the configured model and backend.
Source code in vsscale/onnx.py
kernel_radius ¶
kernel_radius() -> int
Return the effective kernel radius for the scaler.
Raises:
-
CustomNotImplementedError
–If no kernel radius is defined.
Returns:
-
int
–Kernel radius.
Source code in vskernels/abstract/base.py
postprocess_clip ¶
postprocess_clip(
clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode
Handles postprocessing of the model's output after inference.
Source code in vsscale/onnx.py
preprocess_clip ¶
preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode
Performs preprocessing on the clip prior to inference.
Source code in vsscale/onnx.py
scale ¶
scale(
clip: VideoNode,
width: int | None = None,
height: int | None = None,
shift: tuple[float, float] = (0, 0),
**kwargs: Any
) -> VideoNode
Scale the given clip using the ONNX model.
Parameters:
-
clip
¶VideoNode
) –The input clip to be scaled.
-
width
¶int | None
, default:None
) –The target width for scaling. If None, the width of the input clip will be used.
-
height
¶int | None
, default:None
) –The target height for scaling. If None, the height of the input clip will be used.
-
shift
¶tuple[float, float]
, default:(0, 0)
) –A tuple representing the shift values for the x and y axes.
-
**kwargs
¶Any
, default:{}
) –Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _final_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method. Use the prefix inference_ to pass an argument to the inference method.
Returns:
-
VideoNode
–The scaled clip.
Source code in vsscale/onnx.py
supersample ¶
supersample(
clip: VideoNode,
rfactor: float = 2.0,
shift: tuple[TopShift, LeftShift] = (0, 0),
**kwargs: Any
) -> VideoNode
Supersample a clip by a given scaling factor.
Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.
Parameters:
-
clip
¶VideoNode
) –The source clip.
-
rfactor
¶float
, default:2.0
) –Scaling factor for supersampling.
-
shift
¶tuple[TopShift, LeftShift]
, default:(0, 0)
) –Subpixel shift (top, left) applied during scaling.
-
**kwargs
¶Any
, default:{}
) –Additional arguments forwarded to the scale function.
Raises:
-
CustomValueError
–If resulting resolution is non-positive.
Returns:
-
VideoNode
–The supersampled clip.
Source code in vskernels/abstract/base.py
BaseArtCNNChroma ¶
BaseArtCNNChroma(
backend: BackendLike | None = None,
tiles: int | tuple[int, int] | None = None,
tilesize: int | tuple[int, int] | None = None,
overlap: int | tuple[int, int] | None = None,
max_instances: int = 2,
*,
kernel: KernelLike = Catrom,
scaler: ScalerLike | None = None,
shifter: KernelLike | None = None,
**kwargs: Any
)
Bases: BaseArtCNN
Initializes the scaler with the specified parameters.
Parameters:
-
backend
¶BackendLike | None
, default:None
) –The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
-
tiles
¶int | tuple[int, int] | None
, default:None
) –Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.
-
tilesize
¶int | tuple[int, int] | None
, default:None
) –The size of each tile when splitting the image (if tiles are enabled).
-
overlap
¶int | tuple[int, int] | None
, default:None
) –The size of overlap between tiles.
-
max_instances
¶int
, default:2
) –Maximum instances to spawn when scaling a variable resolution clip.
-
kernel
¶KernelLike
, default:Catrom
) –Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
-
scaler
¶ScalerLike | None
, default:None
) –Scaler used for scaling operations. Defaults to kernel.
-
shifter
¶KernelLike | None
, default:None
) –Kernel used for shifting operations. Defaults to kernel.
-
**kwargs
¶Any
, default:{}
) –Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
Methods:
-
calc_tilesize
–Reimplementation of vsmlrt.calc_tilesize helper function
-
ensure_obj
–Ensure that the input is a scaler instance, resolving it if necessary.
-
from_param
–Resolve and return a scaler type from a given input (string, type, or instance).
-
get_scale_args
–Generate the keyword arguments used for scaling.
-
implemented_funcs
–Returns a set of function names that are implemented in the current class and the parent classes.
-
inference
– -
kernel_radius
–Return the effective kernel radius for the scaler.
-
postprocess_clip
– -
preprocess_clip
– -
scale
–Scale the given clip using the ONNX model.
-
supersample
–Supersample a clip by a given scaling factor.
Attributes:
-
backend
– -
kernel
– -
kwargs
(dict[str, Any]
) –Arguments passed to the implemented funcs or internal scale function.
-
max_instances
– -
model
– -
multiple
– -
overlap
– -
overlap_h
– -
overlap_w
– -
pretty_string
(str
) –Cached property returning a user-friendly string representation.
-
scale_function
(Callable[..., VideoNode]
) –Scale function called internally when performing scaling operations.
-
scaler
– -
shifter
– -
tiles
– -
tilesize
–
Source code in vsscale/onnx.py
kwargs instance-attribute
¶
Arguments passed to the implemented funcs or internal scale function.
pretty_string property
¶
pretty_string: str
Cached property returning a user-friendly string representation.
Returns:
-
str
–Pretty-printed string with arguments.
scale_function instance-attribute
¶
scale_function: Callable[..., VideoNode]
Scale function called internally when performing scaling operations.
calc_tilesize ¶
Reimplementation of vsmlrt.calc_tilesize helper function
Source code in vsscale/onnx.py
ensure_obj classmethod
¶
ensure_obj(
scaler: str | type[Self] | Self | None = None,
/,
func_except: FuncExcept | None = None,
) -> Self
Ensure that the input is a scaler instance, resolving it if necessary.
Parameters:
-
scaler
¶str | type[Self] | Self | None
, default:None
) –Scaler identifier (string, class, or instance).
-
func_except
¶FuncExcept | None
, default:None
) –Function returned for custom error handling.
Returns:
-
Self
–Scaler instance.
Source code in vskernels/abstract/base.py
from_param classmethod
¶
from_param(
scaler: str | type[Self] | Self | None = None,
/,
func_except: FuncExcept | None = None,
) -> type[Self]
Resolve and return a scaler type from a given input (string, type, or instance).
Parameters:
-
scaler
¶str | type[Self] | Self | None
, default:None
) –Scaler identifier (string, class, or instance).
-
func_except
¶FuncExcept | None
, default:None
) –Function returned for custom error handling.
Returns:
-
type[Self]
–Resolved scaler type.
Source code in vskernels/abstract/base.py
get_scale_args ¶
get_scale_args(
clip: VideoNode,
shift: tuple[TopShift, LeftShift] = (0, 0),
width: int | None = None,
height: int | None = None,
**kwargs: Any
) -> dict[str, Any]
Generate the keyword arguments used for scaling.
Parameters:
-
clip
¶VideoNode
) –The source clip.
-
shift
¶tuple[TopShift, LeftShift]
, default:(0, 0)
) –Subpixel shift (top, left).
-
width
¶int | None
, default:None
) –Target width.
-
height
¶int | None
, default:None
) –Target height.
-
**kwargs
¶Any
, default:{}
) –Extra parameters to merge.
Returns:
-
dict[str, Any]
–Dictionary of keyword arguments for the scale function.
Source code in vskernels/abstract/base.py
implemented_funcs classmethod
¶
Returns a set of function names that are implemented in the current class and the parent classes.
These functions determine which keyword arguments will be extracted from the init method.
Returns:
Source code in vskernels/abstract/base.py
inference ¶
inference(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
kernel_radius ¶
kernel_radius() -> int
Return the effective kernel radius for the scaler.
Raises:
-
CustomNotImplementedError
–If no kernel radius is defined.
Returns:
-
int
–Kernel radius.
Source code in vskernels/abstract/base.py
postprocess_clip ¶
postprocess_clip(
clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode
Source code in vsscale/onnx.py
preprocess_clip ¶
preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
scale ¶
scale(
clip: VideoNode,
width: int | None = None,
height: int | None = None,
shift: tuple[float, float] = (0, 0),
**kwargs: Any
) -> VideoNode
Scale the given clip using the ONNX model.
Parameters:
-
clip
¶VideoNode
) –The input clip to be scaled.
-
width
¶int | None
, default:None
) –The target width for scaling. If None, the width of the input clip will be used.
-
height
¶int | None
, default:None
) –The target height for scaling. If None, the height of the input clip will be used.
-
shift
¶tuple[float, float]
, default:(0, 0)
) –A tuple representing the shift values for the x and y axes.
-
**kwargs
¶Any
, default:{}
) –Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _final_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method. Use the prefix inference_ to pass an argument to the inference method.
Returns:
-
VideoNode
–The scaled clip.
Source code in vsscale/onnx.py
supersample ¶
supersample(
clip: VideoNode,
rfactor: float = 2.0,
shift: tuple[TopShift, LeftShift] = (0, 0),
**kwargs: Any
) -> VideoNode
Supersample a clip by a given scaling factor.
Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.
Parameters:
-
clip
¶VideoNode
) –The source clip.
-
rfactor
¶float
, default:2.0
) –Scaling factor for supersampling.
-
shift
¶tuple[TopShift, LeftShift]
, default:(0, 0)
) –Subpixel shift (top, left) applied during scaling.
-
**kwargs
¶Any
, default:{}
) –Additional arguments forwarded to the scale function.
Raises:
-
CustomValueError
–If resulting resolution is non-positive.
Returns:
-
VideoNode
–The supersampled clip.
Source code in vskernels/abstract/base.py
BaseArtCNNLuma ¶
BaseArtCNNLuma(
backend: BackendLike | None = None,
tiles: int | tuple[int, int] | None = None,
tilesize: int | tuple[int, int] | None = None,
overlap: int | tuple[int, int] | None = None,
max_instances: int = 2,
*,
kernel: KernelLike = Catrom,
scaler: ScalerLike | None = None,
shifter: KernelLike | None = None,
**kwargs: Any
)
Bases: BaseArtCNN
Initializes the scaler with the specified parameters.
Parameters:
-
backend
¶BackendLike | None
, default:None
) –The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
-
tiles
¶int | tuple[int, int] | None
, default:None
) –Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.
-
tilesize
¶int | tuple[int, int] | None
, default:None
) –The size of each tile when splitting the image (if tiles are enabled).
-
overlap
¶int | tuple[int, int] | None
, default:None
) –The size of overlap between tiles.
-
max_instances
¶int
, default:2
) –Maximum instances to spawn when scaling a variable resolution clip.
-
kernel
¶KernelLike
, default:Catrom
) –Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
-
scaler
¶ScalerLike | None
, default:None
) –Scaler used for scaling operations. Defaults to kernel.
-
shifter
¶KernelLike | None
, default:None
) –Kernel used for shifting operations. Defaults to kernel.
-
**kwargs
¶Any
, default:{}
) –Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
Methods:
-
calc_tilesize
–Reimplementation of vsmlrt.calc_tilesize helper function
-
ensure_obj
–Ensure that the input is a scaler instance, resolving it if necessary.
-
from_param
–Resolve and return a scaler type from a given input (string, type, or instance).
-
get_scale_args
–Generate the keyword arguments used for scaling.
-
implemented_funcs
–Returns a set of function names that are implemented in the current class and the parent classes.
-
inference
–Runs inference on the given video clip using the configured model and backend.
-
kernel_radius
–Return the effective kernel radius for the scaler.
-
postprocess_clip
–Handles postprocessing of the model's output after inference.
-
preprocess_clip
– -
scale
–Scale the given clip using the ONNX model.
-
supersample
–Supersample a clip by a given scaling factor.
Attributes:
-
backend
– -
kernel
– -
kwargs
(dict[str, Any]
) –Arguments passed to the implemented funcs or internal scale function.
-
max_instances
– -
model
– -
multiple
– -
overlap
– -
overlap_h
– -
overlap_w
– -
pretty_string
(str
) –Cached property returning a user-friendly string representation.
-
scale_function
(Callable[..., VideoNode]
) –Scale function called internally when performing scaling operations.
-
scaler
– -
shifter
– -
tiles
– -
tilesize
–
Source code in vsscale/onnx.py
kwargs instance-attribute
¶
Arguments passed to the implemented funcs or internal scale function.
pretty_string property
¶
pretty_string: str
Cached property returning a user-friendly string representation.
Returns:
-
str
–Pretty-printed string with arguments.
scale_function instance-attribute
¶
scale_function: Callable[..., VideoNode]
Scale function called internally when performing scaling operations.
calc_tilesize ¶
Reimplementation of vsmlrt.calc_tilesize helper function
Source code in vsscale/onnx.py
ensure_obj classmethod
¶
ensure_obj(
scaler: str | type[Self] | Self | None = None,
/,
func_except: FuncExcept | None = None,
) -> Self
Ensure that the input is a scaler instance, resolving it if necessary.
Parameters:
-
scaler
¶str | type[Self] | Self | None
, default:None
) –Scaler identifier (string, class, or instance).
-
func_except
¶FuncExcept | None
, default:None
) –Function returned for custom error handling.
Returns:
-
Self
–Scaler instance.
Source code in vskernels/abstract/base.py
from_param classmethod
¶
from_param(
scaler: str | type[Self] | Self | None = None,
/,
func_except: FuncExcept | None = None,
) -> type[Self]
Resolve and return a scaler type from a given input (string, type, or instance).
Parameters:
-
scaler
¶str | type[Self] | Self | None
, default:None
) –Scaler identifier (string, class, or instance).
-
func_except
¶FuncExcept | None
, default:None
) –Function returned for custom error handling.
Returns:
-
type[Self]
–Resolved scaler type.
Source code in vskernels/abstract/base.py
get_scale_args ¶
get_scale_args(
clip: VideoNode,
shift: tuple[TopShift, LeftShift] = (0, 0),
width: int | None = None,
height: int | None = None,
**kwargs: Any
) -> dict[str, Any]
Generate the keyword arguments used for scaling.
Parameters:
-
clip
¶VideoNode
) –The source clip.
-
shift
¶tuple[TopShift, LeftShift]
, default:(0, 0)
) –Subpixel shift (top, left).
-
width
¶int | None
, default:None
) –Target width.
-
height
¶int | None
, default:None
) –Target height.
-
**kwargs
¶Any
, default:{}
) –Extra parameters to merge.
Returns:
-
dict[str, Any]
–Dictionary of keyword arguments for the scale function.
Source code in vskernels/abstract/base.py
implemented_funcs classmethod
¶
Returns a set of function names that are implemented in the current class and the parent classes.
These functions determine which keyword arguments will be extracted from the init method.
Returns:
Source code in vskernels/abstract/base.py
inference ¶
inference(clip: VideoNode, **kwargs: Any) -> VideoNode
Runs inference on the given video clip using the configured model and backend.
Source code in vsscale/onnx.py
kernel_radius ¶
kernel_radius() -> int
Return the effective kernel radius for the scaler.
Raises:
-
CustomNotImplementedError
–If no kernel radius is defined.
Returns:
-
int
–Kernel radius.
Source code in vskernels/abstract/base.py
postprocess_clip ¶
postprocess_clip(
clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode
Handles postprocessing of the model's output after inference.
Source code in vsscale/onnx.py
preprocess_clip ¶
preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
scale ¶
scale(
clip: VideoNode,
width: int | None = None,
height: int | None = None,
shift: tuple[float, float] = (0, 0),
**kwargs: Any
) -> VideoNode
Scale the given clip using the ONNX model.
Parameters:
-
clip
¶VideoNode
) –The input clip to be scaled.
-
width
¶int | None
, default:None
) –The target width for scaling. If None, the width of the input clip will be used.
-
height
¶int | None
, default:None
) –The target height for scaling. If None, the height of the input clip will be used.
-
shift
¶tuple[float, float]
, default:(0, 0)
) –A tuple representing the shift values for the x and y axes.
-
**kwargs
¶Any
, default:{}
) –Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _final_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method. Use the prefix inference_ to pass an argument to the inference method.
Returns:
-
VideoNode
–The scaled clip.
Source code in vsscale/onnx.py
supersample ¶
supersample(
clip: VideoNode,
rfactor: float = 2.0,
shift: tuple[TopShift, LeftShift] = (0, 0),
**kwargs: Any
) -> VideoNode
Supersample a clip by a given scaling factor.
Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.
Parameters:
-
clip
¶VideoNode
) –The source clip.
-
rfactor
¶float
, default:2.0
) –Scaling factor for supersampling.
-
shift
¶tuple[TopShift, LeftShift]
, default:(0, 0)
) –Subpixel shift (top, left) applied during scaling.
-
**kwargs
¶Any
, default:{}
) –Additional arguments forwarded to the scale function.
Raises:
-
CustomValueError
–If resulting resolution is non-positive.
Returns:
-
VideoNode
–The supersampled clip.
Source code in vskernels/abstract/base.py
BaseDPIR ¶
BaseDPIR(
strength: SupportsFloat | VideoNode = 10,
backend: BackendLike | None = None,
tiles: int | tuple[int, int] | None = None,
tilesize: int | tuple[int, int] | None = None,
overlap: int | tuple[int, int] | None = None,
*,
kernel: KernelLike = Catrom,
scaler: ScalerLike | None = None,
shifter: KernelLike | None = None,
**kwargs: Any
)
Bases: BaseOnnxScaler
Initializes the scaler with the specified parameters.
Parameters:
-
strength
¶SupportsFloat | VideoNode
, default:10
) –Threshold (8-bit scale) strength for deblocking/denoising. If a VideoNode is used, it must be in GRAY8, GRAYH, or GRAYS format, with pixel values representing the 8-bit thresholds.
-
backend
¶BackendLike | None
, default:None
) –The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
-
tiles
¶int | tuple[int, int] | None
, default:None
) –Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.
-
tilesize
¶int | tuple[int, int] | None
, default:None
) –The size of each tile when splitting the image (if tiles are enabled).
-
overlap
¶int | tuple[int, int] | None
, default:None
) –The size of overlap between tiles.
-
kernel
¶KernelLike
, default:Catrom
) –Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
-
scaler
¶ScalerLike | None
, default:None
) –Scaler used for scaling operations. Defaults to kernel.
-
shifter
¶KernelLike | None
, default:None
) –Kernel used for shifting operations. Defaults to kernel.
-
**kwargs
¶Any
, default:{}
) –Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
Methods:
-
calc_tilesize
–Reimplementation of vsmlrt.calc_tilesize helper function
-
ensure_obj
–Ensure that the input is a scaler instance, resolving it if necessary.
-
from_param
–Resolve and return a scaler type from a given input (string, type, or instance).
-
get_scale_args
–Generate the keyword arguments used for scaling.
-
implemented_funcs
–Returns a set of function names that are implemented in the current class and the parent classes.
-
inference
– -
kernel_radius
–Return the effective kernel radius for the scaler.
-
postprocess_clip
– -
preprocess_clip
– -
scale
– -
supersample
–Supersample a clip by a given scaling factor.
Attributes:
-
backend
– -
kernel
– -
kwargs
(dict[str, Any]
) –Arguments passed to the implemented funcs or internal scale function.
-
max_instances
– -
model
– -
multiple
– -
overlap
– -
overlap_h
– -
overlap_w
– -
pretty_string
(str
) –Cached property returning a user-friendly string representation.
-
scale_function
(Callable[..., VideoNode]
) –Scale function called internally when performing scaling operations.
-
scaler
– -
shifter
– -
strength
– -
tiles
– -
tilesize
–
Source code in vsscale/onnx.py
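Example usage (a minimal sketch of building a clip for the strength parameter described above; the value 15 is purely illustrative and clip is assumed to be an existing VideoNode):
import vapoursynth as vs

core = vs.core

# A spatially uniform strength mask: a GRAY8 clip whose pixel values are
# read as 8-bit thresholds. Painting different values into different
# regions gives per-region deblocking/denoising strength.
strength_clip = core.std.BlankClip(clip, format=vs.GRAY8, color=15, keep=True)

# strength_clip can then be passed as strength= to the DPIR scalers
# documented in this module.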
kwargs instance-attribute
¶
Arguments passed to the implemented funcs or internal scale function.
pretty_string property
¶
pretty_string: str
Cached property returning a user-friendly string representation.
Returns:
-
str
–Pretty-printed string with arguments.
scale_function instance-attribute
¶
scale_function: Callable[..., VideoNode]
Scale function called internally when performing scaling operations.
calc_tilesize ¶
Reimplementation of vsmlrt.calc_tilesize helper function
Source code in vsscale/onnx.py
ensure_obj classmethod
¶
ensure_obj(
scaler: str | type[Self] | Self | None = None,
/,
func_except: FuncExcept | None = None,
) -> Self
Ensure that the input is a scaler instance, resolving it if necessary.
Parameters:
-
scaler
¶str | type[Self] | Self | None
, default:None
) –Scaler identifier (string, class, or instance).
-
func_except
¶FuncExcept | None
, default:None
) –Function returned for custom error handling.
Returns:
-
Self
–Scaler instance.
Source code in vskernels/abstract/base.py
from_param classmethod
¶
from_param(
scaler: str | type[Self] | Self | None = None,
/,
func_except: FuncExcept | None = None,
) -> type[Self]
Resolve and return a scaler type from a given input (string, type, or instance).
Parameters:
-
scaler
¶str | type[Self] | Self | None
, default:None
) –Scaler identifier (string, class, or instance).
-
func_except
¶FuncExcept | None
, default:None
) –Function returned for custom error handling.
Returns:
-
type[Self]
–Resolved scaler type.
Source code in vskernels/abstract/base.py
get_scale_args ¶
get_scale_args(
clip: VideoNode,
shift: tuple[TopShift, LeftShift] = (0, 0),
width: int | None = None,
height: int | None = None,
**kwargs: Any
) -> dict[str, Any]
Generate the keyword arguments used for scaling.
Parameters:
-
clip
¶VideoNode
) –The source clip.
-
shift
¶tuple[TopShift, LeftShift]
, default:(0, 0)
) –Subpixel shift (top, left).
-
width
¶int | None
, default:None
) –Target width.
-
height
¶int | None
, default:None
) –Target height.
-
**kwargs
¶Any
, default:{}
) –Extra parameters to merge.
Returns:
-
dict[str, Any]
–Dictionary of keyword arguments for the scale function.
Source code in vskernels/abstract/base.py
implemented_funcs classmethod
¶
Returns a set of function names that are implemented in the current class and the parent classes.
These functions determine which keyword arguments will be extracted from the init method.
Returns:
Source code in vskernels/abstract/base.py
inference ¶
inference(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
kernel_radius ¶
kernel_radius() -> int
Return the effective kernel radius for the scaler.
Raises:
-
CustomNotImplementedError
–If no kernel radius is defined.
Returns:
-
int
–Kernel radius.
Source code in vskernels/abstract/base.py
postprocess_clip ¶
postprocess_clip(
clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode
Source code in vsscale/onnx.py
preprocess_clip ¶
preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
scale ¶
scale(
clip: VideoNode,
width: int | None = None,
height: int | None = None,
shift: tuple[float, float] = (0, 0),
*,
copy_props: bool = True,
**kwargs: Any
) -> VideoNode
Source code in vsscale/onnx.py
supersample ¶
supersample(
clip: VideoNode,
rfactor: float = 2.0,
shift: tuple[TopShift, LeftShift] = (0, 0),
**kwargs: Any
) -> VideoNode
Supersample a clip by a given scaling factor.
Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.
Parameters:
-
clip
¶VideoNode
) –The source clip.
-
rfactor
¶float
, default:2.0
) –Scaling factor for supersampling.
-
shift
¶tuple[TopShift, LeftShift]
, default:(0, 0)
) –Subpixel shift (top, left) applied during scaling.
-
**kwargs
¶Any
, default:{}
) –Additional arguments forwarded to the scale function.
Raises:
-
CustomValueError
–If resulting resolution is non-positive.
Returns:
-
VideoNode
–The supersampled clip.
Source code in vskernels/abstract/base.py
BaseOnnxScaler ¶
BaseOnnxScaler(
model: SPathLike | None = None,
backend: BackendLike | None = None,
tiles: int | tuple[int, int] | None = None,
tilesize: int | tuple[int, int] | None = None,
overlap: int | tuple[int, int] | None = None,
multiple: int = 1,
max_instances: int = 2,
*,
kernel: KernelLike = Catrom,
scaler: ScalerLike | None = None,
shifter: KernelLike | None = None,
**kwargs: Any
)
Bases: BaseGenericScaler
, ABC
Abstract generic scaler class for an ONNX model.
Initializes the scaler with the specified parameters.
Parameters:
-
model
¶SPathLike | None
, default:None
) –Path to the ONNX model file.
-
backend
¶BackendLike | None
, default:None
) –The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
-
tiles
¶int | tuple[int, int] | None
, default:None
) –Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.
-
tilesize
¶int | tuple[int, int] | None
, default:None
) –The size of each tile when splitting the image (if tiles are enabled).
-
overlap
¶int | tuple[int, int] | None
, default:None
) –The size of overlap between tiles.
-
multiple
¶int
, default:1
) –Value the tile dimensions must be a multiple of.
-
max_instances
¶int
, default:2
) –Maximum instances to spawn when scaling a variable resolution clip.
-
kernel
¶KernelLike
, default:Catrom
) –Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
-
scaler
¶ScalerLike | None
, default:None
) –Scaler used for scaling operations. Defaults to kernel.
-
shifter
¶KernelLike | None
, default:None
) –Kernel used for shifting operations. Defaults to kernel.
-
**kwargs
¶Any
, default:{}
) –Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
Methods:
-
calc_tilesize
–Reimplementation of vsmlrt.calc_tilesize helper function
-
ensure_obj
–Ensure that the input is a scaler instance, resolving it if necessary.
-
from_param
–Resolve and return a scaler type from a given input (string, type, or instance).
-
get_scale_args
–Generate the keyword arguments used for scaling.
-
implemented_funcs
–Returns a set of function names that are implemented in the current class and the parent classes.
-
inference
–Runs inference on the given video clip using the configured model and backend.
-
kernel_radius
–Return the effective kernel radius for the scaler.
-
postprocess_clip
–Handles postprocessing of the model's output after inference.
-
preprocess_clip
–Performs preprocessing on the clip prior to inference.
-
scale
–Scale the given clip using the ONNX model.
-
supersample
–Supersample a clip by a given scaling factor.
Attributes:
-
backend
– -
kernel
– -
kwargs
(dict[str, Any]
) –Arguments passed to the implemented funcs or internal scale function.
-
max_instances
– -
model
– -
multiple
– -
overlap
– -
overlap_h
– -
overlap_w
– -
pretty_string
(str
) –Cached property returning a user-friendly string representation.
-
scale_function
(Callable[..., VideoNode]
) –Scale function called internally when performing scaling operations.
-
scaler
– -
shifter
– -
tiles
– -
tilesize
–
Source code in vsscale/onnx.py
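Example usage (a minimal sketch of tiled inference to limit VRAM use with the GenericOnnxScaler documented in this module; "model.onnx" is a placeholder path and the tile/overlap values are purely illustrative):
from vsscale import GenericOnnxScaler

# Process the frame in 512x512 tiles with a 16-pixel overlap between
# tiles instead of running inference on the whole frame at once.
scaler = GenericOnnxScaler("model.onnx", tilesize=512, overlap=16)
processed = scaler.scale(clip)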
kwargs instance-attribute
¶
Arguments passed to the implemented funcs or internal scale function.
pretty_string property
¶
pretty_string: str
Cached property returning a user-friendly string representation.
Returns:
-
str
–Pretty-printed string with arguments.
scale_function instance-attribute
¶
scale_function: Callable[..., VideoNode]
Scale function called internally when performing scaling operations.
calc_tilesize ¶
Reimplementation of vsmlrt.calc_tilesize helper function
Source code in vsscale/onnx.py
ensure_obj classmethod
¶
ensure_obj(
scaler: str | type[Self] | Self | None = None,
/,
func_except: FuncExcept | None = None,
) -> Self
Ensure that the input is a scaler instance, resolving it if necessary.
Parameters:
-
scaler
¶str | type[Self] | Self | None
, default:None
) –Scaler identifier (string, class, or instance).
-
func_except
¶FuncExcept | None
, default:None
) –Function returned for custom error handling.
Returns:
-
Self
–Scaler instance.
Source code in vskernels/abstract/base.py
from_param classmethod
¶
from_param(
scaler: str | type[Self] | Self | None = None,
/,
func_except: FuncExcept | None = None,
) -> type[Self]
Resolve and return a scaler type from a given input (string, type, or instance).
Parameters:
-
scaler
¶str | type[Self] | Self | None
, default:None
) –Scaler identifier (string, class, or instance).
-
func_except
¶FuncExcept | None
, default:None
) –Function returned for custom error handling.
Returns:
-
type[Self]
–Resolved scaler type.
Source code in vskernels/abstract/base.py
get_scale_args ¶
get_scale_args(
clip: VideoNode,
shift: tuple[TopShift, LeftShift] = (0, 0),
width: int | None = None,
height: int | None = None,
**kwargs: Any
) -> dict[str, Any]
Generate the keyword arguments used for scaling.
Parameters:
-
clip
¶VideoNode
) –The source clip.
-
shift
¶tuple[TopShift, LeftShift]
, default:(0, 0)
) –Subpixel shift (top, left).
-
width
¶int | None
, default:None
) –Target width.
-
height
¶int | None
, default:None
) –Target height.
-
**kwargs
¶Any
, default:{}
) –Extra parameters to merge.
Returns:
-
dict[str, Any]
–Dictionary of keyword arguments for the scale function.
Source code in vskernels/abstract/base.py
implemented_funcs classmethod
¶
Returns a set of function names that are implemented in the current class and the parent classes.
These functions determine which keyword arguments will be extracted from the init method.
Returns:
Source code in vskernels/abstract/base.py
inference ¶
inference(clip: VideoNode, **kwargs: Any) -> VideoNode
Runs inference on the given video clip using the configured model and backend.
Source code in vsscale/onnx.py
kernel_radius ¶
kernel_radius() -> int
Return the effective kernel radius for the scaler.
Raises:
-
CustomNotImplementedError
–If no kernel radius is defined.
Returns:
-
int
–Kernel radius.
Source code in vskernels/abstract/base.py
postprocess_clip ¶
postprocess_clip(
clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode
Handles postprocessing of the model's output after inference.
Source code in vsscale/onnx.py
preprocess_clip ¶
preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode
Performs preprocessing on the clip prior to inference.
Source code in vsscale/onnx.py
scale ¶
scale(
clip: VideoNode,
width: int | None = None,
height: int | None = None,
shift: tuple[float, float] = (0, 0),
**kwargs: Any
) -> VideoNode
Scale the given clip using the ONNX model.
Parameters:
-
clip
¶VideoNode
) –The input clip to be scaled.
-
width
¶int | None
, default:None
) –The target width for scaling. If None, the width of the input clip will be used.
-
height
¶int | None
, default:None
) –The target height for scaling. If None, the height of the input clip will be used.
-
shift
¶tuple[float, float]
, default:(0, 0)
) –A tuple representing the shift values for the x and y axes.
-
**kwargs
¶Any
, default:{}
) –Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _final_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method. Use the prefix inference_ to pass an argument to the inference method.
Returns:
-
VideoNode
–The scaled clip.
Source code in vsscale/onnx.py
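Example usage (a minimal sketch of the prefix routing described above; scaler is assumed to be an instance of any scaler from this module, and the keyword names matrix and tilesize are purely illustrative, not documented options):
# The prefixes are stripped and the remaining names are forwarded to the
# matching method.
processed = scaler.scale(
    clip,
    clip.width * 2,
    clip.height * 2,
    preprocess_matrix=1,      # arrives at preprocess_clip() as matrix=1
    inference_tilesize=512,   # arrives at inference() as tilesize=512
)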
supersample ¶
supersample(
clip: VideoNode,
rfactor: float = 2.0,
shift: tuple[TopShift, LeftShift] = (0, 0),
**kwargs: Any
) -> VideoNode
Supersample a clip by a given scaling factor.
Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.
Parameters:
-
clip
¶VideoNode
) –The source clip.
-
rfactor
¶float
, default:2.0
) –Scaling factor for supersampling.
-
shift
¶tuple[TopShift, LeftShift]
, default:(0, 0)
) –Subpixel shift (top, left) applied during scaling.
-
**kwargs
¶Any
, default:{}
) –Additional arguments forwarded to the scale function.
Raises:
-
CustomValueError
–If resulting resolution is non-positive.
Returns:
-
VideoNode
–The supersampled clip.
Source code in vskernels/abstract/base.py
BaseOnnxScalerRGB ¶
BaseOnnxScalerRGB(
model: SPathLike | None = None,
backend: BackendLike | None = None,
tiles: int | tuple[int, int] | None = None,
tilesize: int | tuple[int, int] | None = None,
overlap: int | tuple[int, int] | None = None,
multiple: int = 1,
max_instances: int = 2,
*,
kernel: KernelLike = Catrom,
scaler: ScalerLike | None = None,
shifter: KernelLike | None = None,
**kwargs: Any
)
Bases: BaseOnnxScaler
Abstract ONNX class for RGB models.
Initializes the scaler with the specified parameters.
Parameters:
-
model
¶SPathLike | None
, default:None
) –Path to the ONNX model file.
-
backend
¶BackendLike | None
, default:None
) –The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
-
tiles
¶int | tuple[int, int] | None
, default:None
) –Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.
-
tilesize
¶int | tuple[int, int] | None
, default:None
) –The size of each tile when splitting the image (if tiles are enabled).
-
overlap
¶int | tuple[int, int] | None
, default:None
) –The size of overlap between tiles.
-
multiple
¶int
, default:1
) –Value the tile dimensions must be a multiple of.
-
max_instances
¶int
, default:2
) –Maximum instances to spawn when scaling a variable resolution clip.
-
kernel
¶KernelLike
, default:Catrom
) –Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
-
scaler
¶ScalerLike | None
, default:None
) –Scaler used for scaling operations. Defaults to kernel.
-
shifter
¶KernelLike | None
, default:None
) –Kernel used for shifting operations. Defaults to kernel.
-
**kwargs
¶Any
, default:{}
) –Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
Methods:
-
calc_tilesize
–Reimplementation of vsmlrt.calc_tilesize helper function
-
ensure_obj
–Ensure that the input is a scaler instance, resolving it if necessary.
-
from_param
–Resolve and return a scaler type from a given input (string, type, or instance).
-
get_scale_args
–Generate the keyword arguments used for scaling.
-
implemented_funcs
–Returns a set of function names that are implemented in the current class and the parent classes.
-
inference
–Runs inference on the given video clip using the configured model and backend.
-
kernel_radius
–Return the effective kernel radius for the scaler.
-
postprocess_clip
– -
preprocess_clip
– -
scale
–Scale the given clip using the ONNX model.
-
supersample
–Supersample a clip by a given scaling factor.
Attributes:
-
backend
– -
kernel
– -
kwargs
(dict[str, Any]
) –Arguments passed to the implemented funcs or internal scale function.
-
max_instances
– -
model
– -
multiple
– -
overlap
– -
overlap_h
– -
overlap_w
– -
pretty_string
(str
) –Cached property returning a user-friendly string representation.
-
scale_function
(Callable[..., VideoNode]
) –Scale function called internally when performing scaling operations.
-
scaler
– -
shifter
– -
tiles
– -
tilesize
–
Source code in vsscale/onnx.py
kwargs instance-attribute
¶
Arguments passed to the implemented funcs or internal scale function.
pretty_string property
¶
pretty_string: str
Cached property returning a user-friendly string representation.
Returns:
-
str
–Pretty-printed string with arguments.
scale_function instance-attribute
¶
scale_function: Callable[..., VideoNode]
Scale function called internally when performing scaling operations.
calc_tilesize ¶
Reimplementation of vsmlrt.calc_tilesize helper function
Source code in vsscale/onnx.py
ensure_obj classmethod
¶
ensure_obj(
scaler: str | type[Self] | Self | None = None,
/,
func_except: FuncExcept | None = None,
) -> Self
Ensure that the input is a scaler instance, resolving it if necessary.
Parameters:
-
scaler
¶str | type[Self] | Self | None
, default:None
) –Scaler identifier (string, class, or instance).
-
func_except
¶FuncExcept | None
, default:None
) –Function returned for custom error handling.
Returns:
-
Self
–Scaler instance.
Source code in vskernels/abstract/base.py
from_param classmethod
¶
from_param(
scaler: str | type[Self] | Self | None = None,
/,
func_except: FuncExcept | None = None,
) -> type[Self]
Resolve and return a scaler type from a given input (string, type, or instance).
Parameters:
-
scaler
¶str | type[Self] | Self | None
, default:None
) –Scaler identifier (string, class, or instance).
-
func_except
¶FuncExcept | None
, default:None
) –Function returned for custom error handling.
Returns:
-
type[Self]
–Resolved scaler type.
Source code in vskernels/abstract/base.py
get_scale_args ¶
get_scale_args(
clip: VideoNode,
shift: tuple[TopShift, LeftShift] = (0, 0),
width: int | None = None,
height: int | None = None,
**kwargs: Any
) -> dict[str, Any]
Generate the keyword arguments used for scaling.
Parameters:
-
clip
¶VideoNode
) –The source clip.
-
shift
¶tuple[TopShift, LeftShift]
, default:(0, 0)
) –Subpixel shift (top, left).
-
width
¶int | None
, default:None
) –Target width.
-
height
¶int | None
, default:None
) –Target height.
-
**kwargs
¶Any
, default:{}
) –Extra parameters to merge.
Returns:
-
dict[str, Any]
–Dictionary of keyword arguments for the scale function.
Source code in vskernels/abstract/base.py
implemented_funcs classmethod
¶
Returns a set of function names that are implemented in the current class and the parent classes.
These functions determine which keyword arguments will be extracted from the init method.
Returns:
Source code in vskernels/abstract/base.py
inference ¶
inference(clip: VideoNode, **kwargs: Any) -> VideoNode
Runs inference on the given video clip using the configured model and backend.
Source code in vsscale/onnx.py
kernel_radius ¶
kernel_radius() -> int
Return the effective kernel radius for the scaler.
Raises:
-
CustomNotImplementedError
–If no kernel radius is defined.
Returns:
-
int
–Kernel radius.
Source code in vskernels/abstract/base.py
postprocess_clip ¶
postprocess_clip(
clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode
Source code in vsscale/onnx.py
preprocess_clip ¶
preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
scale ¶
scale(
clip: VideoNode,
width: int | None = None,
height: int | None = None,
shift: tuple[float, float] = (0, 0),
**kwargs: Any
) -> VideoNode
Scale the given clip using the ONNX model.
Parameters:
-
clip
¶VideoNode
) –The input clip to be scaled.
-
width
¶int | None
, default:None
) –The target width for scaling. If None, the width of the input clip will be used.
-
height
¶int | None
, default:None
) –The target height for scaling. If None, the height of the input clip will be used.
-
shift
¶tuple[float, float]
, default:(0, 0)
) –A tuple representing the shift values for the x and y axes.
-
**kwargs
¶Any
, default:{}
) –Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _final_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method. Use the prefix inference_ to pass an argument to the inference method.
Returns:
-
VideoNode
–The scaled clip.
Source code in vsscale/onnx.py
supersample ¶
supersample(
clip: VideoNode,
rfactor: float = 2.0,
shift: tuple[TopShift, LeftShift] = (0, 0),
**kwargs: Any
) -> VideoNode
Supersample a clip by a given scaling factor.
Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.
Parameters:
-
clip
¶VideoNode
) –The source clip.
-
rfactor
¶float
, default:2.0
) –Scaling factor for supersampling.
-
shift
¶tuple[TopShift, LeftShift]
, default:(0, 0)
) –Subpixel shift (top, left) applied during scaling.
-
**kwargs
¶Any
, default:{}
) –Additional arguments forwarded to the scale function.
Raises:
-
CustomValueError
–If resulting resolution is non-positive.
Returns:
-
VideoNode
–The supersampled clip.
Source code in vskernels/abstract/base.py
BaseWaifu2x ¶
BaseWaifu2x(
scale: Literal[1, 2, 4] = 2,
noise: Literal[-1, 0, 1, 2, 3] = -1,
backend: BackendLike | None = None,
tiles: int | tuple[int, int] | None = None,
tilesize: int | tuple[int, int] | None = None,
overlap: int | tuple[int, int] | None = None,
max_instances: int = 2,
*,
kernel: KernelLike = Catrom,
scaler: ScalerLike | None = None,
shifter: KernelLike | None = None,
**kwargs: Any
)
Bases: BaseOnnxScaler
Initializes the scaler with the specified parameters.
Parameters:
-
scale
¶Literal[1, 2, 4]
, default:2
) –Upscaling factor. 1 = no upscaling, 2 = 2x, 4 = 4x.
-
noise
¶Literal[-1, 0, 1, 2, 3]
, default:-1
) –Noise reduction level. -1 = none, 0 = low, 1 = medium, 2 = high, 3 = highest.
-
backend
¶BackendLike | None
, default:None
) –The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
-
tiles
¶int | tuple[int, int] | None
, default:None
) –Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.
-
tilesize
¶int | tuple[int, int] | None
, default:None
) –The size of each tile when splitting the image (if tiles are enabled).
-
overlap
¶int | tuple[int, int] | None
, default:None
) –The size of overlap between tiles.
-
max_instances
¶int
, default:2
) –Maximum instances to spawn when scaling a variable resolution clip.
-
kernel
¶KernelLike
, default:Catrom
) –Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
-
scaler
¶ScalerLike | None
, default:None
) –Scaler used for scaling operations. Defaults to kernel.
-
shifter
¶KernelLike | None
, default:None
) –Kernel used for shifting operations. Defaults to kernel.
-
**kwargs
¶Any
, default:{}
) –Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
Methods:
-
calc_tilesize
–Reimplementation of vsmlrt.calc_tilesize helper function
-
ensure_obj
–Ensure that the input is a scaler instance, resolving it if necessary.
-
from_param
–Resolve and return a scaler type from a given input (string, type, or instance).
-
get_scale_args
–Generate the keyword arguments used for scaling.
-
implemented_funcs
–Returns a set of function names that are implemented in the current class and the parent classes.
-
inference
– -
kernel_radius
–Return the effective kernel radius for the scaler.
-
postprocess_clip
–Handles postprocessing of the model's output after inference.
-
preprocess_clip
–Performs preprocessing on the clip prior to inference.
-
scale
–Scale the given clip using the ONNX model.
-
supersample
–Supersample a clip by a given scaling factor.
Attributes:
-
backend
– -
kernel
– -
kwargs
(dict[str, Any]
) –Arguments passed to the implemented funcs or internal scale function.
-
max_instances
– -
model
– -
multiple
– -
noise
(Literal[-1, 0, 1, 2, 3]
) –Noise reduction level
-
overlap
– -
overlap_h
– -
overlap_w
– -
pretty_string
(str
) –Cached property returning a user-friendly string representation.
-
scale_function
(Callable[..., VideoNode]
) –Scale function called internally when performing scaling operations.
-
scale_w2x
(Literal[1, 2, 4]
) –Upscaling factor.
-
scaler
– -
shifter
– -
tiles
– -
tilesize
–
Source code in vsscale/onnx.py
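Example usage (a minimal sketch of the scale and noise parameters described above, using the public Waifu2x class with its default model variant; clip is assumed to be an existing VideoNode):
from vsscale import Waifu2x

# 2x upscale with light noise reduction; backend selection, tiling and
# every other option are left at their defaults.
doubled = Waifu2x(scale=2, noise=1).supersample(clip)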
kwargs instance-attribute
¶
Arguments passed to the implemented funcs or internal scale function.
pretty_string property
¶
pretty_string: str
Cached property returning a user-friendly string representation.
Returns:
-
str
–Pretty-printed string with arguments.
scale_function instance-attribute
¶
scale_function: Callable[..., VideoNode]
Scale function called internally when performing scaling operations.
calc_tilesize ¶
Reimplementation of vsmlrt.calc_tilesize helper function
Source code in vsscale/onnx.py
ensure_obj classmethod
¶
ensure_obj(
scaler: str | type[Self] | Self | None = None,
/,
func_except: FuncExcept | None = None,
) -> Self
Ensure that the input is a scaler instance, resolving it if necessary.
Parameters:
-
scaler
¶str | type[Self] | Self | None
, default:None
) –Scaler identifier (string, class, or instance).
-
func_except
¶FuncExcept | None
, default:None
) –Function returned for custom error handling.
Returns:
-
Self
–Scaler instance.
Source code in vskernels/abstract/base.py
from_param classmethod
¶
from_param(
scaler: str | type[Self] | Self | None = None,
/,
func_except: FuncExcept | None = None,
) -> type[Self]
Resolve and return a scaler type from a given input (string, type, or instance).
Parameters:
-
scaler
¶str | type[Self] | Self | None
, default:None
) –Scaler identifier (string, class, or instance).
-
func_except
¶FuncExcept | None
, default:None
) –Function returned for custom error handling.
Returns:
Source code in vskernels/abstract/base.py
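Both classmethods normalise a ScalerLike value. A minimal sketch of the difference (the do_upscale wrapper is a hypothetical helper, and passing None to fall back to the default model is an assumption drawn from the signatures above):
import vapoursynth as vs
from vsscale import ArtCNN
def do_upscale(clip: vs.VideoNode, scaler=None) -> vs.VideoNode:
    # ensure_obj accepts a string, a class, an instance, or None
    # and always hands back a ready-to-use scaler instance.
    resolved = ArtCNN.ensure_obj(scaler)
    return resolved.scale(clip, clip.width * 2, clip.height * 2)
# from_param performs the same resolution but returns the type,
# leaving instantiation (and its arguments) to the caller.
scaler_t = ArtCNN.from_param(None)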
get_scale_args ¶
get_scale_args(
clip: VideoNode,
shift: tuple[TopShift, LeftShift] = (0, 0),
width: int | None = None,
height: int | None = None,
**kwargs: Any
) -> dict[str, Any]
Generate the keyword arguments used for scaling.
Parameters:
-
clip
¶VideoNode
) –The source clip.
-
shift
¶tuple[TopShift, LeftShift]
, default:(0, 0)
) –Subpixel shift (top, left).
-
width
¶int | None
, default:None
) –Target width.
-
height
¶int | None
, default:None
) –Target height.
-
**kwargs
¶Any
, default:{}
) –Extra parameters to merge.
Returns:
Source code in vskernels/abstract/base.py
implemented_funcs classmethod
¶
Returns a set of function names that are implemented in the current class and the parent classes.
These functions determine which keyword arguments will be extracted from the init method.
Returns:
Source code in vskernels/abstract/base.py
inference ¶
inference(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
kernel_radius ¶
kernel_radius() -> int
Return the effective kernel radius for the scaler.
Raises:
-
CustomNotImplementedError
–If no kernel radius is defined.
Returns:
-
int
–Kernel radius.
Source code in vskernels/abstract/base.py
postprocess_clip ¶
postprocess_clip(
clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode
Handles postprocessing of the model's output after inference.
Source code in vsscale/onnx.py
preprocess_clip ¶
preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode
Performs preprocessing on the clip prior to inference.
Source code in vsscale/onnx.py
scale ¶
scale(
clip: VideoNode,
width: int | None = None,
height: int | None = None,
shift: tuple[float, float] = (0, 0),
**kwargs: Any
) -> VideoNode
Scale the given clip using the ONNX model.
Parameters:
-
clip
¶VideoNode
) –The input clip to be scaled.
-
width
¶int | None
, default:None
) –The target width for scaling. If None, the width of the input clip will be used.
-
height
¶int | None
, default:None
) –The target height for scaling. If None, the height of the input clip will be used.
-
shift
¶tuple[float, float]
, default:(0, 0)
) –A tuple representing the shift values for the x and y axes.
-
**kwargs
¶Any
, default:{}
) –Additional arguments to be passed to the
preprocess_clip
,postprocess_clip
,inference
, and_final_scale
methods. Use the prefixpreprocess_
orpostprocess_
to pass an argument to the respective method. Use the prefixinference_
to pass an argument to the inference method.
Returns:
-
VideoNode
–The scaled clip.
Source code in vsscale/onnx.py
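A sketch of the prefix routing described above; some_arg is a placeholder name rather than a real parameter of any stage, so substitute a keyword the targeted method actually accepts:
from vsscale import ArtCNN
doubled = ArtCNN().scale(
    clip,
    clip.width * 2,
    clip.height * 2,
    preprocess_some_arg=...,  # forwarded to preprocess_clip(some_arg=...)
    inference_some_arg=...,   # forwarded to inference(some_arg=...)
)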
supersample ¶
supersample(
clip: VideoNode,
rfactor: float = 2.0,
shift: tuple[TopShift, LeftShift] = (0, 0),
**kwargs: Any
) -> VideoNode
Supersample a clip by a given scaling factor.
Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.
Parameters:
-
clip
¶VideoNode
) –The source clip.
-
rfactor
¶float
, default:2.0
) –Scaling factor for supersampling.
-
shift
¶tuple[TopShift, LeftShift]
, default:(0, 0)
) –Subpixel shift (top, left) applied during scaling.
-
**kwargs
¶Any
, default:{}
) –Additional arguments forwarded to the scale function.
Raises:
-
CustomValueError
–If resulting resolution is non-positive.
Returns:
-
VideoNode
–The supersampled clip.
Source code in vskernels/abstract/base.py
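For a relative factor instead of an absolute size, supersample wraps the same scaling path (a minimal sketch; clip is assumed to be defined as in the class example above):
from vsscale import ArtCNN
# rfactor=2.0 doubles both dimensions.
supersampled = ArtCNN().supersample(clip, rfactor=2.0)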
DPIR ¶
DPIR(
strength: SupportsFloat | VideoNode = 10,
backend: BackendLike | None = None,
tiles: int | tuple[int, int] | None = None,
tilesize: int | tuple[int, int] | None = None,
overlap: int | tuple[int, int] | None = None,
*,
kernel: KernelLike = Catrom,
scaler: ScalerLike | None = None,
shifter: KernelLike | None = None,
**kwargs: Any
)
Bases: BaseDPIR
Deep Plug-and-Play Image Restoration
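Example usage (a minimal sketch mirroring the other classes' examples; the constant strength of 10 is illustrative):
from vsscale import DPIR
# Restoration happens at the source resolution; width/height default to the input's.
restored = DPIR(strength=10).scale(clip)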
Initializes the scaler with the specified parameters.
Parameters:
-
strength
¶SupportsFloat | VideoNode
, default:10
) –Threshold (8-bit scale) strength for deblocking/denoising. If a VideoNode is used, it must be in GRAY8, GRAYH, or GRAYS format, with pixel values representing the 8-bit thresholds.
-
backend
¶BackendLike | None
, default:None
) –The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
-
tiles
¶int | tuple[int, int] | None
, default:None
) –Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.
-
tilesize
¶int | tuple[int, int] | None
, default:None
) –The size of each tile when splitting the image (if tiles are enabled).
-
overlap
¶int | tuple[int, int] | None
, default:None
) –The size of overlap between tiles.
-
kernel
¶KernelLike
, default:Catrom
) –Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
-
scaler
¶ScalerLike | None
, default:None
) –Scaler used for scaling operations. Defaults to kernel.
-
shifter
¶KernelLike | None
, default:None
) –Kernel used for shifting operations. Defaults to kernel.
-
**kwargs
¶Any
, default:{}
) –Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
Classes:
-
DrunetDeblock
–DPIR model for deblocking.
-
DrunetDenoise
–DPIR model for denoising.
Methods:
-
calc_tilesize
–Reimplementation of vsmlrt.calc_tilesize helper function
-
ensure_obj
–Ensure that the input is a scaler instance, resolving it if necessary.
-
from_param
–Resolve and return a scaler type from a given input (string, type, or instance).
-
get_scale_args
–Generate the keyword arguments used for scaling.
-
implemented_funcs
–Returns a set of function names that are implemented in the current class and the parent classes.
-
inference
– -
kernel_radius
–Return the effective kernel radius for the scaler.
-
postprocess_clip
– -
preprocess_clip
– -
scale
– -
supersample
–Supersample a clip by a given scaling factor.
Attributes:
-
backend
– -
kernel
– -
kwargs
(dict[str, Any]
) –Arguments passed to the implemented funcs or internal scale function.
-
max_instances
– -
model
– -
multiple
– -
overlap
– -
overlap_h
– -
overlap_w
– -
pretty_string
(str
) –Cached property returning a user-friendly string representation.
-
scale_function
(Callable[..., VideoNode]
) –Scale function called internally when performing scaling operations.
-
scaler
– -
shifter
– -
strength
– -
tiles
– -
tilesize
–
Source code in vsscale/onnx.py
kwargs instance-attribute
¶
Arguments passed to the implemented funcs or internal scale function.
pretty_string property
¶
pretty_string: str
Cached property returning a user-friendly string representation.
Returns:
-
str
–Pretty-printed string with arguments.
scale_function instance-attribute
¶
scale_function: Callable[..., VideoNode]
Scale function called internally when performing scaling operations.
DrunetDeblock ¶
DrunetDeblock(
strength: SupportsFloat | VideoNode = 10,
backend: BackendLike | None = None,
tiles: int | tuple[int, int] | None = None,
tilesize: int | tuple[int, int] | None = None,
overlap: int | tuple[int, int] | None = None,
*,
kernel: KernelLike = Catrom,
scaler: ScalerLike | None = None,
shifter: KernelLike | None = None,
**kwargs: Any
)
Bases: BaseDPIR
DPIR model for deblocking.
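Example usage with strength given as a clip of 8-bit-scale thresholds (a hedged sketch; the BlankClip-based constant mask and the value 15 are illustrative):
import vapoursynth as vs
from vsscale import DPIR
core = vs.core
# GRAYS clip whose pixel values are interpreted as 8-bit-scale thresholds.
strength = core.std.BlankClip(clip, format=vs.GRAYS, color=15.0)
deblocked = DPIR.DrunetDeblock(strength=strength).scale(clip)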
Initializes the scaler with the specified parameters.
Parameters:
-
strength
¶SupportsFloat | VideoNode
, default:10
) –Threshold (8-bit scale) strength for deblocking/denoising. If a VideoNode is used, it must be in GRAY8, GRAYH, or GRAYS format, with pixel values representing the 8-bit thresholds.
-
backend
¶BackendLike | None
, default:None
) –The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
-
tiles
¶int | tuple[int, int] | None
, default:None
) –Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.
-
tilesize
¶int | tuple[int, int] | None
, default:None
) –The size of each tile when splitting the image (if tiles are enabled).
-
overlap
¶int | tuple[int, int] | None
, default:None
) –The size of overlap between tiles.
-
kernel
¶KernelLike
, default:Catrom
) –Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
-
scaler
¶ScalerLike | None
, default:None
) –Scaler used for scaling operations. Defaults to kernel.
-
shifter
¶KernelLike | None
, default:None
) –Kernel used for shifting operations. Defaults to kernel.
-
**kwargs
¶Any
, default:{}
) –Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
Methods:
-
calc_tilesize
–Reimplementation of vsmlrt.calc_tilesize helper function
-
ensure_obj
–Ensure that the input is a scaler instance, resolving it if necessary.
-
from_param
–Resolve and return a scaler type from a given input (string, type, or instance).
-
get_scale_args
–Generate the keyword arguments used for scaling.
-
implemented_funcs
–Returns a set of function names that are implemented in the current class and the parent classes.
-
inference
– -
kernel_radius
–Return the effective kernel radius for the scaler.
-
postprocess_clip
– -
preprocess_clip
– -
scale
– -
supersample
–Supersample a clip by a given scaling factor.
Attributes:
-
backend
– -
kernel
– -
kwargs
(dict[str, Any]
) –Arguments passed to the implemented funcs or internal scale function.
-
max_instances
– -
model
– -
multiple
– -
overlap
– -
overlap_h
– -
overlap_w
– -
pretty_string
(str
) –Cached property returning a user-friendly string representation.
-
scale_function
(Callable[..., VideoNode]
) –Scale function called internally when performing scaling operations.
-
scaler
– -
shifter
– -
strength
– -
tiles
– -
tilesize
–
Source code in vsscale/onnx.py
kwargs instance-attribute
¶
Arguments passed to the implemented funcs or internal scale function.
pretty_string property
¶
pretty_string: str
Cached property returning a user-friendly string representation.
Returns:
-
str
–Pretty-printed string with arguments.
scale_function instance-attribute
¶
scale_function: Callable[..., VideoNode]
Scale function called internally when performing scaling operations.
calc_tilesize ¶
Reimplementation of vsmlrt.calc_tilesize helper function
Source code in vsscale/onnx.py
ensure_obj classmethod
¶
ensure_obj(
scaler: str | type[Self] | Self | None = None,
/,
func_except: FuncExcept | None = None,
) -> Self
Ensure that the input is a scaler instance, resolving it if necessary.
Parameters:
-
scaler
¶str | type[Self] | Self | None
, default:None
) –Scaler identifier (string, class, or instance).
-
func_except
¶FuncExcept | None
, default:None
) –Function returned for custom error handling.
Returns:
-
Self
–Scaler instance.
Source code in vskernels/abstract/base.py
from_param classmethod
¶
from_param(
scaler: str | type[Self] | Self | None = None,
/,
func_except: FuncExcept | None = None,
) -> type[Self]
Resolve and return a scaler type from a given input (string, type, or instance).
Parameters:
-
scaler
¶str | type[Self] | Self | None
, default:None
) –Scaler identifier (string, class, or instance).
-
func_except
¶FuncExcept | None
, default:None
) –Function returned for custom error handling.
Returns:
Source code in vskernels/abstract/base.py
get_scale_args ¶
get_scale_args(
clip: VideoNode,
shift: tuple[TopShift, LeftShift] = (0, 0),
width: int | None = None,
height: int | None = None,
**kwargs: Any
) -> dict[str, Any]
Generate the keyword arguments used for scaling.
Parameters:
-
clip
¶VideoNode
) –The source clip.
-
shift
¶tuple[TopShift, LeftShift]
, default:(0, 0)
) –Subpixel shift (top, left).
-
width
¶int | None
, default:None
) –Target width.
-
height
¶int | None
, default:None
) –Target height.
-
**kwargs
¶Any
, default:{}
) –Extra parameters to merge.
Returns:
Source code in vskernels/abstract/base.py
implemented_funcs classmethod
¶
Returns a set of function names that are implemented in the current class and the parent classes.
These functions determine which keyword arguments will be extracted from the init method.
Returns:
Source code in vskernels/abstract/base.py
inference ¶
inference(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
kernel_radius ¶
kernel_radius() -> int
Return the effective kernel radius for the scaler.
Raises:
-
CustomNotImplementedError
–If no kernel radius is defined.
Returns:
-
int
–Kernel radius.
Source code in vskernels/abstract/base.py
postprocess_clip ¶
postprocess_clip(
clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode
Source code in vsscale/onnx.py
preprocess_clip ¶
preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
scale ¶
scale(
clip: VideoNode,
width: int | None = None,
height: int | None = None,
shift: tuple[float, float] = (0, 0),
*,
copy_props: bool = True,
**kwargs: Any
) -> VideoNode
Source code in vsscale/onnx.py
supersample ¶
supersample(
clip: VideoNode,
rfactor: float = 2.0,
shift: tuple[TopShift, LeftShift] = (0, 0),
**kwargs: Any
) -> VideoNode
Supersample a clip by a given scaling factor.
Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.
Parameters:
-
clip
¶VideoNode
) –The source clip.
-
rfactor
¶float
, default:2.0
) –Scaling factor for supersampling.
-
shift
¶tuple[TopShift, LeftShift]
, default:(0, 0)
) –Subpixel shift (top, left) applied during scaling.
-
**kwargs
¶Any
, default:{}
) –Additional arguments forwarded to the scale function.
Raises:
-
CustomValueError
–If resulting resolution is non-positive.
Returns:
-
VideoNode
–The supersampled clip.
Source code in vskernels/abstract/base.py
DrunetDenoise ¶
DrunetDenoise(
strength: SupportsFloat | VideoNode = 10,
backend: BackendLike | None = None,
tiles: int | tuple[int, int] | None = None,
tilesize: int | tuple[int, int] | None = None,
overlap: int | tuple[int, int] | None = None,
*,
kernel: KernelLike = Catrom,
scaler: ScalerLike | None = None,
shifter: KernelLike | None = None,
**kwargs: Any
)
Bases: BaseDPIR
DPIR model for denoising.
Initializes the scaler with the specified parameters.
Parameters:
-
strength
¶SupportsFloat | VideoNode
, default:10
) –Threshold (8-bit scale) strength for deblocking/denoising. If a VideoNode is used, it must be in GRAY8, GRAYH, or GRAYS format, with pixel values representing the 8-bit thresholds.
-
backend
¶BackendLike | None
, default:None
) –The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
-
tiles
¶int | tuple[int, int] | None
, default:None
) –Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.
-
tilesize
¶int | tuple[int, int] | None
, default:None
) –The size of each tile when splitting the image (if tiles are enabled).
-
overlap
¶int | tuple[int, int] | None
, default:None
) –The size of overlap between tiles.
-
kernel
¶KernelLike
, default:Catrom
) –Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
-
scaler
¶ScalerLike | None
, default:None
) –Scaler used for scaling operations. Defaults to kernel.
-
shifter
¶KernelLike | None
, default:None
) –Kernel used for shifting operations. Defaults to kernel.
-
**kwargs
¶Any
, default:{}
) –Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
Methods:
-
calc_tilesize
–Reimplementation of vsmlrt.calc_tilesize helper function
-
ensure_obj
–Ensure that the input is a scaler instance, resolving it if necessary.
-
from_param
–Resolve and return a scaler type from a given input (string, type, or instance).
-
get_scale_args
–Generate the keyword arguments used for scaling.
-
implemented_funcs
–Returns a set of function names that are implemented in the current class and the parent classes.
-
inference
– -
kernel_radius
–Return the effective kernel radius for the scaler.
-
postprocess_clip
– -
preprocess_clip
– -
scale
– -
supersample
–Supersample a clip by a given scaling factor.
Attributes:
-
backend
– -
kernel
– -
kwargs
(dict[str, Any]
) –Arguments passed to the implemented funcs or internal scale function.
-
max_instances
– -
model
– -
multiple
– -
overlap
– -
overlap_h
– -
overlap_w
– -
pretty_string
(str
) –Cached property returning a user-friendly string representation.
-
scale_function
(Callable[..., VideoNode]
) –Scale function called internally when performing scaling operations.
-
scaler
– -
shifter
– -
strength
– -
tiles
– -
tilesize
–
Source code in vsscale/onnx.py
kwargs instance-attribute
¶
Arguments passed to the implemented funcs or internal scale function.
pretty_string property
¶
pretty_string: str
Cached property returning a user-friendly string representation.
Returns:
-
str
–Pretty-printed string with arguments.
scale_function instance-attribute
¶
scale_function: Callable[..., VideoNode]
Scale function called internally when performing scaling operations.
calc_tilesize ¶
Reimplementation of vsmlrt.calc_tilesize helper function
Source code in vsscale/onnx.py
ensure_obj classmethod
¶
ensure_obj(
scaler: str | type[Self] | Self | None = None,
/,
func_except: FuncExcept | None = None,
) -> Self
Ensure that the input is a scaler instance, resolving it if necessary.
Parameters:
-
scaler
¶str | type[Self] | Self | None
, default:None
) –Scaler identifier (string, class, or instance).
-
func_except
¶FuncExcept | None
, default:None
) –Function returned for custom error handling.
Returns:
-
Self
–Scaler instance.
Source code in vskernels/abstract/base.py
from_param classmethod
¶
from_param(
scaler: str | type[Self] | Self | None = None,
/,
func_except: FuncExcept | None = None,
) -> type[Self]
Resolve and return a scaler type from a given input (string, type, or instance).
Parameters:
-
scaler
¶str | type[Self] | Self | None
, default:None
) –Scaler identifier (string, class, or instance).
-
func_except
¶FuncExcept | None
, default:None
) –Function returned for custom error handling.
Returns:
Source code in vskernels/abstract/base.py
get_scale_args ¶
get_scale_args(
clip: VideoNode,
shift: tuple[TopShift, LeftShift] = (0, 0),
width: int | None = None,
height: int | None = None,
**kwargs: Any
) -> dict[str, Any]
Generate the keyword arguments used for scaling.
Parameters:
-
clip
¶VideoNode
) –The source clip.
-
shift
¶tuple[TopShift, LeftShift]
, default:(0, 0)
) –Subpixel shift (top, left).
-
width
¶int | None
, default:None
) –Target width.
-
height
¶int | None
, default:None
) –Target height.
-
**kwargs
¶Any
, default:{}
) –Extra parameters to merge.
Returns:
Source code in vskernels/abstract/base.py
implemented_funcs classmethod
¶
Returns a set of function names that are implemented in the current class and the parent classes.
These functions determine which keyword arguments will be extracted from the init method.
Returns:
Source code in vskernels/abstract/base.py
inference ¶
inference(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
kernel_radius ¶
kernel_radius() -> int
Return the effective kernel radius for the scaler.
Raises:
-
CustomNotImplementedError
–If no kernel radius is defined.
Returns:
-
int
–Kernel radius.
Source code in vskernels/abstract/base.py
postprocess_clip ¶
postprocess_clip(
clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode
Source code in vsscale/onnx.py
preprocess_clip ¶
preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
scale ¶
scale(
clip: VideoNode,
width: int | None = None,
height: int | None = None,
shift: tuple[float, float] = (0, 0),
*,
copy_props: bool = True,
**kwargs: Any
) -> VideoNode
Source code in vsscale/onnx.py
supersample ¶
supersample(
clip: VideoNode,
rfactor: float = 2.0,
shift: tuple[TopShift, LeftShift] = (0, 0),
**kwargs: Any
) -> VideoNode
Supersample a clip by a given scaling factor.
Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.
Parameters:
-
clip
¶VideoNode
) –The source clip.
-
rfactor
¶float
, default:2.0
) –Scaling factor for supersampling.
-
shift
¶tuple[TopShift, LeftShift]
, default:(0, 0)
) –Subpixel shift (top, left) applied during scaling.
-
**kwargs
¶Any
, default:{}
) –Additional arguments forwarded to the scale function.
Raises:
-
CustomValueError
–If resulting resolution is non-positive.
Returns:
-
VideoNode
–The supersampled clip.
Source code in vskernels/abstract/base.py
calc_tilesize ¶
Reimplementation of vsmlrt.calc_tilesize helper function
Source code in vsscale/onnx.py
ensure_obj classmethod
¶
ensure_obj(
scaler: str | type[Self] | Self | None = None,
/,
func_except: FuncExcept | None = None,
) -> Self
Ensure that the input is a scaler instance, resolving it if necessary.
Parameters:
-
scaler
¶str | type[Self] | Self | None
, default:None
) –Scaler identifier (string, class, or instance).
-
func_except
¶FuncExcept | None
, default:None
) –Function returned for custom error handling.
Returns:
-
Self
–Scaler instance.
Source code in vskernels/abstract/base.py
from_param classmethod
¶
from_param(
scaler: str | type[Self] | Self | None = None,
/,
func_except: FuncExcept | None = None,
) -> type[Self]
Resolve and return a scaler type from a given input (string, type, or instance).
Parameters:
-
scaler
¶str | type[Self] | Self | None
, default:None
) –Scaler identifier (string, class, or instance).
-
func_except
¶FuncExcept | None
, default:None
) –Function returned for custom error handling.
Returns:
Source code in vskernels/abstract/base.py
get_scale_args ¶
get_scale_args(
clip: VideoNode,
shift: tuple[TopShift, LeftShift] = (0, 0),
width: int | None = None,
height: int | None = None,
**kwargs: Any
) -> dict[str, Any]
Generate the keyword arguments used for scaling.
Parameters:
-
clip
¶VideoNode
) –The source clip.
-
shift
¶tuple[TopShift, LeftShift]
, default:(0, 0)
) –Subpixel shift (top, left).
-
width
¶int | None
, default:None
) –Target width.
-
height
¶int | None
, default:None
) –Target height.
-
**kwargs
¶Any
, default:{}
) –Extra parameters to merge.
Returns:
Source code in vskernels/abstract/base.py
implemented_funcs classmethod
¶
Returns a set of function names that are implemented in the current class and the parent classes.
These functions determine which keyword arguments will be extracted from the init method.
Returns:
Source code in vskernels/abstract/base.py
inference ¶
inference(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
kernel_radius ¶
kernel_radius() -> int
Return the effective kernel radius for the scaler.
Raises:
-
CustomNotImplementedError
–If no kernel radius is defined.
Returns:
-
int
–Kernel radius.
Source code in vskernels/abstract/base.py
postprocess_clip ¶
postprocess_clip(
clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode
Source code in vsscale/onnx.py
preprocess_clip ¶
preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
scale ¶
scale(
clip: VideoNode,
width: int | None = None,
height: int | None = None,
shift: tuple[float, float] = (0, 0),
*,
copy_props: bool = True,
**kwargs: Any
) -> VideoNode
Source code in vsscale/onnx.py
supersample ¶
supersample(
clip: VideoNode,
rfactor: float = 2.0,
shift: tuple[TopShift, LeftShift] = (0, 0),
**kwargs: Any
) -> VideoNode
Supersample a clip by a given scaling factor.
Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.
Parameters:
-
clip
¶VideoNode
) –The source clip.
-
rfactor
¶float
, default:2.0
) –Scaling factor for supersampling.
-
shift
¶tuple[TopShift, LeftShift]
, default:(0, 0)
) –Subpixel shift (top, left) applied during scaling.
-
**kwargs
¶Any
, default:{}
) –Additional arguments forwarded to the scale function.
Raises:
-
CustomValueError
–If resulting resolution is non-positive.
Returns:
-
VideoNode
–The supersampled clip.
Source code in vskernels/abstract/base.py
GenericOnnxScaler ¶
GenericOnnxScaler(
model: SPathLike | None = None,
backend: BackendLike | None = None,
tiles: int | tuple[int, int] | None = None,
tilesize: int | tuple[int, int] | None = None,
overlap: int | tuple[int, int] | None = None,
multiple: int = 1,
max_instances: int = 2,
*,
kernel: KernelLike = Catrom,
scaler: ScalerLike | None = None,
shifter: KernelLike | None = None,
**kwargs: Any
)
Bases: BaseOnnxScaler
Generic scaler class for an ONNX model.
Example usage:
from vsscale import GenericOnnxScaler
scaled = GenericOnnxScaler("path/to/model.onnx").scale(clip, ...)
# For Windows paths:
scaled = GenericOnnxScaler(r"path\to\model.onnx").scale(clip, ...)
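Continuing the example above, tiling can be enabled to reduce VRAM usage on large frames (a sketch; the 2x2 grid, 16-pixel overlap, and the assumption of a 2x model are illustrative):
scaled = GenericOnnxScaler(
    "path/to/model.onnx",
    tiles=(2, 2),  # split the frame into a 2x2 grid of tiles
    overlap=16,    # overlap between neighbouring tiles, in pixels
).scale(clip, clip.width * 2, clip.height * 2)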
Initializes the scaler with the specified parameters.
Parameters:
-
model
¶SPathLike | None
, default:None
) –Path to the ONNX model file.
-
backend
¶BackendLike | None
, default:None
) –The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
-
tiles
¶int | tuple[int, int] | None
, default:None
) –Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.
-
tilesize
¶int | tuple[int, int] | None
, default:None
) –The size of each tile when splitting the image (if tiles are enabled).
-
overlap
¶int | tuple[int, int] | None
, default:None
) –The size of overlap between tiles.
-
multiple
¶int
, default:1
) –Multiple of the tiles.
-
max_instances
¶int
, default:2
) –Maximum instances to spawn when scaling a variable resolution clip.
-
kernel
¶KernelLike
, default:Catrom
) –Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
-
scaler
¶ScalerLike | None
, default:None
) –Scaler used for scaling operations. Defaults to kernel.
-
shifter
¶KernelLike | None
, default:None
) –Kernel used for shifting operations. Defaults to kernel.
-
**kwargs
¶Any
, default:{}
) –Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
Methods:
-
calc_tilesize
–Reimplementation of vsmlrt.calc_tilesize helper function
-
ensure_obj
–Ensure that the input is a scaler instance, resolving it if necessary.
-
from_param
–Resolve and return a scaler type from a given input (string, type, or instance).
-
get_scale_args
–Generate the keyword arguments used for scaling.
-
implemented_funcs
–Returns a set of function names that are implemented in the current class and the parent classes.
-
inference
–Runs inference on the given video clip using the configured model and backend.
-
kernel_radius
–Return the effective kernel radius for the scaler.
-
postprocess_clip
–Handles postprocessing of the model's output after inference.
-
preprocess_clip
–Performs preprocessing on the clip prior to inference.
-
scale
–Scale the given clip using the ONNX model.
-
supersample
–Supersample a clip by a given scaling factor.
Attributes:
-
backend
– -
kernel
– -
kwargs
(dict[str, Any]
) –Arguments passed to the implemented funcs or internal scale function.
-
max_instances
– -
model
– -
multiple
– -
overlap
– -
overlap_h
– -
overlap_w
– -
pretty_string
(str
) –Cached property returning a user-friendly string representation.
-
scale_function
(Callable[..., VideoNode]
) –Scale function called internally when performing scaling operations.
-
scaler
– -
shifter
– -
tiles
– -
tilesize
–
Source code in vsscale/onnx.py
kwargs instance-attribute
¶
Arguments passed to the implemented funcs or internal scale function.
pretty_string property
¶
pretty_string: str
Cached property returning a user-friendly string representation.
Returns:
-
str
–Pretty-printed string with arguments.
scale_function instance-attribute
¶
scale_function: Callable[..., VideoNode]
Scale function called internally when performing scaling operations.
calc_tilesize ¶
Reimplementation of vsmlrt.calc_tilesize helper function
Source code in vsscale/onnx.py
ensure_obj classmethod
¶
ensure_obj(
scaler: str | type[Self] | Self | None = None,
/,
func_except: FuncExcept | None = None,
) -> Self
Ensure that the input is a scaler instance, resolving it if necessary.
Parameters:
-
scaler
¶str | type[Self] | Self | None
, default:None
) –Scaler identifier (string, class, or instance).
-
func_except
¶FuncExcept | None
, default:None
) –Function returned for custom error handling.
Returns:
-
Self
–Scaler instance.
Source code in vskernels/abstract/base.py
from_param classmethod
¶
from_param(
scaler: str | type[Self] | Self | None = None,
/,
func_except: FuncExcept | None = None,
) -> type[Self]
Resolve and return a scaler type from a given input (string, type, or instance).
Parameters:
-
scaler
¶str | type[Self] | Self | None
, default:None
) –Scaler identifier (string, class, or instance).
-
func_except
¶FuncExcept | None
, default:None
) –Function returned for custom error handling.
Returns:
Source code in vskernels/abstract/base.py
get_scale_args ¶
get_scale_args(
clip: VideoNode,
shift: tuple[TopShift, LeftShift] = (0, 0),
width: int | None = None,
height: int | None = None,
**kwargs: Any
) -> dict[str, Any]
Generate the keyword arguments used for scaling.
Parameters:
-
clip
¶VideoNode
) –The source clip.
-
shift
¶tuple[TopShift, LeftShift]
, default:(0, 0)
) –Subpixel shift (top, left).
-
width
¶int | None
, default:None
) –Target width.
-
height
¶int | None
, default:None
) –Target height.
-
**kwargs
¶Any
, default:{}
) –Extra parameters to merge.
Returns:
Source code in vskernels/abstract/base.py
implemented_funcs classmethod
¶
Returns a set of function names that are implemented in the current class and the parent classes.
These functions determine which keyword arguments will be extracted from the init method.
Returns:
Source code in vskernels/abstract/base.py
inference ¶
inference(clip: VideoNode, **kwargs: Any) -> VideoNode
Runs inference on the given video clip using the configured model and backend.
Source code in vsscale/onnx.py
kernel_radius ¶
kernel_radius() -> int
Return the effective kernel radius for the scaler.
Raises:
-
CustomNotImplementedError
–If no kernel radius is defined.
Returns:
-
int
–Kernel radius.
Source code in vskernels/abstract/base.py
postprocess_clip ¶
postprocess_clip(
clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode
Handles postprocessing of the model's output after inference.
Source code in vsscale/onnx.py
preprocess_clip ¶
preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode
Performs preprocessing on the clip prior to inference.
Source code in vsscale/onnx.py
scale ¶
scale(
clip: VideoNode,
width: int | None = None,
height: int | None = None,
shift: tuple[float, float] = (0, 0),
**kwargs: Any
) -> VideoNode
Scale the given clip using the ONNX model.
Parameters:
-
clip
¶VideoNode
) –The input clip to be scaled.
-
width
¶int | None
, default:None
) –The target width for scaling. If None, the width of the input clip will be used.
-
height
¶int | None
, default:None
) –The target height for scaling. If None, the height of the input clip will be used.
-
shift
¶tuple[float, float]
, default:(0, 0)
) –A tuple representing the shift values for the x and y axes.
-
**kwargs
¶Any
, default:{}
) –Additional arguments to be passed to the
preprocess_clip
,postprocess_clip
,inference
, and_final_scale
methods. Use the prefixpreprocess_
orpostprocess_
to pass an argument to the respective method. Use the prefixinference_
to pass an argument to the inference method.
Returns:
-
VideoNode
–The scaled clip.
Source code in vsscale/onnx.py
supersample ¶
supersample(
clip: VideoNode,
rfactor: float = 2.0,
shift: tuple[TopShift, LeftShift] = (0, 0),
**kwargs: Any
) -> VideoNode
Supersample a clip by a given scaling factor.
Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.
Parameters:
-
clip
¶VideoNode
) –The source clip.
-
rfactor
¶float
, default:2.0
) –Scaling factor for supersampling.
-
shift
¶tuple[TopShift, LeftShift]
, default:(0, 0)
) –Subpixel shift (top, left) applied during scaling.
-
**kwargs
¶Any
, default:{}
) –Additional arguments forwarded to the scale function.
Raises:
-
CustomValueError
–If resulting resolution is non-positive.
Returns:
-
VideoNode
–The supersampled clip.
Source code in vskernels/abstract/base.py
Waifu2x ¶
Waifu2x(
scale: Literal[1, 2, 4] = 2,
noise: Literal[-1, 0, 1, 2, 3] = -1,
backend: BackendLike | None = None,
tiles: int | tuple[int, int] | None = None,
tilesize: int | tuple[int, int] | None = None,
overlap: int | tuple[int, int] | None = None,
max_instances: int = 2,
*,
kernel: KernelLike = Catrom,
scaler: ScalerLike | None = None,
shifter: KernelLike | None = None,
**kwargs: Any
)
Bases: _Waifu2xCunet
Well known Image Super-Resolution for Anime-Style Art.
Defaults to Cunet.
Example usage:
from vsscale import Waifu2x
doubled = Waifu2x().scale(clip, clip.width * 2, clip.height * 2)
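The upscaling factor and noise reduction level can also be set explicitly (scale=2 with noise=1 is just one illustrative combination):
# 2x upscale with medium noise reduction.
doubled_dn = Waifu2x(scale=2, noise=1).scale(clip, clip.width * 2, clip.height * 2)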
Initializes the scaler with the specified parameters.
Parameters:
-
scale
¶Literal[1, 2, 4]
, default:2
) –Upscaling factor. 1 = no upscaling, 2 = 2x, 4 = 4x.
-
noise
¶Literal[-1, 0, 1, 2, 3]
, default:-1
) –Noise reduction level. -1 = none, 0 = low, 1 = medium, 2 = high, 3 = highest.
-
backend
¶BackendLike | None
, default:None
) –The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
-
tiles
¶int | tuple[int, int] | None
, default:None
) –Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.
-
tilesize
¶int | tuple[int, int] | None
, default:None
) –The size of each tile when splitting the image (if tiles are enabled).
-
overlap
¶int | tuple[int, int] | None
, default:None
) –The size of overlap between tiles.
-
max_instances
¶int
, default:2
) –Maximum instances to spawn when scaling a variable resolution clip.
-
kernel
¶KernelLike
, default:Catrom
) –Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
-
scaler
¶ScalerLike | None
, default:None
) –Scaler used for scaling operations. Defaults to kernel.
-
shifter
¶KernelLike | None
, default:None
) –Kernel used for shifting operations. Defaults to kernel.
-
**kwargs
¶Any
, default:{}
) –Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
Classes:
-
AnimeStyleArt
–Waifu2x model for anime-style art.
-
AnimeStyleArtRGB
–RGB version of the anime-style model.
-
Cunet
–CUNet (Compact U-Net) model for anime art.
-
Photo
–Waifu2x model trained on real-world photographic images.
-
SwinUnetArt
–Swin-Unet-based model trained on anime-style images.
-
SwinUnetArtScan
–Swin-Unet model trained on anime scans.
-
SwinUnetPhoto
–Swin-Unet model trained on photographic content.
-
SwinUnetPhotoV2
–Improved Swin-Unet model for photos (v2).
-
UpConv7AnimeStyleArt
–UpConv7 model variant optimized for anime-style images.
-
UpConv7Photo
–UpConv7 model variant optimized for photographic images.
-
UpResNet10
–UpResNet10 model offering a balance of speed and quality.
Methods:
-
calc_tilesize
–Reimplementation of vsmlrt.calc_tilesize helper function
-
ensure_obj
–Ensure that the input is a scaler instance, resolving it if necessary.
-
from_param
–Resolve and return a scaler type from a given input (string, type, or instance).
-
get_scale_args
–Generate the keyword arguments used for scaling.
-
implemented_funcs
–Returns a set of function names that are implemented in the current class and the parent classes.
-
inference
– -
kernel_radius
–Return the effective kernel radius for the scaler.
-
postprocess_clip
– -
preprocess_clip
– -
scale
–Scale the given clip using the ONNX model.
-
supersample
–Supersample a clip by a given scaling factor.
Attributes:
-
backend
– -
kernel
– -
kwargs
(dict[str, Any]
) –Arguments passed to the implemented funcs or internal scale function.
-
max_instances
– -
model
– -
multiple
– -
noise
(Literal[-1, 0, 1, 2, 3]
) –Noise reduction level
-
overlap
– -
overlap_h
– -
overlap_w
– -
pretty_string
(str
) –Cached property returning a user-friendly string representation.
-
scale_function
(Callable[..., VideoNode]
) –Scale function called internally when performing scaling operations.
-
scale_w2x
(Literal[1, 2, 4]
) –Upscaling factor.
-
scaler
– -
shifter
– -
tiles
– -
tilesize
–
Source code in vsscale/onnx.py
kwargs instance-attribute
¶
Arguments passed to the implemented funcs or internal scale function.
pretty_string property
¶
pretty_string: str
Cached property returning a user-friendly string representation.
Returns:
-
str
–Pretty-printed string with arguments.
scale_function instance-attribute
¶
scale_function: Callable[..., VideoNode]
Scale function called internally when performing scaling operations.
AnimeStyleArt ¶
AnimeStyleArt(
scale: Literal[1, 2, 4] = 2,
noise: Literal[-1, 0, 1, 2, 3] = -1,
backend: BackendLike | None = None,
tiles: int | tuple[int, int] | None = None,
tilesize: int | tuple[int, int] | None = None,
overlap: int | tuple[int, int] | None = None,
max_instances: int = 2,
*,
kernel: KernelLike = Catrom,
scaler: ScalerLike | None = None,
shifter: KernelLike | None = None,
**kwargs: Any
)
Bases: BaseWaifu2x
Waifu2x model for anime-style art.
Example usage:
from vsscale import Waifu2x
doubled = Waifu2x.AnimeStyleArt().scale(clip, clip.width * 2, clip.height * 2)
Initializes the scaler with the specified parameters.
Parameters:
-
scale
¶Literal[1, 2, 4]
, default:2
) –Upscaling factor. 1 = no upscaling, 2 = 2x, 4 = 4x.
-
noise
¶Literal[-1, 0, 1, 2, 3]
, default:-1
) –Noise reduction level. -1 = none, 0 = low, 1 = medium, 2 = high, 3 = highest.
-
backend
¶BackendLike | None
, default:None
) –The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
-
tiles
¶int | tuple[int, int] | None
, default:None
) –Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.
-
tilesize
¶int | tuple[int, int] | None
, default:None
) –The size of each tile when splitting the image (if tiles are enabled).
-
overlap
¶int | tuple[int, int] | None
, default:None
) –The size of overlap between tiles.
-
max_instances
¶int
, default:2
) –Maximum instances to spawn when scaling a variable resolution clip.
-
kernel
¶KernelLike
, default:Catrom
) –Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
-
scaler
¶ScalerLike | None
, default:None
) –Scaler used for scaling operations. Defaults to kernel.
-
shifter
¶KernelLike | None
, default:None
) –Kernel used for shifting operations. Defaults to kernel.
-
**kwargs
¶Any
, default:{}
) –Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
Methods:
-
calc_tilesize
–Reimplementation of vsmlrt.calc_tilesize helper function
-
ensure_obj
–Ensure that the input is a scaler instance, resolving it if necessary.
-
from_param
–Resolve and return a scaler type from a given input (string, type, or instance).
-
get_scale_args
–Generate the keyword arguments used for scaling.
-
implemented_funcs
–Returns a set of function names that are implemented in the current class and the parent classes.
-
inference
– -
kernel_radius
–Return the effective kernel radius for the scaler.
-
postprocess_clip
–Handles postprocessing of the model's output after inference.
-
preprocess_clip
–Performs preprocessing on the clip prior to inference.
-
scale
–Scale the given clip using the ONNX model.
-
supersample
–Supersample a clip by a given scaling factor.
Attributes:
-
backend
– -
kernel
– -
kwargs
(dict[str, Any]
) –Arguments passed to the implemented funcs or internal scale function.
-
max_instances
– -
model
– -
multiple
– -
noise
(Literal[-1, 0, 1, 2, 3]
) –Noise reduction level
-
overlap
– -
overlap_h
– -
overlap_w
– -
pretty_string
(str
) –Cached property returning a user-friendly string representation.
-
scale_function
(Callable[..., VideoNode]
) –Scale function called internally when performing scaling operations.
-
scale_w2x
(Literal[1, 2, 4]
) –Upscaling factor.
-
scaler
– -
shifter
– -
tiles
– -
tilesize
–
Source code in vsscale/onnx.py
kwargs instance-attribute
¶
Arguments passed to the implemented funcs or internal scale function.
pretty_string property
¶
pretty_string: str
Cached property returning a user-friendly string representation.
Returns:
-
str
–Pretty-printed string with arguments.
scale_function instance-attribute
¶
scale_function: Callable[..., VideoNode]
Scale function called internally when performing scaling operations.
calc_tilesize ¶
Reimplementation of vsmlrt.calc_tilesize helper function
Source code in vsscale/onnx.py
ensure_obj classmethod
¶
ensure_obj(
scaler: str | type[Self] | Self | None = None,
/,
func_except: FuncExcept | None = None,
) -> Self
Ensure that the input is a scaler instance, resolving it if necessary.
Parameters:
-
scaler
¶str | type[Self] | Self | None
, default:None
) –Scaler identifier (string, class, or instance).
-
func_except
¶FuncExcept | None
, default:None
) –Function returned for custom error handling.
Returns:
-
Self
–Scaler instance.
Source code in vskernels/abstract/base.py
from_param classmethod
¶
from_param(
scaler: str | type[Self] | Self | None = None,
/,
func_except: FuncExcept | None = None,
) -> type[Self]
Resolve and return a scaler type from a given input (string, type, or instance).
Parameters:
-
scaler
¶str | type[Self] | Self | None
, default:None
) –Scaler identifier (string, class, or instance).
-
func_except
¶FuncExcept | None
, default:None
) –Function returned for custom error handling.
Returns:
Source code in vskernels/abstract/base.py
get_scale_args ¶
get_scale_args(
clip: VideoNode,
shift: tuple[TopShift, LeftShift] = (0, 0),
width: int | None = None,
height: int | None = None,
**kwargs: Any
) -> dict[str, Any]
Generate the keyword arguments used for scaling.
Parameters:
-
clip
¶VideoNode
) –The source clip.
-
shift
¶tuple[TopShift, LeftShift]
, default:(0, 0)
) –Subpixel shift (top, left).
-
width
¶int | None
, default:None
) –Target width.
-
height
¶int | None
, default:None
) –Target height.
-
**kwargs
¶Any
, default:{}
) –Extra parameters to merge.
Returns:
Source code in vskernels/abstract/base.py
implemented_funcs classmethod
¶
Returns a set of function names that are implemented in the current class and the parent classes.
These functions determine which keyword arguments will be extracted from the init method.
Returns:
Source code in vskernels/abstract/base.py
inference ¶
inference(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
kernel_radius ¶
kernel_radius() -> int
Return the effective kernel radius for the scaler.
Raises:
-
CustomNotImplementedError
–If no kernel radius is defined.
Returns:
-
int
–Kernel radius.
Source code in vskernels/abstract/base.py
postprocess_clip ¶
postprocess_clip(
clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode
Handles postprocessing of the model's output after inference.
Source code in vsscale/onnx.py
preprocess_clip ¶
preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode
Performs preprocessing on the clip prior to inference.
Source code in vsscale/onnx.py
scale ¶
scale(
clip: VideoNode,
width: int | None = None,
height: int | None = None,
shift: tuple[float, float] = (0, 0),
**kwargs: Any
) -> VideoNode
Scale the given clip using the ONNX model.
Parameters:
-
clip
¶VideoNode
) –The input clip to be scaled.
-
width
¶int | None
, default:None
) –The target width for scaling. If None, the width of the input clip will be used.
-
height
¶int | None
, default:None
) –The target height for scaling. If None, the height of the input clip will be used.
-
shift
¶tuple[float, float]
, default:(0, 0)
) –A tuple representing the shift values for the x and y axes.
-
**kwargs
¶Any
, default:{}
) –Additional arguments to be passed to the
preprocess_clip
,postprocess_clip
,inference
, and_final_scale
methods. Use the prefixpreprocess_
orpostprocess_
to pass an argument to the respective method. Use the prefixinference_
to pass an argument to the inference method.
Returns:
-
VideoNode
–The scaled clip.
Source code in vsscale/onnx.py
supersample ¶
supersample(
clip: VideoNode,
rfactor: float = 2.0,
shift: tuple[TopShift, LeftShift] = (0, 0),
**kwargs: Any
) -> VideoNode
Supersample a clip by a given scaling factor.
Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.
Parameters:
-
clip
¶VideoNode
) –The source clip.
-
rfactor
¶float
, default:2.0
) –Scaling factor for supersampling.
-
shift
¶tuple[TopShift, LeftShift]
, default:(0, 0)
) –Subpixel shift (top, left) applied during scaling.
-
**kwargs
¶Any
, default:{}
) –Additional arguments forwarded to the scale function.
Raises:
-
CustomValueError
–If resulting resolution is non-positive.
Returns:
-
VideoNode
–The supersampled clip.
Source code in vskernels/abstract/base.py
AnimeStyleArtRGB ¶
AnimeStyleArtRGB(
scale: Literal[1, 2, 4] = 2,
noise: Literal[-1, 0, 1, 2, 3] = -1,
backend: BackendLike | None = None,
tiles: int | tuple[int, int] | None = None,
tilesize: int | tuple[int, int] | None = None,
overlap: int | tuple[int, int] | None = None,
max_instances: int = 2,
*,
kernel: KernelLike = Catrom,
scaler: ScalerLike | None = None,
shifter: KernelLike | None = None,
**kwargs: Any
)
Bases: BaseWaifu2x
, BaseOnnxScalerRGB
RGB version of the anime-style model.
Example usage:
from vsscale import Waifu2x
doubled = Waifu2x.AnimeStyleArtRGB().scale(clip, clip.width * 2, clip.height * 2)
Initializes the scaler with the specified parameters.
Parameters:
-
scale
¶Literal[1, 2, 4]
, default:2
) –Upscaling factor. 1 = no upscaling, 2 = 2x, 4 = 4x.
-
noise
¶Literal[-1, 0, 1, 2, 3]
, default:-1
) –Noise reduction level. -1 = none, 0 = low, 1 = medium, 2 = high, 3 = highest.
-
backend
¶BackendLike | None
, default:None
) –The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
-
tiles
¶int | tuple[int, int] | None
, default:None
) –Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.
-
tilesize
¶int | tuple[int, int] | None
, default:None
) –The size of each tile when splitting the image (if tiles are enabled).
-
overlap
¶int | tuple[int, int] | None
, default:None
) –The size of overlap between tiles.
-
max_instances
¶int
, default:2
) –Maximum instances to spawn when scaling a variable resolution clip.
-
kernel
¶KernelLike
, default:Catrom
) –Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
-
scaler
¶ScalerLike | None
, default:None
) –Scaler used for scaling operations. Defaults to kernel.
-
shifter
¶KernelLike | None
, default:None
) –Kernel used for shifting operations. Defaults to kernel.
-
**kwargs
¶Any
, default:{}
) –Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
Methods:
-
calc_tilesize
–Reimplementation of vsmlrt.calc_tilesize helper function
-
ensure_obj
–Ensure that the input is a scaler instance, resolving it if necessary.
-
from_param
–Resolve and return a scaler type from a given input (string, type, or instance).
-
get_scale_args
–Generate the keyword arguments used for scaling.
-
implemented_funcs
–Returns a set of function names that are implemented in the current class and the parent classes.
-
inference
– -
kernel_radius
–Return the effective kernel radius for the scaler.
-
postprocess_clip
– -
preprocess_clip
– -
scale
–Scale the given clip using the ONNX model.
-
supersample
–Supersample a clip by a given scaling factor.
Attributes:
-
backend
– -
kernel
– -
kwargs
(dict[str, Any]
) –Arguments passed to the implemented funcs or internal scale function.
-
max_instances
– -
model
– -
multiple
– -
noise
(Literal[-1, 0, 1, 2, 3]
) –Noise reduction level
-
overlap
– -
overlap_h
– -
overlap_w
– -
pretty_string
(str
) –Cached property returning a user-friendly string representation.
-
scale_function
(Callable[..., VideoNode]
) –Scale function called internally when performing scaling operations.
-
scale_w2x
(Literal[1, 2, 4]
) –Upscaling factor.
-
scaler
– -
shifter
– -
tiles
– -
tilesize
–
Source code in vsscale/onnx.py
kwargs instance-attribute
¶
Arguments passed to the implemented funcs or internal scale function.
pretty_string property
¶
pretty_string: str
Cached property returning a user-friendly string representation.
Returns:
-
str
–Pretty-printed string with arguments.
scale_function instance-attribute
¶
scale_function: Callable[..., VideoNode]
Scale function called internally when performing scaling operations.
calc_tilesize ¶
Reimplementation of vsmlrt.calc_tilesize helper function
Source code in vsscale/onnx.py
ensure_obj classmethod
¶
ensure_obj(
scaler: str | type[Self] | Self | None = None,
/,
func_except: FuncExcept | None = None,
) -> Self
Ensure that the input is a scaler instance, resolving it if necessary.
Parameters:
-
scaler
¶str | type[Self] | Self | None
, default:None
) –Scaler identifier (string, class, or instance).
-
func_except
¶FuncExcept | None
, default:None
) –Function returned for custom error handling.
Returns:
-
Self
–Scaler instance.
Source code in vskernels/abstract/base.py
from_param classmethod
¶
from_param(
scaler: str | type[Self] | Self | None = None,
/,
func_except: FuncExcept | None = None,
) -> type[Self]
Resolve and return a scaler type from a given input (string, type, or instance).
Parameters:
-
scaler
¶str | type[Self] | Self | None
, default:None
) –Scaler identifier (string, class, or instance).
-
func_except
¶FuncExcept | None
, default:None
) –Function returned for custom error handling.
Returns:
Source code in vskernels/abstract/base.py
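As a quick illustration of the two resolvers documented above, the following sketch shows from_param yielding a type and ensure_obj yielding an instance. The behaviour with None (falling back to the class being called) is an assumption based on the signatures above, not something guaranteed by this page.
from vsscale import Waifu2x
# Resolve a scaler *type* from a class, an instance, or None.
w2x_type = Waifu2x.AnimeStyleArtRGB.from_param(Waifu2x.AnimeStyleArtRGB)
# Resolve (or construct) a scaler *instance*.
instance = Waifu2x.AnimeStyleArtRGB.ensure_obj(Waifu2x.AnimeStyleArtRGB(noise=1))  # an existing instance is accepted directly
default_instance = Waifu2x.AnimeStyleArtRGB.ensure_obj(None)                       # assumed to fall back to a default-constructed instance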
get_scale_args ¶
get_scale_args(
clip: VideoNode,
shift: tuple[TopShift, LeftShift] = (0, 0),
width: int | None = None,
height: int | None = None,
**kwargs: Any
) -> dict[str, Any]
Generate the keyword arguments used for scaling.
Parameters:
-
clip
¶VideoNode
) –The source clip.
-
shift
¶tuple[TopShift, LeftShift]
, default:(0, 0)
) –Subpixel shift (top, left).
-
width
¶int | None
, default:None
) –Target width.
-
height
¶int | None
, default:None
) –Target height.
-
**kwargs
¶Any
, default:{}
) –Extra parameters to merge.
Returns:
Source code in vskernels/abstract/base.py
implemented_funcs classmethod
¶
Returns a set of function names that are implemented in the current class and the parent classes.
These functions determine which keyword arguments will be extracted from the init method.
Returns:
Source code in vskernels/abstract/base.py
inference ¶
inference(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
kernel_radius ¶
kernel_radius() -> int
Return the effective kernel radius for the scaler.
Raises:
-
CustomNotImplementedError
–If no kernel radius is defined.
Returns:
-
int
–Kernel radius.
Source code in vskernels/abstract/base.py
postprocess_clip ¶
postprocess_clip(
clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode
Source code in vsscale/onnx.py
preprocess_clip ¶
preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
scale ¶
scale(
clip: VideoNode,
width: int | None = None,
height: int | None = None,
shift: tuple[float, float] = (0, 0),
**kwargs: Any
) -> VideoNode
Scale the given clip using the ONNX model.
Parameters:
-
clip
¶VideoNode
) –The input clip to be scaled.
-
width
¶int | None
, default:None
) –The target width for scaling. If None, the width of the input clip will be used.
-
height
¶int | None
, default:None
) –The target height for scaling. If None, the height of the input clip will be used.
-
shift
¶tuple[float, float]
, default:(0, 0)
) –A tuple representing the shift values for the x and y axes.
-
**kwargs
¶Any
, default:{}
) –Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _final_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method. Use the prefix inference_ to pass an argument to the inference method. A prefixed-argument example is shown below this method's entry.
Returns:
-
VideoNode
–The scaled clip.
Source code in vsscale/onnx.py
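A minimal sketch of the prefix convention described above. The keyword names used here are purely illustrative placeholders, not real options of this model:
from vsscale import Waifu2x
doubled = Waifu2x.AnimeStyleArtRGB().scale(
    clip,
    clip.width * 2,
    clip.height * 2,
    inference_some_option=True,      # hypothetical keyword forwarded to inference()
    postprocess_some_option=False,   # hypothetical keyword forwarded to postprocess_clip()
)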
supersample ¶
supersample(
clip: VideoNode,
rfactor: float = 2.0,
shift: tuple[TopShift, LeftShift] = (0, 0),
**kwargs: Any
) -> VideoNode
Supersample a clip by a given scaling factor.
Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.
Parameters:
-
clip
¶VideoNode
) –The source clip.
-
rfactor
¶float
, default:2.0
) –Scaling factor for supersampling.
-
shift
¶tuple[TopShift, LeftShift]
, default:(0, 0)
) –Subpixel shift (top, left) applied during scaling.
-
**kwargs
¶Any
, default:{}
) –Additional arguments forwarded to the scale function.
Raises:
-
CustomValueError
–If resulting resolution is non-positive.
Returns:
-
VideoNode
–The supersampled clip.
Source code in vskernels/abstract/base.py
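A brief usage sketch, assuming rfactor is applied to both dimensions (the usual convention for this helper):
from vsscale import Waifu2x
upscaler = Waifu2x.AnimeStyleArtRGB()
# Roughly equivalent to upscaler.scale(clip, clip.width * 2, clip.height * 2).
doubled = upscaler.supersample(clip, rfactor=2)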
Cunet ¶
Cunet(
scale: Literal[1, 2, 4] = 2,
noise: Literal[-1, 0, 1, 2, 3] = -1,
backend: BackendLike | None = None,
tiles: int | tuple[int, int] | None = None,
tilesize: int | tuple[int, int] | None = None,
overlap: int | tuple[int, int] | None = None,
max_instances: int = 2,
*,
kernel: KernelLike = Catrom,
scaler: ScalerLike | None = None,
shifter: KernelLike | None = None,
**kwargs: Any
)
Bases: _Waifu2xCunet
CUNet (Compact U-Net) model for anime art.
Example usage:
from vsscale import Waifu2x
doubled = Waifu2x.Cunet().scale(clip, clip.width * 2, clip.height * 2)
Initializes the scaler with the specified parameters.
Parameters:
-
scale
¶Literal[1, 2, 4]
, default:2
) –Upscaling factor. 1 = no upscaling, 2 = 2x, 4 = 4x.
-
noise
¶Literal[-1, 0, 1, 2, 3]
, default:-1
) –Noise reduction level. -1 = none, 0 = low, 1 = medium, 2 = high, 3 = highest.
-
backend
¶BackendLike | None
, default:None
) –The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
-
tiles
¶int | tuple[int, int] | None
, default:None
) –Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.
-
tilesize
¶int | tuple[int, int] | None
, default:None
) –The size of each tile when splitting the image (if tiles are enabled).
-
overlap
¶int | tuple[int, int] | None
, default:None
) –The size of overlap between tiles.
-
max_instances
¶int
, default:2
) –Maximum instances to spawn when scaling a variable resolution clip.
-
kernel
¶KernelLike
, default:Catrom
) –Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
-
scaler
¶ScalerLike | None
, default:None
) –Scaler used for scaling operations. Defaults to kernel.
-
shifter
¶KernelLike | None
, default:None
) –Kernel used for shifting operations. Defaults to kernel.
-
**kwargs
¶Any
, default:{}
) –Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
Methods:
-
calc_tilesize
–Reimplementation of vsmlrt.calc_tilesize helper function
-
ensure_obj
–Ensure that the input is a scaler instance, resolving it if necessary.
-
from_param
–Resolve and return a scaler type from a given input (string, type, or instance).
-
get_scale_args
–Generate the keyword arguments used for scaling.
-
implemented_funcs
–Returns a set of function names that are implemented in the current class and the parent classes.
-
inference
– -
kernel_radius
–Return the effective kernel radius for the scaler.
-
postprocess_clip
– -
preprocess_clip
– -
scale
–Scale the given clip using the ONNX model.
-
supersample
–Supersample a clip by a given scaling factor.
Attributes:
-
backend
– -
kernel
– -
kwargs
(dict[str, Any]
) –Arguments passed to the implemented funcs or internal scale function.
-
max_instances
– -
model
– -
multiple
– -
noise
(Literal[-1, 0, 1, 2, 3]
) –Noise reduction level
-
overlap
– -
overlap_h
– -
overlap_w
– -
pretty_string
(str
) –Cached property returning a user-friendly string representation.
-
scale_function
(Callable[..., VideoNode]
) –Scale function called internally when performing scaling operations.
-
scale_w2x
(Literal[1, 2, 4]
) –Upscaling factor.
-
scaler
– -
shifter
– -
tiles
– -
tilesize
–
Source code in vsscale/onnx.py
kwargs instance-attribute
¶
Arguments passed to the implemented funcs or internal scale function.
pretty_string property
¶
pretty_string: str
Cached property returning a user-friendly string representation.
Returns:
-
str
–Pretty-printed string with arguments.
scale_function instance-attribute
¶
scale_function: Callable[..., VideoNode]
Scale function called internally when performing scaling operations.
calc_tilesize ¶
Reimplementation of vsmlrt.calc_tilesize helper function
Source code in vsscale/onnx.py
ensure_obj classmethod
¶
ensure_obj(
scaler: str | type[Self] | Self | None = None,
/,
func_except: FuncExcept | None = None,
) -> Self
Ensure that the input is a scaler instance, resolving it if necessary.
Parameters:
-
scaler
¶str | type[Self] | Self | None
, default:None
) –Scaler identifier (string, class, or instance).
-
func_except
¶FuncExcept | None
, default:None
) –Function returned for custom error handling.
Returns:
-
Self
–Scaler instance.
Source code in vskernels/abstract/base.py
from_param classmethod
¶
from_param(
scaler: str | type[Self] | Self | None = None,
/,
func_except: FuncExcept | None = None,
) -> type[Self]
Resolve and return a scaler type from a given input (string, type, or instance).
Parameters:
-
scaler
¶str | type[Self] | Self | None
, default:None
) –Scaler identifier (string, class, or instance).
-
func_except
¶FuncExcept | None
, default:None
) –Function returned for custom error handling.
Returns:
Source code in vskernels/abstract/base.py
get_scale_args ¶
get_scale_args(
clip: VideoNode,
shift: tuple[TopShift, LeftShift] = (0, 0),
width: int | None = None,
height: int | None = None,
**kwargs: Any
) -> dict[str, Any]
Generate the keyword arguments used for scaling.
Parameters:
-
clip
¶VideoNode
) –The source clip.
-
shift
¶tuple[TopShift, LeftShift]
, default:(0, 0)
) –Subpixel shift (top, left).
-
width
¶int | None
, default:None
) –Target width.
-
height
¶int | None
, default:None
) –Target height.
-
**kwargs
¶Any
, default:{}
) –Extra parameters to merge.
Returns:
Source code in vskernels/abstract/base.py
implemented_funcs classmethod
¶
Returns a set of function names that are implemented in the current class and the parent classes.
These functions determine which keyword arguments will be extracted from the init method.
Returns:
Source code in vskernels/abstract/base.py
inference ¶
inference(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
kernel_radius ¶
kernel_radius() -> int
Return the effective kernel radius for the scaler.
Raises:
-
CustomNotImplementedError
–If no kernel radius is defined.
Returns:
-
int
–Kernel radius.
Source code in vskernels/abstract/base.py
postprocess_clip ¶
postprocess_clip(
clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode
Source code in vsscale/onnx.py
preprocess_clip ¶
preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
scale ¶
scale(
clip: VideoNode,
width: int | None = None,
height: int | None = None,
shift: tuple[float, float] = (0, 0),
**kwargs: Any
) -> VideoNode
Scale the given clip using the ONNX model.
Parameters:
-
clip
¶VideoNode
) –The input clip to be scaled.
-
width
¶int | None
, default:None
) –The target width for scaling. If None, the width of the input clip will be used.
-
height
¶int | None
, default:None
) –The target height for scaling. If None, the height of the input clip will be used.
-
shift
¶tuple[float, float]
, default:(0, 0)
) –A tuple representing the shift values for the x and y axes.
-
**kwargs
¶Any
, default:{}
) –Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _final_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method. Use the prefix inference_ to pass an argument to the inference method.
Additional notes for the Cunet model:
- The model can cause artifacts around the image edges. To mitigate this, mirrored padding is applied to the image before inference. This behavior can be disabled by setting inference_no_pad=True.
- A tint issue is also present, but it is not constant: it leaves flat areas alone and tints detailed areas. Since most people will use Cunet to rescale details, the tint fix is enabled by default. This behavior can be disabled with postprocess_no_tint_fix=True.
An example of overriding both defaults is shown below this method's entry.
Returns:
-
VideoNode
–The scaled clip.
Source code in vsscale/onnx.py
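The sketch below shows how the two Cunet-specific defaults mentioned above can be turned off via the documented keyword prefixes:
from vsscale import Waifu2x
doubled = Waifu2x.Cunet().scale(
    clip,
    clip.width * 2,
    clip.height * 2,
    inference_no_pad=True,          # skip the mirrored edge padding
    postprocess_no_tint_fix=True,   # keep the raw model output, tint and all
)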
supersample ¶
supersample(
clip: VideoNode,
rfactor: float = 2.0,
shift: tuple[TopShift, LeftShift] = (0, 0),
**kwargs: Any
) -> VideoNode
Supersample a clip by a given scaling factor.
Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.
Parameters:
-
clip
¶VideoNode
) –The source clip.
-
rfactor
¶float
, default:2.0
) –Scaling factor for supersampling.
-
shift
¶tuple[TopShift, LeftShift]
, default:(0, 0)
) –Subpixel shift (top, left) applied during scaling.
-
**kwargs
¶Any
, default:{}
) –Additional arguments forwarded to the scale function.
Raises:
-
CustomValueError
–If resulting resolution is non-positive.
Returns:
-
VideoNode
–The supersampled clip.
Source code in vskernels/abstract/base.py
Photo ¶
Photo(
scale: Literal[1, 2, 4] = 2,
noise: Literal[-1, 0, 1, 2, 3] = -1,
backend: BackendLike | None = None,
tiles: int | tuple[int, int] | None = None,
tilesize: int | tuple[int, int] | None = None,
overlap: int | tuple[int, int] | None = None,
max_instances: int = 2,
*,
kernel: KernelLike = Catrom,
scaler: ScalerLike | None = None,
shifter: KernelLike | None = None,
**kwargs: Any
)
Bases: BaseWaifu2x
, BaseOnnxScalerRGB
Waifu2x model trained on real-world photographic images.
Example usage:
from vsscale import Waifu2x
doubled = Waifu2x.Photo().scale(clip, clip.width * 2, clip.height * 2)
Initializes the scaler with the specified parameters.
Parameters:
-
scale
¶Literal[1, 2, 4]
, default:2
) –Upscaling factor. 1 = no upscaling, 2 = 2x, 4 = 4x.
-
noise
¶Literal[-1, 0, 1, 2, 3]
, default:-1
) –Noise reduction level. -1 = none, 0 = low, 1 = medium, 2 = high, 3 = highest.
-
backend
¶BackendLike | None
, default:None
) –The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
-
tiles
¶int | tuple[int, int] | None
, default:None
) –Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.
-
tilesize
¶int | tuple[int, int] | None
, default:None
) –The size of each tile when splitting the image (if tiles are enabled).
-
overlap
¶int | tuple[int, int] | None
, default:None
) –The size of overlap between tiles.
-
max_instances
¶int
, default:2
) –Maximum instances to spawn when scaling a variable resolution clip.
-
kernel
¶KernelLike
, default:Catrom
) –Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
-
scaler
¶ScalerLike | None
, default:None
) –Scaler used for scaling operations. Defaults to kernel.
-
shifter
¶KernelLike | None
, default:None
) –Kernel used for shifting operations. Defaults to kernel.
-
**kwargs
¶Any
, default:{}
) –Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
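For instance, to run the photo model at its default 2x factor with medium noise reduction (noise levels as documented above):
from vsscale import Waifu2x
doubled = Waifu2x.Photo(noise=1).scale(clip, clip.width * 2, clip.height * 2)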
Methods:
-
calc_tilesize
–Reimplementation of vsmlrt.calc_tilesize helper function
-
ensure_obj
–Ensure that the input is a scaler instance, resolving it if necessary.
-
from_param
–Resolve and return a scaler type from a given input (string, type, or instance).
-
get_scale_args
–Generate the keyword arguments used for scaling.
-
implemented_funcs
–Returns a set of function names that are implemented in the current class and the parent classes.
-
inference
– -
kernel_radius
–Return the effective kernel radius for the scaler.
-
postprocess_clip
– -
preprocess_clip
– -
scale
–Scale the given clip using the ONNX model.
-
supersample
–Supersample a clip by a given scaling factor.
Attributes:
-
backend
– -
kernel
– -
kwargs
(dict[str, Any]
) –Arguments passed to the implemented funcs or internal scale function.
-
max_instances
– -
model
– -
multiple
– -
noise
(Literal[-1, 0, 1, 2, 3]
) –Noise reduction level
-
overlap
– -
overlap_h
– -
overlap_w
– -
pretty_string
(str
) –Cached property returning a user-friendly string representation.
-
scale_function
(Callable[..., VideoNode]
) –Scale function called internally when performing scaling operations.
-
scale_w2x
(Literal[1, 2, 4]
) –Upscaling factor.
-
scaler
– -
shifter
– -
tiles
– -
tilesize
–
Source code in vsscale/onnx.py
kwargs instance-attribute
¶
Arguments passed to the implemented funcs or internal scale function.
pretty_string property
¶
pretty_string: str
Cached property returning a user-friendly string representation.
Returns:
-
str
–Pretty-printed string with arguments.
scale_function instance-attribute
¶
scale_function: Callable[..., VideoNode]
Scale function called internally when performing scaling operations.
calc_tilesize ¶
Reimplementation of vsmlrt.calc_tilesize helper function
Source code in vsscale/onnx.py
ensure_obj classmethod
¶
ensure_obj(
scaler: str | type[Self] | Self | None = None,
/,
func_except: FuncExcept | None = None,
) -> Self
Ensure that the input is a scaler instance, resolving it if necessary.
Parameters:
-
scaler
¶str | type[Self] | Self | None
, default:None
) –Scaler identifier (string, class, or instance).
-
func_except
¶FuncExcept | None
, default:None
) –Function returned for custom error handling.
Returns:
-
Self
–Scaler instance.
Source code in vskernels/abstract/base.py
from_param classmethod
¶
from_param(
scaler: str | type[Self] | Self | None = None,
/,
func_except: FuncExcept | None = None,
) -> type[Self]
Resolve and return a scaler type from a given input (string, type, or instance).
Parameters:
-
scaler
¶str | type[Self] | Self | None
, default:None
) –Scaler identifier (string, class, or instance).
-
func_except
¶FuncExcept | None
, default:None
) –Function returned for custom error handling.
Returns:
Source code in vskernels/abstract/base.py
get_scale_args ¶
get_scale_args(
clip: VideoNode,
shift: tuple[TopShift, LeftShift] = (0, 0),
width: int | None = None,
height: int | None = None,
**kwargs: Any
) -> dict[str, Any]
Generate the keyword arguments used for scaling.
Parameters:
-
clip
¶VideoNode
) –The source clip.
-
shift
¶tuple[TopShift, LeftShift]
, default:(0, 0)
) –Subpixel shift (top, left).
-
width
¶int | None
, default:None
) –Target width.
-
height
¶int | None
, default:None
) –Target height.
-
**kwargs
¶Any
, default:{}
) –Extra parameters to merge.
Returns:
Source code in vskernels/abstract/base.py
implemented_funcs classmethod
¶
Returns a set of function names that are implemented in the current class and the parent classes.
These functions determine which keyword arguments will be extracted from the init method.
Returns:
Source code in vskernels/abstract/base.py
inference ¶
inference(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
kernel_radius ¶
kernel_radius() -> int
Return the effective kernel radius for the scaler.
Raises:
-
CustomNotImplementedError
–If no kernel radius is defined.
Returns:
-
int
–Kernel radius.
Source code in vskernels/abstract/base.py
postprocess_clip ¶
postprocess_clip(
clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode
Source code in vsscale/onnx.py
preprocess_clip ¶
preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
scale ¶
scale(
clip: VideoNode,
width: int | None = None,
height: int | None = None,
shift: tuple[float, float] = (0, 0),
**kwargs: Any
) -> VideoNode
Scale the given clip using the ONNX model.
Parameters:
-
clip
¶VideoNode
) –The input clip to be scaled.
-
width
¶int | None
, default:None
) –The target width for scaling. If None, the width of the input clip will be used.
-
height
¶int | None
, default:None
) –The target height for scaling. If None, the height of the input clip will be used.
-
shift
¶tuple[float, float]
, default:(0, 0)
) –A tuple representing the shift values for the x and y axes.
-
**kwargs
¶Any
, default:{}
) –Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _final_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method. Use the prefix inference_ to pass an argument to the inference method.
Returns:
-
VideoNode
–The scaled clip.
Source code in vsscale/onnx.py
supersample ¶
supersample(
clip: VideoNode,
rfactor: float = 2.0,
shift: tuple[TopShift, LeftShift] = (0, 0),
**kwargs: Any
) -> VideoNode
Supersample a clip by a given scaling factor.
Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.
Parameters:
-
clip
¶VideoNode
) –The source clip.
-
rfactor
¶float
, default:2.0
) –Scaling factor for supersampling.
-
shift
¶tuple[TopShift, LeftShift]
, default:(0, 0)
) –Subpixel shift (top, left) applied during scaling.
-
**kwargs
¶Any
, default:{}
) –Additional arguments forwarded to the scale function.
Raises:
-
CustomValueError
–If resulting resolution is non-positive.
Returns:
-
VideoNode
–The supersampled clip.
Source code in vskernels/abstract/base.py
SwinUnetArt ¶
SwinUnetArt(
scale: Literal[1, 2, 4] = 2,
noise: Literal[-1, 0, 1, 2, 3] = -1,
backend: BackendLike | None = None,
tiles: int | tuple[int, int] | None = None,
tilesize: int | tuple[int, int] | None = None,
overlap: int | tuple[int, int] | None = None,
max_instances: int = 2,
*,
kernel: KernelLike = Catrom,
scaler: ScalerLike | None = None,
shifter: KernelLike | None = None,
**kwargs: Any
)
Bases: BaseWaifu2x
, BaseOnnxScalerRGB
Swin-Unet-based model trained on anime-style images.
Example usage:
from vsscale import Waifu2x
doubled = Waifu2x.SwinUnetArt().scale(clip, clip.width * 2, clip.height * 2)
Initializes the scaler with the specified parameters.
Parameters:
-
scale
¶Literal[1, 2, 4]
, default:2
) –Upscaling factor. 1 = no upscaling, 2 = 2x, 4 = 4x.
-
noise
¶Literal[-1, 0, 1, 2, 3]
, default:-1
) –Noise reduction level. -1 = none, 0 = low, 1 = medium, 2 = high, 3 = highest.
-
backend
¶BackendLike | None
, default:None
) –The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
-
tiles
¶int | tuple[int, int] | None
, default:None
) –Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.
-
tilesize
¶int | tuple[int, int] | None
, default:None
) –The size of each tile when splitting the image (if tiles are enabled).
-
overlap
¶int | tuple[int, int] | None
, default:None
) –The size of overlap between tiles.
-
max_instances
¶int
, default:2
) –Maximum instances to spawn when scaling a variable resolution clip.
-
kernel
¶KernelLike
, default:Catrom
) –Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
-
scaler
¶ScalerLike | None
, default:None
) –Scaler used for scaling operations. Defaults to kernel.
-
shifter
¶KernelLike | None
, default:None
) –Kernel used for shifting operations. Defaults to kernel.
-
**kwargs
¶Any
, default:{}
) –Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
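If VRAM is tight, tiling can be combined with an explicitly chosen backend instead of the automatic selection. A sketch, assuming vsmlrt's TensorRT backend is available on the system:
from vsmlrt import Backend
from vsscale import Waifu2x
upscaler = Waifu2x.SwinUnetArt(
    tiles=2,                         # split each frame into tiles to lower VRAM usage
    overlap=16,                      # overlap between tiles to hide seams
    backend=Backend.TRT(fp16=True),  # explicit backend; skips autoselection
)
doubled = upscaler.scale(clip, clip.width * 2, clip.height * 2)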
Methods:
-
calc_tilesize
–Reimplementation of vsmlrt.calc_tilesize helper function
-
ensure_obj
–Ensure that the input is a scaler instance, resolving it if necessary.
-
from_param
–Resolve and return a scaler type from a given input (string, type, or instance).
-
get_scale_args
–Generate the keyword arguments used for scaling.
-
implemented_funcs
–Returns a set of function names that are implemented in the current class and the parent classes.
-
inference
– -
kernel_radius
–Return the effective kernel radius for the scaler.
-
postprocess_clip
– -
preprocess_clip
– -
scale
–Scale the given clip using the ONNX model.
-
supersample
–Supersample a clip by a given scaling factor.
Attributes:
-
backend
– -
kernel
– -
kwargs
(dict[str, Any]
) –Arguments passed to the implemented funcs or internal scale function.
-
max_instances
– -
model
– -
multiple
– -
noise
(Literal[-1, 0, 1, 2, 3]
) –Noise reduction level
-
overlap
– -
overlap_h
– -
overlap_w
– -
pretty_string
(str
) –Cached property returning a user-friendly string representation.
-
scale_function
(Callable[..., VideoNode]
) –Scale function called internally when performing scaling operations.
-
scale_w2x
(Literal[1, 2, 4]
) –Upscaling factor.
-
scaler
– -
shifter
– -
tiles
– -
tilesize
–
Source code in vsscale/onnx.py
kwargs instance-attribute
¶
Arguments passed to the implemented funcs or internal scale function.
pretty_string property
¶
pretty_string: str
Cached property returning a user-friendly string representation.
Returns:
-
str
–Pretty-printed string with arguments.
scale_function instance-attribute
¶
scale_function: Callable[..., VideoNode]
Scale function called internally when performing scaling operations.
calc_tilesize ¶
Reimplementation of vsmlrt.calc_tilesize helper function
Source code in vsscale/onnx.py
ensure_obj classmethod
¶
ensure_obj(
scaler: str | type[Self] | Self | None = None,
/,
func_except: FuncExcept | None = None,
) -> Self
Ensure that the input is a scaler instance, resolving it if necessary.
Parameters:
-
scaler
¶str | type[Self] | Self | None
, default:None
) –Scaler identifier (string, class, or instance).
-
func_except
¶FuncExcept | None
, default:None
) –Function returned for custom error handling.
Returns:
-
Self
–Scaler instance.
Source code in vskernels/abstract/base.py
from_param classmethod
¶
from_param(
scaler: str | type[Self] | Self | None = None,
/,
func_except: FuncExcept | None = None,
) -> type[Self]
Resolve and return a scaler type from a given input (string, type, or instance).
Parameters:
-
scaler
¶str | type[Self] | Self | None
, default:None
) –Scaler identifier (string, class, or instance).
-
func_except
¶FuncExcept | None
, default:None
) –Function returned for custom error handling.
Returns:
Source code in vskernels/abstract/base.py
get_scale_args ¶
get_scale_args(
clip: VideoNode,
shift: tuple[TopShift, LeftShift] = (0, 0),
width: int | None = None,
height: int | None = None,
**kwargs: Any
) -> dict[str, Any]
Generate the keyword arguments used for scaling.
Parameters:
-
clip
¶VideoNode
) –The source clip.
-
shift
¶tuple[TopShift, LeftShift]
, default:(0, 0)
) –Subpixel shift (top, left).
-
width
¶int | None
, default:None
) –Target width.
-
height
¶int | None
, default:None
) –Target height.
-
**kwargs
¶Any
, default:{}
) –Extra parameters to merge.
Returns:
Source code in vskernels/abstract/base.py
implemented_funcs classmethod
¶
Returns a set of function names that are implemented in the current class and the parent classes.
These functions determine which keyword arguments will be extracted from the init method.
Returns:
Source code in vskernels/abstract/base.py
inference ¶
inference(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
kernel_radius ¶
kernel_radius() -> int
Return the effective kernel radius for the scaler.
Raises:
-
CustomNotImplementedError
–If no kernel radius is defined.
Returns:
-
int
–Kernel radius.
Source code in vskernels/abstract/base.py
postprocess_clip ¶
postprocess_clip(
clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode
Source code in vsscale/onnx.py
preprocess_clip ¶
preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
scale ¶
scale(
clip: VideoNode,
width: int | None = None,
height: int | None = None,
shift: tuple[float, float] = (0, 0),
**kwargs: Any
) -> VideoNode
Scale the given clip using the ONNX model.
Parameters:
-
clip
¶VideoNode
) –The input clip to be scaled.
-
width
¶int | None
, default:None
) –The target width for scaling. If None, the width of the input clip will be used.
-
height
¶int | None
, default:None
) –The target height for scaling. If None, the height of the input clip will be used.
-
shift
¶tuple[float, float]
, default:(0, 0)
) –A tuple representing the shift values for the x and y axes.
-
**kwargs
¶Any
, default:{}
) –Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _final_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method. Use the prefix inference_ to pass an argument to the inference method.
Returns:
-
VideoNode
–The scaled clip.
Source code in vsscale/onnx.py
supersample ¶
supersample(
clip: VideoNode,
rfactor: float = 2.0,
shift: tuple[TopShift, LeftShift] = (0, 0),
**kwargs: Any
) -> VideoNode
Supersample a clip by a given scaling factor.
Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.
Parameters:
-
clip
¶VideoNode
) –The source clip.
-
rfactor
¶float
, default:2.0
) –Scaling factor for supersampling.
-
shift
¶tuple[TopShift, LeftShift]
, default:(0, 0)
) –Subpixel shift (top, left) applied during scaling.
-
**kwargs
¶Any
, default:{}
) –Additional arguments forwarded to the scale function.
Raises:
-
CustomValueError
–If resulting resolution is non-positive.
Returns:
-
VideoNode
–The supersampled clip.
Source code in vskernels/abstract/base.py
SwinUnetArtScan ¶
SwinUnetArtScan(
scale: Literal[1, 2, 4] = 2,
noise: Literal[-1, 0, 1, 2, 3] = -1,
backend: BackendLike | None = None,
tiles: int | tuple[int, int] | None = None,
tilesize: int | tuple[int, int] | None = None,
overlap: int | tuple[int, int] | None = None,
max_instances: int = 2,
*,
kernel: KernelLike = Catrom,
scaler: ScalerLike | None = None,
shifter: KernelLike | None = None,
**kwargs: Any
)
Bases: BaseWaifu2x
, BaseOnnxScalerRGB
Swin-Unet model trained on anime scans.
Example usage:
from vsscale import Waifu2x
doubled = Waifu2x.SwinUnetArtScan().scale(clip, clip.width * 2, clip.height * 2)
Initializes the scaler with the specified parameters.
Parameters:
-
scale
¶Literal[1, 2, 4]
, default:2
) –Upscaling factor. 1 = no upscaling, 2 = 2x, 4 = 4x.
-
noise
¶Literal[-1, 0, 1, 2, 3]
, default:-1
) –Noise reduction level. -1 = none, 0 = low, 1 = medium, 2 = high, 3 = highest.
-
backend
¶BackendLike | None
, default:None
) –The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
-
tiles
¶int | tuple[int, int] | None
, default:None
) –Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.
-
tilesize
¶int | tuple[int, int] | None
, default:None
) –The size of each tile when splitting the image (if tiles are enabled).
-
overlap
¶int | tuple[int, int] | None
, default:None
) –The size of overlap between tiles.
-
max_instances
¶int
, default:2
) –Maximum instances to spawn when scaling a variable resolution clip.
-
kernel
¶KernelLike
, default:Catrom
) –Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
-
scaler
¶ScalerLike | None
, default:None
) –Scaler used for scaling operations. Defaults to kernel.
-
shifter
¶KernelLike | None
, default:None
) –Kernel used for shifting operations. Defaults to kernel.
-
**kwargs
¶Any
, default:{}
) –Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
Methods:
-
calc_tilesize
–Reimplementation of vsmlrt.calc_tilesize helper function
-
ensure_obj
–Ensure that the input is a scaler instance, resolving it if necessary.
-
from_param
–Resolve and return a scaler type from a given input (string, type, or instance).
-
get_scale_args
–Generate the keyword arguments used for scaling.
-
implemented_funcs
–Returns a set of function names that are implemented in the current class and the parent classes.
-
inference
– -
kernel_radius
–Return the effective kernel radius for the scaler.
-
postprocess_clip
– -
preprocess_clip
– -
scale
–Scale the given clip using the ONNX model.
-
supersample
–Supersample a clip by a given scaling factor.
Attributes:
-
backend
– -
kernel
– -
kwargs
(dict[str, Any]
) –Arguments passed to the implemented funcs or internal scale function.
-
max_instances
– -
model
– -
multiple
– -
noise
(Literal[-1, 0, 1, 2, 3]
) –Noise reduction level
-
overlap
– -
overlap_h
– -
overlap_w
– -
pretty_string
(str
) –Cached property returning a user-friendly string representation.
-
scale_function
(Callable[..., VideoNode]
) –Scale function called internally when performing scaling operations.
-
scale_w2x
(Literal[1, 2, 4]
) –Upscaling factor.
-
scaler
– -
shifter
– -
tiles
– -
tilesize
–
Source code in vsscale/onnx.py
kwargs instance-attribute
¶
Arguments passed to the implemented funcs or internal scale function.
pretty_string property
¶
pretty_string: str
Cached property returning a user-friendly string representation.
Returns:
-
str
–Pretty-printed string with arguments.
scale_function instance-attribute
¶
scale_function: Callable[..., VideoNode]
Scale function called internally when performing scaling operations.
calc_tilesize ¶
Reimplementation of vsmlrt.calc_tilesize helper function
Source code in vsscale/onnx.py
ensure_obj classmethod
¶
ensure_obj(
scaler: str | type[Self] | Self | None = None,
/,
func_except: FuncExcept | None = None,
) -> Self
Ensure that the input is a scaler instance, resolving it if necessary.
Parameters:
-
scaler
¶str | type[Self] | Self | None
, default:None
) –Scaler identifier (string, class, or instance).
-
func_except
¶FuncExcept | None
, default:None
) –Function returned for custom error handling.
Returns:
-
Self
–Scaler instance.
Source code in vskernels/abstract/base.py
from_param classmethod
¶
from_param(
scaler: str | type[Self] | Self | None = None,
/,
func_except: FuncExcept | None = None,
) -> type[Self]
Resolve and return a scaler type from a given input (string, type, or instance).
Parameters:
-
scaler
¶str | type[Self] | Self | None
, default:None
) –Scaler identifier (string, class, or instance).
-
func_except
¶FuncExcept | None
, default:None
) –Function returned for custom error handling.
Returns:
Source code in vskernels/abstract/base.py
get_scale_args ¶
get_scale_args(
clip: VideoNode,
shift: tuple[TopShift, LeftShift] = (0, 0),
width: int | None = None,
height: int | None = None,
**kwargs: Any
) -> dict[str, Any]
Generate the keyword arguments used for scaling.
Parameters:
-
clip
¶VideoNode
) –The source clip.
-
shift
¶tuple[TopShift, LeftShift]
, default:(0, 0)
) –Subpixel shift (top, left).
-
width
¶int | None
, default:None
) –Target width.
-
height
¶int | None
, default:None
) –Target height.
-
**kwargs
¶Any
, default:{}
) –Extra parameters to merge.
Returns:
Source code in vskernels/abstract/base.py
implemented_funcs classmethod
¶
Returns a set of function names that are implemented in the current class and the parent classes.
These functions determine which keyword arguments will be extracted from the init method.
Returns:
Source code in vskernels/abstract/base.py
inference ¶
inference(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
kernel_radius ¶
kernel_radius() -> int
Return the effective kernel radius for the scaler.
Raises:
-
CustomNotImplementedError
–If no kernel radius is defined.
Returns:
-
int
–Kernel radius.
Source code in vskernels/abstract/base.py
postprocess_clip ¶
postprocess_clip(
clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode
Source code in vsscale/onnx.py
preprocess_clip ¶
preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
scale ¶
scale(
clip: VideoNode,
width: int | None = None,
height: int | None = None,
shift: tuple[float, float] = (0, 0),
**kwargs: Any
) -> VideoNode
Scale the given clip using the ONNX model.
Parameters:
-
clip
¶VideoNode
) –The input clip to be scaled.
-
width
¶int | None
, default:None
) –The target width for scaling. If None, the width of the input clip will be used.
-
height
¶int | None
, default:None
) –The target height for scaling. If None, the height of the input clip will be used.
-
shift
¶tuple[float, float]
, default:(0, 0)
) –A tuple representing the shift values for the x and y axes.
-
**kwargs
¶Any
, default:{}
) –Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _final_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method. Use the prefix inference_ to pass an argument to the inference method.
Returns:
-
VideoNode
–The scaled clip.
Source code in vsscale/onnx.py
supersample ¶
supersample(
clip: VideoNode,
rfactor: float = 2.0,
shift: tuple[TopShift, LeftShift] = (0, 0),
**kwargs: Any
) -> VideoNode
Supersample a clip by a given scaling factor.
Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.
Parameters:
-
clip
¶VideoNode
) –The source clip.
-
rfactor
¶float
, default:2.0
) –Scaling factor for supersampling.
-
shift
¶tuple[TopShift, LeftShift]
, default:(0, 0)
) –Subpixel shift (top, left) applied during scaling.
-
**kwargs
¶Any
, default:{}
) –Additional arguments forwarded to the scale function.
Raises:
-
CustomValueError
–If resulting resolution is non-positive.
Returns:
-
VideoNode
–The supersampled clip.
Source code in vskernels/abstract/base.py
SwinUnetPhoto ¶
SwinUnetPhoto(
scale: Literal[1, 2, 4] = 2,
noise: Literal[-1, 0, 1, 2, 3] = -1,
backend: BackendLike | None = None,
tiles: int | tuple[int, int] | None = None,
tilesize: int | tuple[int, int] | None = None,
overlap: int | tuple[int, int] | None = None,
max_instances: int = 2,
*,
kernel: KernelLike = Catrom,
scaler: ScalerLike | None = None,
shifter: KernelLike | None = None,
**kwargs: Any
)
Bases: BaseWaifu2x
, BaseOnnxScalerRGB
Swin-Unet model trained on photographic content.
Example usage:
from vsscale import Waifu2x
doubled = Waifu2x.SwinUnetPhoto().scale(clip, clip.width * 2, clip.height * 2)
Initializes the scaler with the specified parameters.
Parameters:
-
scale
¶Literal[1, 2, 4]
, default:2
) –Upscaling factor. 1 = no upscaling, 2 = 2x, 4 = 4x.
-
noise
¶Literal[-1, 0, 1, 2, 3]
, default:-1
) –Noise reduction level. -1 = none, 0 = low, 1 = medium, 2 = high, 3 = highest.
-
backend
¶BackendLike | None
, default:None
) –The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
-
tiles
¶int | tuple[int, int] | None
, default:None
) –Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.
-
tilesize
¶int | tuple[int, int] | None
, default:None
) –The size of each tile when splitting the image (if tiles are enabled).
-
overlap
¶int | tuple[int, int] | None
, default:None
) –The size of overlap between tiles.
-
max_instances
¶int
, default:2
) –Maximum instances to spawn when scaling a variable resolution clip.
-
kernel
¶KernelLike
, default:Catrom
) –Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
-
scaler
¶ScalerLike | None
, default:None
) –Scaler used for scaling operations. Defaults to kernel.
-
shifter
¶KernelLike | None
, default:None
) –Kernel used for shifting operations. Defaults to kernel.
-
**kwargs
¶Any
, default:{}
) –Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
Methods:
-
calc_tilesize
–Reimplementation of vsmlrt.calc_tilesize helper function
-
ensure_obj
–Ensure that the input is a scaler instance, resolving it if necessary.
-
from_param
–Resolve and return a scaler type from a given input (string, type, or instance).
-
get_scale_args
–Generate the keyword arguments used for scaling.
-
implemented_funcs
–Returns a set of function names that are implemented in the current class and the parent classes.
-
inference
– -
kernel_radius
–Return the effective kernel radius for the scaler.
-
postprocess_clip
– -
preprocess_clip
– -
scale
–Scale the given clip using the ONNX model.
-
supersample
–Supersample a clip by a given scaling factor.
Attributes:
-
backend
– -
kernel
– -
kwargs
(dict[str, Any]
) –Arguments passed to the implemented funcs or internal scale function.
-
max_instances
– -
model
– -
multiple
– -
noise
(Literal[-1, 0, 1, 2, 3]
) –Noise reduction level
-
overlap
– -
overlap_h
– -
overlap_w
– -
pretty_string
(str
) –Cached property returning a user-friendly string representation.
-
scale_function
(Callable[..., VideoNode]
) –Scale function called internally when performing scaling operations.
-
scale_w2x
(Literal[1, 2, 4]
) –Upscaling factor.
-
scaler
– -
shifter
– -
tiles
– -
tilesize
–
Source code in vsscale/onnx.py
kwargs instance-attribute
¶
Arguments passed to the implemented funcs or internal scale function.
pretty_string property
¶
pretty_string: str
Cached property returning a user-friendly string representation.
Returns:
-
str
–Pretty-printed string with arguments.
scale_function instance-attribute
¶
scale_function: Callable[..., VideoNode]
Scale function called internally when performing scaling operations.
calc_tilesize ¶
Reimplementation of vsmlrt.calc_tilesize helper function
Source code in vsscale/onnx.py
ensure_obj classmethod
¶
ensure_obj(
scaler: str | type[Self] | Self | None = None,
/,
func_except: FuncExcept | None = None,
) -> Self
Ensure that the input is a scaler instance, resolving it if necessary.
Parameters:
-
scaler
¶str | type[Self] | Self | None
, default:None
) –Scaler identifier (string, class, or instance).
-
func_except
¶FuncExcept | None
, default:None
) –Function returned for custom error handling.
Returns:
-
Self
–Scaler instance.
Source code in vskernels/abstract/base.py
from_param classmethod
¶
from_param(
scaler: str | type[Self] | Self | None = None,
/,
func_except: FuncExcept | None = None,
) -> type[Self]
Resolve and return a scaler type from a given input (string, type, or instance).
Parameters:
-
scaler
¶str | type[Self] | Self | None
, default:None
) –Scaler identifier (string, class, or instance).
-
func_except
¶FuncExcept | None
, default:None
) –Function returned for custom error handling.
Returns:
Source code in vskernels/abstract/base.py
get_scale_args ¶
get_scale_args(
clip: VideoNode,
shift: tuple[TopShift, LeftShift] = (0, 0),
width: int | None = None,
height: int | None = None,
**kwargs: Any
) -> dict[str, Any]
Generate the keyword arguments used for scaling.
Parameters:
-
clip
¶VideoNode
) –The source clip.
-
shift
¶tuple[TopShift, LeftShift]
, default:(0, 0)
) –Subpixel shift (top, left).
-
width
¶int | None
, default:None
) –Target width.
-
height
¶int | None
, default:None
) –Target height.
-
**kwargs
¶Any
, default:{}
) –Extra parameters to merge.
Returns:
Source code in vskernels/abstract/base.py
implemented_funcs classmethod
¶
Returns a set of function names that are implemented in the current class and the parent classes.
These functions determine which keyword arguments will be extracted from the init method.
Returns:
Source code in vskernels/abstract/base.py
inference ¶
inference(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
kernel_radius ¶
kernel_radius() -> int
Return the effective kernel radius for the scaler.
Raises:
-
CustomNotImplementedError
–If no kernel radius is defined.
Returns:
-
int
–Kernel radius.
Source code in vskernels/abstract/base.py
postprocess_clip ¶
postprocess_clip(
clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode
Source code in vsscale/onnx.py
preprocess_clip ¶
preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py
scale ¶
scale(
clip: VideoNode,
width: int | None = None,
height: int | None = None,
shift: tuple[float, float] = (0, 0),
**kwargs: Any
) -> VideoNode
Scale the given clip using the ONNX model.
Parameters:
-
clip
¶VideoNode
) –The input clip to be scaled.
-
width
¶int | None
, default:None
) –The target width for scaling. If None, the width of the input clip will be used.
-
height
¶int | None
, default:None
) –The target height for scaling. If None, the height of the input clip will be used.
-
shift
¶tuple[float, float]
, default:(0, 0)
) –A tuple representing the shift values for the x and y axes.
-
**kwargs
¶Any
, default:{}
) –Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _final_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method. Use the prefix inference_ to pass an argument to the inference method.
Returns:
-
VideoNode
–The scaled clip.
Source code in vsscale/onnx.py
supersample ¶
supersample(
clip: VideoNode,
rfactor: float = 2.0,
shift: tuple[TopShift, LeftShift] = (0, 0),
**kwargs: Any
) -> VideoNode
Supersample a clip by a given scaling factor.
Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.
Parameters:
-
clip
¶VideoNode
) –The source clip.
-
rfactor
¶float
, default:2.0
) –Scaling factor for supersampling.
-
shift
¶tuple[TopShift, LeftShift]
, default:(0, 0)
) –Subpixel shift (top, left) applied during scaling.
-
**kwargs
¶Any
, default:{}
) –Additional arguments forwarded to the scale function.
Raises:
-
CustomValueError
–If resulting resolution is non-positive.
Returns:
-
VideoNode
–The supersampled clip.
Source code in vskernels/abstract/base.py
SwinUnetPhotoV2 ¶
SwinUnetPhotoV2(
scale: Literal[1, 2, 4] = 2,
noise: Literal[-1, 0, 1, 2, 3] = -1,
backend: BackendLike | None = None,
tiles: int | tuple[int, int] | None = None,
tilesize: int | tuple[int, int] | None = None,
overlap: int | tuple[int, int] | None = None,
max_instances: int = 2,
*,
kernel: KernelLike = Catrom,
scaler: ScalerLike | None = None,
shifter: KernelLike | None = None,
**kwargs: Any
)
Bases: BaseWaifu2x
, BaseOnnxScalerRGB
Improved Swin-Unet model for photos (v2).
Example usage:
from vsscale import Waifu2x
doubled = Waifu2x.SwinUnetPhotoV2().scale(clip, clip.width * 2, clip.height * 2)
Initializes the scaler with the specified parameters.
Parameters:
-
scale
¶Literal[1, 2, 4]
, default:2
) –Upscaling factor. 1 = no upscaling, 2 = 2x, 4 = 4x.
-
noise
¶Literal[-1, 0, 1, 2, 3]
, default:-1
) –Noise reduction level. -1 = none, 0 = low, 1 = medium, 2 = high, 3 = highest.
-
backend
¶BackendLike | None
, default:None
) –The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
-
tiles
¶int | tuple[int, int] | None
, default:None
) –Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.
-
tilesize
¶int | tuple[int, int] | None
, default:None
) –The size of each tile when splitting the image (if tiles are enabled).
-
overlap
¶int | tuple[int, int] | None
, default:None
) –The size of overlap between tiles.
-
max_instances
¶int
, default:2
) –Maximum instances to spawn when scaling a variable resolution clip.
-
kernel
¶KernelLike
, default:Catrom
) –Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
-
scaler
¶ScalerLike | None
, default:None
) –Scaler used for scaling operations. Defaults to kernel.
-
shifter
¶KernelLike | None
, default:None
) –Kernel used for shifting operations. Defaults to kernel.
-
**kwargs
¶Any
, default:{}
) –Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
Methods:
-
calc_tilesize
–Reimplementation of vsmlrt.calc_tilesize helper function
-
ensure_obj
–Ensure that the input is a scaler instance, resolving it if necessary.
-
from_param
–Resolve and return a scaler type from a given input (string, type, or instance).
-
get_scale_args
–Generate the keyword arguments used for scaling.
-
implemented_funcs
–Returns a set of function names that are implemented in the current class and the parent classes.
-
inference
– -
kernel_radius
–Return the effective kernel radius for the scaler.
-
postprocess_clip
– -
preprocess_clip
– -
scale
–Scale the given clip using the ONNX model.
-
supersample
–Supersample a clip by a given scaling factor.
Attributes:
-
backend
– -
kernel
– -
kwargs
(dict[str, Any]
) –Arguments passed to the implemented funcs or internal scale function.
-
max_instances
– -
model
– -
multiple
– -
noise
(Literal[-1, 0, 1, 2, 3]
) –Noise reduction level
-
overlap
– -
overlap_h
– -
overlap_w
– -
pretty_string
(str
) –Cached property returning a user-friendly string representation.
-
scale_function
(Callable[..., VideoNode]
) –Scale function called internally when performing scaling operations.
-
scale_w2x
(Literal[1, 2, 4]
) –Upscaling factor.
-
scaler
– -
shifter
– -
tiles
– -
tilesize
–
Source code in vsscale/onnx.py
kwargs instance-attribute
¶
Arguments passed to the implemented funcs or internal scale function.
pretty_string property
¶
pretty_string: str
Cached property returning a user-friendly string representation.
Returns:
-
str
–Pretty-printed string with arguments.
scale_function instance-attribute
¶
scale_function: Callable[..., VideoNode]
Scale function called internally when performing scaling operations.
calc_tilesize ¶
Reimplementation of vsmlrt.calc_tilesize helper function
Source code in vsscale/onnx.py, lines 386-403
ensure_obj classmethod
¶
ensure_obj(
scaler: str | type[Self] | Self | None = None,
/,
func_except: FuncExcept | None = None,
) -> Self
Ensure that the input is a scaler instance, resolving it if necessary.
Parameters:
-
scaler
¶str | type[Self] | Self | None
, default:None
) –Scaler identifier (string, class, or instance).
-
func_except
¶FuncExcept | None
, default:None
) –Function returned for custom error handling.
Returns:
-
Self
–Scaler instance.
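A minimal sketch of ensure_obj; passing None is assumed to fall back to a default instance of the class:
# Resolve a string, class, instance, or None into a ready-to-use scaler instance.
scaler = Waifu2x.SwinUnetPhotoV2.ensure_obj(None)
doubled = scaler.scale(clip, clip.width * 2, clip.height * 2)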
Source code in vskernels/abstract/base.py, lines 385-402
from_param classmethod
¶
from_param(
scaler: str | type[Self] | Self | None = None,
/,
func_except: FuncExcept | None = None,
) -> type[Self]
Resolve and return a scaler type from a given input (string, type, or instance).
Parameters:
-
scaler
¶str | type[Self] | Self | None
, default:None
) –Scaler identifier (string, class, or instance).
-
func_except
¶FuncExcept | None
, default:None
) –Function returned for custom error handling.
Returns:
Source code in vskernels/abstract/base.py, lines 366-383
get_scale_args ¶
get_scale_args(
clip: VideoNode,
shift: tuple[TopShift, LeftShift] = (0, 0),
width: int | None = None,
height: int | None = None,
**kwargs: Any
) -> dict[str, Any]
Generate the keyword arguments used for scaling.
Parameters:
-
clip
¶VideoNode
) –The source clip.
-
shift
¶tuple[TopShift, LeftShift]
, default:(0, 0)
) –Subpixel shift (top, left).
-
width
¶int | None
, default:None
) –Target width.
-
height
¶int | None
, default:None
) –Target height.
-
**kwargs
¶Any
, default:{}
) –Extra parameters to merge.
Returns:
Source code in vskernels/abstract/base.py, lines 536-557
implemented_funcs classmethod
¶
Returns a set of function names that are implemented in the current class and the parent classes.
These functions determine which keyword arguments will be extracted from the init method.
Returns:
Source code in vskernels/abstract/base.py, lines 443-454
inference ¶
inference(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py, lines 1036-1050
kernel_radius ¶
kernel_radius() -> int
Return the effective kernel radius for the scaler.
Raises:
-
CustomNotImplementedError
–If no kernel radius is defined.
Returns:
-
int
–Kernel radius.
Source code in vskernels/abstract/base.py, lines 406-417
postprocess_clip ¶
postprocess_clip(
clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode
Source code in vsscale/onnx.py, lines 486-499
preprocess_clip ¶
preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py, lines 482-484
scale ¶
scale(
clip: VideoNode,
width: int | None = None,
height: int | None = None,
shift: tuple[float, float] = (0, 0),
**kwargs: Any
) -> VideoNode
Scale the given clip using the ONNX model.
Parameters:
-
clip
¶VideoNode
) –The input clip to be scaled.
-
width
¶int | None
, default:None
) –The target width for scaling. If None, the width of the input clip will be used.
-
height
¶int | None
, default:None
) –The target height for scaling. If None, the height of the input clip will be used.
-
shift
¶tuple[float, float]
, default:(0, 0)
) –A tuple representing the shift values for the x and y axes.
-
**kwargs
¶Any
, default:{}
) –Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _final_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method, and the prefix inference_ to pass an argument to the inference method.
Returns:
-
VideoNode
–The scaled clip.
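For illustration, a sketch of the shift parameter (the shift values here are arbitrary):
# Double the clip while applying a small subpixel shift.
doubled = Waifu2x.SwinUnetPhotoV2().scale(clip, clip.width * 2, clip.height * 2, shift=(0.5, 0.5))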
Source code in vsscale/onnx.py, lines 309-384
supersample ¶
supersample(
clip: VideoNode,
rfactor: float = 2.0,
shift: tuple[TopShift, LeftShift] = (0, 0),
**kwargs: Any
) -> VideoNode
Supersample a clip by a given scaling factor.
Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.
Parameters:
-
clip
¶VideoNode
) –The source clip.
-
rfactor
¶float
, default:2.0
) –Scaling factor for supersampling.
-
shift
¶tuple[TopShift, LeftShift]
, default:(0, 0)
) –Subpixel shift (top, left) applied during scaling.
-
**kwargs
¶Any
, default:{}
) –Additional arguments forwarded to the scale function.
Raises:
-
CustomValueError
–If resulting resolution is non-positive.
Returns:
-
VideoNode
–The supersampled clip.
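A short usage sketch; supersample derives the target size from rfactor instead of explicit width and height:
# Supersample by 2x, roughly equivalent to calling scale with doubled dimensions.
doubled = Waifu2x.SwinUnetPhotoV2().supersample(clip, rfactor=2)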
Source code in vskernels/abstract/base.py, lines 501-534
UpConv7AnimeStyleArt ¶
UpConv7AnimeStyleArt(
scale: Literal[1, 2, 4] = 2,
noise: Literal[-1, 0, 1, 2, 3] = -1,
backend: BackendLike | None = None,
tiles: int | tuple[int, int] | None = None,
tilesize: int | tuple[int, int] | None = None,
overlap: int | tuple[int, int] | None = None,
max_instances: int = 2,
*,
kernel: KernelLike = Catrom,
scaler: ScalerLike | None = None,
shifter: KernelLike | None = None,
**kwargs: Any
)
Bases: BaseWaifu2x
, BaseOnnxScalerRGB
UpConv7 model variant optimized for anime-style images.
Example usage:
from vsscale import Waifu2x
doubled = Waifu2x.UpConv7AnimeStyleArt().scale(clip, clip.width * 2, clip.height * 2)
Initializes the scaler with the specified parameters.
Parameters:
-
scale
¶Literal[1, 2, 4]
, default:2
) –Upscaling factor. 1 = no upscaling, 2 = 2x, 4 = 4x.
-
noise
¶Literal[-1, 0, 1, 2, 3]
, default:-1
) –Noise reduction level. -1 = none, 0 = low, 1 = medium, 2 = high, 3 = highest.
-
backend
¶BackendLike | None
, default:None
) –The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
-
tiles
¶int | tuple[int, int] | None
, default:None
) –Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.
-
tilesize
¶int | tuple[int, int] | None
, default:None
) –The size of each tile when splitting the image (if tiles are enabled).
-
overlap
¶int | tuple[int, int] | None
, default:None
) –The size of overlap between tiles.
-
max_instances
¶int
, default:2
) –Maximum instances to spawn when scaling a variable resolution clip.
-
kernel
¶KernelLike
, default:Catrom
) –Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
-
scaler
¶ScalerLike | None
, default:None
) –Scaler used for scaling operations. Defaults to kernel.
-
shifter
¶KernelLike | None
, default:None
) –Kernel used for shifting operations. Defaults to kernel.
-
**kwargs
¶Any
, default:{}
) –Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
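A minimal sketch combining the scale and noise parameters (the noise level here is an arbitrary choice):
# 2x upscale with medium noise reduction; scale=1 with noise >= 0 would denoise without upscaling.
doubled = Waifu2x.UpConv7AnimeStyleArt(scale=2, noise=1).scale(clip, clip.width * 2, clip.height * 2)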
Methods:
-
calc_tilesize
–Reimplementation of vsmlrt.calc_tilesize helper function
-
ensure_obj
–Ensure that the input is a scaler instance, resolving it if necessary.
-
from_param
–Resolve and return a scaler type from a given input (string, type, or instance).
-
get_scale_args
–Generate the keyword arguments used for scaling.
-
implemented_funcs
–Returns a set of function names that are implemented in the current class and the parent classes.
-
inference
– -
kernel_radius
–Return the effective kernel radius for the scaler.
-
postprocess_clip
– -
preprocess_clip
– -
scale
–Scale the given clip using the ONNX model.
-
supersample
–Supersample a clip by a given scaling factor.
Attributes:
-
backend
– -
kernel
– -
kwargs
(dict[str, Any]
) –Arguments passed to the implemented funcs or internal scale function.
-
max_instances
– -
model
– -
multiple
– -
noise
(Literal[-1, 0, 1, 2, 3]
) –Noise reduction level
-
overlap
– -
overlap_h
– -
overlap_w
– -
pretty_string
(str
) –Cached property returning a user-friendly string representation.
-
scale_function
(Callable[..., VideoNode]
) –Scale function called internally when performing scaling operations.
-
scale_w2x
(Literal[1, 2, 4]
) –Upscaling factor.
-
scaler
– -
shifter
– -
tiles
– -
tilesize
–
Source code in vsscale/onnx.py, lines 987-1034
kwargs instance-attribute
¶
Arguments passed to the implemented funcs or internal scale function.
pretty_string property
¶
pretty_string: str
Cached property returning a user-friendly string representation.
Returns:
-
str
–Pretty-printed string with arguments.
scale_function instance-attribute
¶
scale_function: Callable[..., VideoNode]
Scale function called internally when performing scaling operations.
calc_tilesize ¶
Reimplementation of vsmlrt.calc_tilesize helper function
Source code in vsscale/onnx.py, lines 386-403
ensure_obj classmethod
¶
ensure_obj(
scaler: str | type[Self] | Self | None = None,
/,
func_except: FuncExcept | None = None,
) -> Self
Ensure that the input is a scaler instance, resolving it if necessary.
Parameters:
-
scaler
¶str | type[Self] | Self | None
, default:None
) –Scaler identifier (string, class, or instance).
-
func_except
¶FuncExcept | None
, default:None
) –Function returned for custom error handling.
Returns:
-
Self
–Scaler instance.
Source code in vskernels/abstract/base.py, lines 385-402
from_param classmethod
¶
from_param(
scaler: str | type[Self] | Self | None = None,
/,
func_except: FuncExcept | None = None,
) -> type[Self]
Resolve and return a scaler type from a given input (string, type, or instance).
Parameters:
-
scaler
¶str | type[Self] | Self | None
, default:None
) –Scaler identifier (string, class, or instance).
-
func_except
¶FuncExcept | None
, default:None
) –Function returned for custom error handling.
Returns:
Source code in vskernels/abstract/base.py, lines 366-383
get_scale_args ¶
get_scale_args(
clip: VideoNode,
shift: tuple[TopShift, LeftShift] = (0, 0),
width: int | None = None,
height: int | None = None,
**kwargs: Any
) -> dict[str, Any]
Generate the keyword arguments used for scaling.
Parameters:
-
clip
¶VideoNode
) –The source clip.
-
shift
¶tuple[TopShift, LeftShift]
, default:(0, 0)
) –Subpixel shift (top, left).
-
width
¶int | None
, default:None
) –Target width.
-
height
¶int | None
, default:None
) –Target height.
-
**kwargs
¶Any
, default:{}
) –Extra parameters to merge.
Returns:
Source code in vskernels/abstract/base.py, lines 536-557
implemented_funcs classmethod
¶
Returns a set of function names that are implemented in the current class and the parent classes.
These functions determine which keyword arguments will be extracted from the init method.
Returns:
Source code in vskernels/abstract/base.py, lines 443-454
inference ¶
inference(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py, lines 1036-1050
kernel_radius ¶
kernel_radius() -> int
Return the effective kernel radius for the scaler.
Raises:
-
CustomNotImplementedError
–If no kernel radius is defined.
Returns:
-
int
–Kernel radius.
Source code in vskernels/abstract/base.py, lines 406-417
postprocess_clip ¶
postprocess_clip(
clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode
Source code in vsscale/onnx.py, lines 486-499
preprocess_clip ¶
preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py, lines 482-484
scale ¶
scale(
clip: VideoNode,
width: int | None = None,
height: int | None = None,
shift: tuple[float, float] = (0, 0),
**kwargs: Any
) -> VideoNode
Scale the given clip using the ONNX model.
Parameters:
-
clip
¶VideoNode
) –The input clip to be scaled.
-
width
¶int | None
, default:None
) –The target width for scaling. If None, the width of the input clip will be used.
-
height
¶int | None
, default:None
) –The target height for scaling. If None, the height of the input clip will be used.
-
shift
¶tuple[float, float]
, default:(0, 0)
) –A tuple representing the shift values for the x and y axes.
-
**kwargs
¶Any
, default:{}
) –Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _final_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method, and the prefix inference_ to pass an argument to the inference method.
Returns:
-
VideoNode
–The scaled clip.
Source code in vsscale/onnx.py, lines 309-384
supersample ¶
supersample(
clip: VideoNode,
rfactor: float = 2.0,
shift: tuple[TopShift, LeftShift] = (0, 0),
**kwargs: Any
) -> VideoNode
Supersample a clip by a given scaling factor.
Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.
Parameters:
-
clip
¶VideoNode
) –The source clip.
-
rfactor
¶float
, default:2.0
) –Scaling factor for supersampling.
-
shift
¶tuple[TopShift, LeftShift]
, default:(0, 0)
) –Subpixel shift (top, left) applied during scaling.
-
**kwargs
¶Any
, default:{}
) –Additional arguments forwarded to the scale function.
Raises:
-
CustomValueError
–If resulting resolution is non-positive.
Returns:
-
VideoNode
–The supersampled clip.
Source code in vskernels/abstract/base.py, lines 501-534
UpConv7Photo ¶
UpConv7Photo(
scale: Literal[1, 2, 4] = 2,
noise: Literal[-1, 0, 1, 2, 3] = -1,
backend: BackendLike | None = None,
tiles: int | tuple[int, int] | None = None,
tilesize: int | tuple[int, int] | None = None,
overlap: int | tuple[int, int] | None = None,
max_instances: int = 2,
*,
kernel: KernelLike = Catrom,
scaler: ScalerLike | None = None,
shifter: KernelLike | None = None,
**kwargs: Any
)
Bases: BaseWaifu2x
, BaseOnnxScalerRGB
UpConv7 model variant optimized for photographic images.
Example usage:
from vsscale import Waifu2x
doubled = Waifu2x.UpConv7Photo().scale(clip, clip.width * 2, clip.height * 2)
Initializes the scaler with the specified parameters.
Parameters:
-
scale
¶Literal[1, 2, 4]
, default:2
) –Upscaling factor. 1 = no upscaling, 2 = 2x, 4 = 4x.
-
noise
¶Literal[-1, 0, 1, 2, 3]
, default:-1
) –Noise reduction level. -1 = none, 0 = low, 1 = medium, 2 = high, 3 = highest.
-
backend
¶BackendLike | None
, default:None
) –The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
-
tiles
¶int | tuple[int, int] | None
, default:None
) –Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.
-
tilesize
¶int | tuple[int, int] | None
, default:None
) –The size of each tile when splitting the image (if tiles are enabled).
-
overlap
¶int | tuple[int, int] | None
, default:None
) –The size of overlap between tiles.
-
max_instances
¶int
, default:2
) –Maximum instances to spawn when scaling a variable resolution clip.
-
kernel
¶KernelLike
, default:Catrom
) –Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
-
scaler
¶ScalerLike | None
, default:None
) –Scaler used for scaling operations. Defaults to kernel.
-
shifter
¶KernelLike | None
, default:None
) –Kernel used for shifting operations. Defaults to kernel.
-
**kwargs
¶Any
, default:{}
) –Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
Methods:
-
calc_tilesize
–Reimplementation of vsmlrt.calc_tilesize helper function
-
ensure_obj
–Ensure that the input is a scaler instance, resolving it if necessary.
-
from_param
–Resolve and return a scaler type from a given input (string, type, or instance).
-
get_scale_args
–Generate the keyword arguments used for scaling.
-
implemented_funcs
–Returns a set of function names that are implemented in the current class and the parent classes.
-
inference
– -
kernel_radius
–Return the effective kernel radius for the scaler.
-
postprocess_clip
– -
preprocess_clip
– -
scale
–Scale the given clip using the ONNX model.
-
supersample
–Supersample a clip by a given scaling factor.
Attributes:
-
backend
– -
kernel
– -
kwargs
(dict[str, Any]
) –Arguments passed to the implemented funcs or internal scale function.
-
max_instances
– -
model
– -
multiple
– -
noise
(Literal[-1, 0, 1, 2, 3]
) –Noise reduction level
-
overlap
– -
overlap_h
– -
overlap_w
– -
pretty_string
(str
) –Cached property returning a user-friendly string representation.
-
scale_function
(Callable[..., VideoNode]
) –Scale function called internally when performing scaling operations.
-
scale_w2x
(Literal[1, 2, 4]
) –Upscaling factor.
-
scaler
– -
shifter
– -
tiles
– -
tilesize
–
Source code in vsscale/onnx.py, lines 987-1034
kwargs instance-attribute
¶
Arguments passed to the implemented funcs or internal scale function.
pretty_string property
¶
pretty_string: str
Cached property returning a user-friendly string representation.
Returns:
-
str
–Pretty-printed string with arguments.
scale_function instance-attribute
¶
scale_function: Callable[..., VideoNode]
Scale function called internally when performing scaling operations.
calc_tilesize ¶
Reimplementation of vsmlrt.calc_tilesize helper function
Source code in vsscale/onnx.py, lines 386-403
ensure_obj classmethod
¶
ensure_obj(
scaler: str | type[Self] | Self | None = None,
/,
func_except: FuncExcept | None = None,
) -> Self
Ensure that the input is a scaler instance, resolving it if necessary.
Parameters:
-
scaler
¶str | type[Self] | Self | None
, default:None
) –Scaler identifier (string, class, or instance).
-
func_except
¶FuncExcept | None
, default:None
) –Function returned for custom error handling.
Returns:
-
Self
–Scaler instance.
Source code in vskernels/abstract/base.py, lines 385-402
from_param classmethod
¶
from_param(
scaler: str | type[Self] | Self | None = None,
/,
func_except: FuncExcept | None = None,
) -> type[Self]
Resolve and return a scaler type from a given input (string, type, or instance).
Parameters:
-
scaler
¶str | type[Self] | Self | None
, default:None
) –Scaler identifier (string, class, or instance).
-
func_except
¶FuncExcept | None
, default:None
) –Function returned for custom error handling.
Returns:
Source code in vskernels/abstract/base.py, lines 366-383
get_scale_args ¶
get_scale_args(
clip: VideoNode,
shift: tuple[TopShift, LeftShift] = (0, 0),
width: int | None = None,
height: int | None = None,
**kwargs: Any
) -> dict[str, Any]
Generate the keyword arguments used for scaling.
Parameters:
-
clip
¶VideoNode
) –The source clip.
-
shift
¶tuple[TopShift, LeftShift]
, default:(0, 0)
) –Subpixel shift (top, left).
-
width
¶int | None
, default:None
) –Target width.
-
height
¶int | None
, default:None
) –Target height.
-
**kwargs
¶Any
, default:{}
) –Extra parameters to merge.
Returns:
Source code in vskernels/abstract/base.py, lines 536-557
implemented_funcs classmethod
¶
Returns a set of function names that are implemented in the current class and the parent classes.
These functions determine which keyword arguments will be extracted from the init method.
Returns:
Source code in vskernels/abstract/base.py, lines 443-454
inference ¶
inference(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py, lines 1036-1050
kernel_radius ¶
kernel_radius() -> int
Return the effective kernel radius for the scaler.
Raises:
-
CustomNotImplementedError
–If no kernel radius is defined.
Returns:
-
int
–Kernel radius.
Source code in vskernels/abstract/base.py, lines 406-417
postprocess_clip ¶
postprocess_clip(
clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode
Source code in vsscale/onnx.py, lines 486-499
preprocess_clip ¶
preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py, lines 482-484
scale ¶
scale(
clip: VideoNode,
width: int | None = None,
height: int | None = None,
shift: tuple[float, float] = (0, 0),
**kwargs: Any
) -> VideoNode
Scale the given clip using the ONNX model.
Parameters:
-
clip
¶VideoNode
) –The input clip to be scaled.
-
width
¶int | None
, default:None
) –The target width for scaling. If None, the width of the input clip will be used.
-
height
¶int | None
, default:None
) –The target height for scaling. If None, the height of the input clip will be used.
-
shift
¶tuple[float, float]
, default:(0, 0)
) –A tuple representing the shift values for the x and y axes.
-
**kwargs
¶Any
, default:{}
) –Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _final_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method, and the prefix inference_ to pass an argument to the inference method.
Returns:
-
VideoNode
–The scaled clip.
Source code in vsscale/onnx.py, lines 309-384
supersample ¶
supersample(
clip: VideoNode,
rfactor: float = 2.0,
shift: tuple[TopShift, LeftShift] = (0, 0),
**kwargs: Any
) -> VideoNode
Supersample a clip by a given scaling factor.
Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.
Parameters:
-
clip
¶VideoNode
) –The source clip.
-
rfactor
¶float
, default:2.0
) –Scaling factor for supersampling.
-
shift
¶tuple[TopShift, LeftShift]
, default:(0, 0)
) –Subpixel shift (top, left) applied during scaling.
-
**kwargs
¶Any
, default:{}
) –Additional arguments forwarded to the scale function.
Raises:
-
CustomValueError
–If resulting resolution is non-positive.
Returns:
-
VideoNode
–The supersampled clip.
Source code in vskernels/abstract/base.py, lines 501-534
UpResNet10 ¶
UpResNet10(
scale: Literal[1, 2, 4] = 2,
noise: Literal[-1, 0, 1, 2, 3] = -1,
backend: BackendLike | None = None,
tiles: int | tuple[int, int] | None = None,
tilesize: int | tuple[int, int] | None = None,
overlap: int | tuple[int, int] | None = None,
max_instances: int = 2,
*,
kernel: KernelLike = Catrom,
scaler: ScalerLike | None = None,
shifter: KernelLike | None = None,
**kwargs: Any
)
Bases: BaseWaifu2x
, BaseOnnxScalerRGB
UpResNet10 model offering a balance of speed and quality.
Example usage:
from vsscale import Waifu2x
doubled = Waifu2x.UpResNet10().scale(clip, clip.width * 2, clip.height * 2)
Initializes the scaler with the specified parameters.
Parameters:
-
scale
¶Literal[1, 2, 4]
, default:2
) –Upscaling factor. 1 = no upscaling, 2 = 2x, 4 = 4x.
-
noise
¶Literal[-1, 0, 1, 2, 3]
, default:-1
) –Noise reduction level. -1 = none, 0 = low, 1 = medium, 2 = high, 3 = highest.
-
backend
¶BackendLike | None
, default:None
) –The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
-
tiles
¶int | tuple[int, int] | None
, default:None
) –Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.
-
tilesize
¶int | tuple[int, int] | None
, default:None
) –The size of each tile when splitting the image (if tiles are enabled).
-
overlap
¶int | tuple[int, int] | None
, default:None
) –The size of overlap between tiles.
-
max_instances
¶int
, default:2
) –Maximum instances to spawn when scaling a variable resolution clip.
-
kernel
¶KernelLike
, default:Catrom
) –Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.
-
scaler
¶ScalerLike | None
, default:None
) –Scaler used for scaling operations. Defaults to kernel.
-
shifter
¶KernelLike | None
, default:None
) –Kernel used for shifting operations. Defaults to kernel.
-
**kwargs
¶Any
, default:{}
) –Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.
Methods:
-
calc_tilesize
–Reimplementation of vsmlrt.calc_tilesize helper function
-
ensure_obj
–Ensure that the input is a scaler instance, resolving it if necessary.
-
from_param
–Resolve and return a scaler type from a given input (string, type, or instance).
-
get_scale_args
–Generate the keyword arguments used for scaling.
-
implemented_funcs
–Returns a set of function names that are implemented in the current class and the parent classes.
-
inference
– -
kernel_radius
–Return the effective kernel radius for the scaler.
-
postprocess_clip
– -
preprocess_clip
– -
scale
–Scale the given clip using the ONNX model.
-
supersample
–Supersample a clip by a given scaling factor.
Attributes:
-
backend
– -
kernel
– -
kwargs
(dict[str, Any]
) –Arguments passed to the implemented funcs or internal scale function.
-
max_instances
– -
model
– -
multiple
– -
noise
(Literal[-1, 0, 1, 2, 3]
) –Noise reduction level
-
overlap
– -
overlap_h
– -
overlap_w
– -
pretty_string
(str
) –Cached property returning a user-friendly string representation.
-
scale_function
(Callable[..., VideoNode]
) –Scale function called internally when performing scaling operations.
-
scale_w2x
(Literal[1, 2, 4]
) –Upscaling factor.
-
scaler
– -
shifter
– -
tiles
– -
tilesize
–
Source code in vsscale/onnx.py, lines 987-1034
kwargs instance-attribute
¶
Arguments passed to the implemented funcs or internal scale function.
pretty_string property
¶
pretty_string: str
Cached property returning a user-friendly string representation.
Returns:
-
str
–Pretty-printed string with arguments.
scale_function instance-attribute
¶
scale_function: Callable[..., VideoNode]
Scale function called internally when performing scaling operations.
calc_tilesize ¶
Reimplementation of vsmlrt.calc_tilesize helper function
Source code in vsscale/onnx.py, lines 386-403
ensure_obj classmethod
¶
ensure_obj(
scaler: str | type[Self] | Self | None = None,
/,
func_except: FuncExcept | None = None,
) -> Self
Ensure that the input is a scaler instance, resolving it if necessary.
Parameters:
-
scaler
¶str | type[Self] | Self | None
, default:None
) –Scaler identifier (string, class, or instance).
-
func_except
¶FuncExcept | None
, default:None
) –Function returned for custom error handling.
Returns:
-
Self
–Scaler instance.
Source code in vskernels/abstract/base.py, lines 385-402
from_param classmethod
¶
from_param(
scaler: str | type[Self] | Self | None = None,
/,
func_except: FuncExcept | None = None,
) -> type[Self]
Resolve and return a scaler type from a given input (string, type, or instance).
Parameters:
-
scaler
¶str | type[Self] | Self | None
, default:None
) –Scaler identifier (string, class, or instance).
-
func_except
¶FuncExcept | None
, default:None
) –Function returned for custom error handling.
Returns:
Source code in vskernels/abstract/base.py, lines 366-383
get_scale_args ¶
get_scale_args(
clip: VideoNode,
shift: tuple[TopShift, LeftShift] = (0, 0),
width: int | None = None,
height: int | None = None,
**kwargs: Any
) -> dict[str, Any]
Generate the keyword arguments used for scaling.
Parameters:
-
clip
¶VideoNode
) –The source clip.
-
shift
¶tuple[TopShift, LeftShift]
, default:(0, 0)
) –Subpixel shift (top, left).
-
width
¶int | None
, default:None
) –Target width.
-
height
¶int | None
, default:None
) –Target height.
-
**kwargs
¶Any
, default:{}
) –Extra parameters to merge.
Returns:
Source code in vskernels/abstract/base.py, lines 536-557
implemented_funcs classmethod
¶
Returns a set of function names that are implemented in the current class and the parent classes.
These functions determine which keyword arguments will be extracted from the init method.
Returns:
Source code in vskernels/abstract/base.py, lines 443-454
inference ¶
inference(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py, lines 1036-1050
kernel_radius ¶
kernel_radius() -> int
Return the effective kernel radius for the scaler.
Raises:
-
CustomNotImplementedError
–If no kernel radius is defined.
Returns:
-
int
–Kernel radius.
Source code in vskernels/abstract/base.py, lines 406-417
postprocess_clip ¶
postprocess_clip(
clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode
Source code in vsscale/onnx.py, lines 486-499
preprocess_clip ¶
preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py, lines 482-484
scale ¶
scale(
clip: VideoNode,
width: int | None = None,
height: int | None = None,
shift: tuple[float, float] = (0, 0),
**kwargs: Any
) -> VideoNode
Scale the given clip using the ONNX model.
Parameters:
-
clip
¶VideoNode
) –The input clip to be scaled.
-
width
¶int | None
, default:None
) –The target width for scaling. If None, the width of the input clip will be used.
-
height
¶int | None
, default:None
) –The target height for scaling. If None, the height of the input clip will be used.
-
shift
¶tuple[float, float]
, default:(0, 0)
) –A tuple representing the shift values for the x and y axes.
-
**kwargs
¶Any
, default:{}
) –Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _final_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method, and the prefix inference_ to pass an argument to the inference method.
Returns:
-
VideoNode
–The scaled clip.
Source code in vsscale/onnx.py, lines 309-384
supersample ¶
supersample(
clip: VideoNode,
rfactor: float = 2.0,
shift: tuple[TopShift, LeftShift] = (0, 0),
**kwargs: Any
) -> VideoNode
Supersample a clip by a given scaling factor.
Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.
Parameters:
-
clip
¶VideoNode
) –The source clip.
-
rfactor
¶float
, default:2.0
) –Scaling factor for supersampling.
-
shift
¶tuple[TopShift, LeftShift]
, default:(0, 0)
) –Subpixel shift (top, left) applied during scaling.
-
**kwargs
¶Any
, default:{}
) –Additional arguments forwarded to the scale function.
Raises:
-
CustomValueError
–If resulting resolution is non-positive.
Returns:
-
VideoNode
–The supersampled clip.
Source code in vskernels/abstract/base.py, lines 501-534
calc_tilesize ¶
Reimplementation of vsmlrt.calc_tilesize helper function
Source code in vsscale/onnx.py, lines 386-403
ensure_obj classmethod
¶
ensure_obj(
scaler: str | type[Self] | Self | None = None,
/,
func_except: FuncExcept | None = None,
) -> Self
Ensure that the input is a scaler instance, resolving it if necessary.
Parameters:
-
scaler
¶str | type[Self] | Self | None
, default:None
) –Scaler identifier (string, class, or instance).
-
func_except
¶FuncExcept | None
, default:None
) –Function returned for custom error handling.
Returns:
-
Self
–Scaler instance.
Source code in vskernels/abstract/base.py, lines 385-402
from_param classmethod
¶
from_param(
scaler: str | type[Self] | Self | None = None,
/,
func_except: FuncExcept | None = None,
) -> type[Self]
Resolve and return a scaler type from a given input (string, type, or instance).
Parameters:
-
scaler
¶str | type[Self] | Self | None
, default:None
) –Scaler identifier (string, class, or instance).
-
func_except
¶FuncExcept | None
, default:None
) –Function returned for custom error handling.
Returns:
Source code in vskernels/abstract/base.py, lines 366-383
get_scale_args ¶
get_scale_args(
clip: VideoNode,
shift: tuple[TopShift, LeftShift] = (0, 0),
width: int | None = None,
height: int | None = None,
**kwargs: Any
) -> dict[str, Any]
Generate the keyword arguments used for scaling.
Parameters:
-
clip
¶VideoNode
) –The source clip.
-
shift
¶tuple[TopShift, LeftShift]
, default:(0, 0)
) –Subpixel shift (top, left).
-
width
¶int | None
, default:None
) –Target width.
-
height
¶int | None
, default:None
) –Target height.
-
**kwargs
¶Any
, default:{}
) –Extra parameters to merge.
Returns:
Source code in vskernels/abstract/base.py, lines 536-557
implemented_funcs classmethod
¶
Returns a set of function names that are implemented in the current class and the parent classes.
These functions determine which keyword arguments will be extracted from the init method.
Returns:
Source code in vskernels/abstract/base.py, lines 443-454
inference ¶
inference(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py, lines 1094-1104
kernel_radius ¶
kernel_radius() -> int
Return the effective kernel radius for the scaler.
Raises:
-
CustomNotImplementedError
–If no kernel radius is defined.
Returns:
-
int
–Kernel radius.
Source code in vskernels/abstract/base.py, lines 406-417
postprocess_clip ¶
postprocess_clip(
clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> VideoNode
Source code in vsscale/onnx.py, lines 1106-1119
preprocess_clip ¶
preprocess_clip(clip: VideoNode, **kwargs: Any) -> VideoNode
Source code in vsscale/onnx.py, lines 482-484
scale ¶
scale(
clip: VideoNode,
width: int | None = None,
height: int | None = None,
shift: tuple[float, float] = (0, 0),
**kwargs: Any
) -> VideoNode
Scale the given clip using the ONNX model.
Parameters:
-
clip
¶VideoNode
) –The input clip to be scaled.
-
width
¶int | None
, default:None
) –The target width for scaling. If None, the width of the input clip will be used.
-
height
¶int | None
, default:None
) –The target height for scaling. If None, the height of the input clip will be used.
-
shift
¶tuple[float, float]
, default:(0, 0)
) –A tuple representing the shift values for the x and y axes.
-
**kwargs
¶Any
, default:{}
) –Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _final_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method, and the prefix inference_ to pass an argument to the inference method.
Additional notes for the Cunet model:
- The model can cause artifacts around the image edges. To mitigate this, mirrored padding is applied to the image before inference. This behavior can be disabled by setting inference_no_pad=True.
- A tint issue is also present, but it is not constant: it leaves flat areas alone and tints detailed areas. Since most people will use Cunet to rescale details, the tint fix is enabled by default. This behavior can be disabled with postprocess_no_tint_fix=True.
Returns:
-
VideoNode
–The scaled clip.
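Reflecting the notes above, a hedged sketch of routing those flags through the prefix mechanism; the Cunet class name is assumed from the notes:
# Disable the mirrored edge padding and the default tint fix via prefixed kwargs.
doubled = Waifu2x.Cunet().scale(
    clip, clip.width * 2, clip.height * 2,
    inference_no_pad=True,
    postprocess_no_tint_fix=True,
)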
Source code in vsscale/onnx.py, lines 1059-1092
supersample ¶
supersample(
clip: VideoNode,
rfactor: float = 2.0,
shift: tuple[TopShift, LeftShift] = (0, 0),
**kwargs: Any
) -> VideoNode
Supersample a clip by a given scaling factor.
Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.
Parameters:
-
clip
¶VideoNode
) –The source clip.
-
rfactor
¶float
, default:2.0
) –Scaling factor for supersampling.
-
shift
¶tuple[TopShift, LeftShift]
, default:(0, 0)
) –Subpixel shift (top, left) applied during scaling.
-
**kwargs
¶Any
, default:{}
) –Additional arguments forwarded to the scale function.
Raises:
-
CustomValueError
–If resulting resolution is non-positive.
Returns:
-
VideoNode
–The supersampled clip.
Source code in vskernels/abstract/base.py, lines 501-534
autoselect_backend ¶
Try to select the best backend for the current system.
If the system has an NVIDIA GPU, the priority is: TRT > TRT_RTX > DirectML (D3D12) > NCNN (Vulkan) > CUDA (ORT) > OpenVINO (GPU). Otherwise: DirectML (D3D12) > MIGraphX > NCNN (Vulkan) > CPU (ORT) > OpenVINO (CPU).
Parameters:
Returns:
-
backendT
–The selected backend.
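A minimal usage sketch, assuming the function takes no required arguments and is re-exported at the package level like the scalers:
from vsscale import Waifu2x, autoselect_backend
# Pick the best available backend once and reuse it when constructing a scaler.
backend = autoselect_backend()
doubled = Waifu2x.SwinUnetPhotoV2(backend=backend).scale(clip, clip.width * 2, clip.height * 2)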
Source code in vsscale/onnx.py, lines 160-204