rescale

Classes:

  • Rescale

Rescale wrapper supporting everything you need for (fractional) descaling, re-upscaling and masking out details.

  • RescaleBase

Base class for the Rescale wrapper.

Rescale

Rescale(
    clip: VideoNode,
    /,
    height: int | float,
    kernel: KernelLike,
    upscaler: ScalerLike = ArtCNN,
    downscaler: ScalerLike = Hermite(linear=True),
    width: int | float | None = None,
    base_height: int | None = None,
    base_width: int | None = None,
    crop: tuple[LeftCrop, RightCrop, TopCrop, BottomCrop] = CropRel(),
    shift: tuple[TopShift, LeftShift] = (0, 0),
    field_based: FieldBasedLike | bool | None = None,
    border_handling: int | BorderHandling = MIRROR,
    **kwargs: Any,
)

Bases: RescaleBase

Rescale wrapper supporting everything you need for (fractional) descaling, re-upscaling and masking out details.

Example usage:

  • Basic 720p rescale:

    from vsscale import Rescale
    from vskernels import Bilinear
    
    rs = Rescale(clip, 720, Bilinear)
    final = rs.upscale
    
  • Adding AA and dehalo on the doubled clip:

    from vsaa import based_aa
    from vsdehalo import fine_dehalo
    
    aa = based_aa(rs.doubled, supersampler=False)
    dehalo = fine_dehalo(aa, ...)
    
    rs.doubled = dehalo
    
  • Loading line_mask and credit_mask:

    from vsmasktools import diff_creditless_oped
    from vsexprtools import ExprOp
    
    rs.default_line_mask()
    
    oped_credit_mask = diff_creditless_oped(...)
    credit_mask = rs.default_credit_mask(thr=0.209, ranges=(200, 300), postfilter=4)
    rs.credit_mask = ExprOp.ADD.combine(oped_credit_mask, credit_mask)
    
  • Fractional rescale:

    from vsscale import Rescale
    from vskernels import Bilinear
    
    # Forcing the height to a float will ensure a fractional descale
    rs = Rescale(clip, 800.0, Bilinear)
    >>> rs.descale_args
    ScalingArgs(
        width=1424, height=800, src_width=1422.2222222222222, src_height=800.0,
        src_top=0.0, src_left=0.8888888888889142, mode='hw'
    )
    
    # while doing this will not
    rs = Rescale(clip, 800, Bilinear)
    >>> rs.descale_args
    ScalingArgs(width=1422, height=800, src_width=1422, src_height=800, src_top=0, src_left=0, mode='hw')
    
  • Cropping is also supported:

    from vsscale import Rescale
    from vskernels import Bilinear
    
    # Descaling while cropping the letterboxes at the top and bottom
    rs = Rescale(clip, 874, Bilinear, crop=(0, 0, 202, 202))
    >>> rs.descale_args
    ScalingArgs(
        width=1554, height=548, src_width=1554.0, src_height=547.0592592592592,
        src_top=0.4703703703703752, src_left=0.0, mode='hw'
    )
    
    # Same as above, but ensuring the width is fractionally descaled
    rs = Rescale(clip, 874.0, Bilinear, crop=(0, 0, 202, 202))
    >>> rs.descale_args
    ScalingArgs(
        width=1554, height=548, src_width=1553.7777777777778, src_height=547.0592592592592,
        src_top=0.4703703703703752, src_left=0.11111111111108585, mode='hw'
    )
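
The fractional values shown in the examples above follow from straightforward arithmetic: the fractional source dimensions are centered inside the smallest mod-2 integer canvas that contains them. A rough sketch of that computation for the float-height path (an illustration of the math only, not the actual ScalingArgs.from_args implementation):

```python
import math

def fractional_dims(src_width: int, src_height: int, height: float):
    """Approximate ScalingArgs fields for a float-height (fractional) descale."""
    # Descale width that preserves the source aspect ratio at the target height.
    src_w = height * src_width / src_height
    # Smallest mod-2 integer canvas that contains the fractional image.
    width = math.ceil(src_w / 2) * 2
    base_height = math.ceil(height / 2) * 2
    # Center the fractional image inside the integer canvas.
    src_left = (width - src_w) / 2
    src_top = (base_height - height) / 2
    return width, base_height, src_w, src_left, src_top
```

For a 1920x1080 source and height=800.0 this reproduces width=1424, src_width≈1422.22 and src_left≈0.889; feeding it the cropped 1920x676 area at height 676 * 874 / 1080 reproduces the cropped example as well.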
    

Initialize the rescaling process.

Parameters:

  • clip

    (VideoNode) –

    Clip to be rescaled.

  • height

    (int | float) –

    Height to be descaled to. If passed as a float, a fractional descale is performed.

  • kernel

    (KernelLike) –

    Kernel used for descaling.

  • upscaler

    (ScalerLike, default: ArtCNN ) –

    Scaler that supports doubling. Defaults to ArtCNN.

  • downscaler

    (ScalerLike, default: Hermite(linear=True) ) –

    Scaler used to downscale the upscaled clip back to input resolution. Defaults to Hermite(linear=True).

  • width

    (int | float | None, default: None ) –

    Width to be descaled to. If None, it is automatically calculated from the height.

  • base_height

    (int | None, default: None ) –

    Integer height to contain the clip within. If None, it is automatically calculated from the height.

  • base_width

    (int | None, default: None ) –

    Integer width to contain the clip within. If None, it is automatically calculated from the width.

  • crop

    (tuple[LeftCrop, RightCrop, TopCrop, BottomCrop], default: CropRel() ) –

    Cropping values to apply before descale. The ratio descale_height / source_height is preserved even after descale. The cropped area is restored when calling the upscale property.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Pixel shifts to apply during descale and upscale. Defaults to (0, 0).

  • field_based

    (FieldBasedLike | bool | None, default: None ) –

    Whether the input is cross-converted or interlaced content.

  • border_handling

    (int | BorderHandling, default: MIRROR ) –

    Adjusts how the clip is padded internally during the scaling process. Accepted values are:

    • 0 (MIRROR): Assume the image was resized with mirror padding.
    • 1 (ZERO): Assume the image was resized with zero padding.
    • 2 (EXTEND): Assume the image was resized with extend padding, where the outermost row was extended infinitely far.

    Defaults to 0.
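
These modes can be pictured as a plain IntEnum; the sketch below mirrors how the constructor normalizes int | BorderHandling via BorderHandling(int(border_handling)) (the real enum ships with the vskernels/vsscale ecosystem and may carry extra behaviour):

```python
from enum import IntEnum

class BorderHandling(IntEnum):
    """Sketch of the padding modes accepted by Rescale."""
    MIRROR = 0  # image was resized with mirror padding
    ZERO = 1    # image was resized with zero padding
    EXTEND = 2  # outermost row/column extended infinitely far

def normalize(border_handling):
    # Same normalization the constructor performs on its argument.
    return BorderHandling(int(border_handling))
```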

Methods:

  • default_credit_mask

    Load a credit mask by building a difference mask between the source and rescaled clips.

  • default_line_mask

    Load a default Kirsch line mask into the class instance and return it.

Attributes:

  • credit_mask (VideoNode) –

    Gets the credit mask to be applied on the upscaled clip.

  • descale

    Gets the descaled clip.

  • descale_args

    Descale arguments. See ScalingArgs

  • doubled

    Gets the doubled clip.

  • ignore_mask (VideoNode) –

    Gets the ignore mask to be applied on the descaled clip.

  • line_mask (VideoNode) –

    Gets the lineart mask to be applied on the upscaled clip.

  • rescale

    Gets the rescaled clip.

  • upscale

    Gets the upscaled clip.

Source code in vsscale/rescale.py
def __init__(
    self,
    clip: vs.VideoNode,
    /,
    height: int | float,
    kernel: KernelLike,
    upscaler: ScalerLike = ArtCNN,
    downscaler: ScalerLike = Hermite(linear=True),
    width: int | float | None = None,
    base_height: int | None = None,
    base_width: int | None = None,
    crop: tuple[LeftCrop, RightCrop, TopCrop, BottomCrop] = CropRel(),
    shift: tuple[TopShift, LeftShift] = (0, 0),
    field_based: FieldBasedLike | bool | None = None,
    border_handling: int | BorderHandling = BorderHandling.MIRROR,
    **kwargs: Any,
) -> None:
    """
    Initialize the rescaling process.

    Args:
        clip: Clip to be rescaled.
        height: Height to be descaled to. If passed as a float, a fractional descale is performed.
        kernel: Kernel used for descaling.
        upscaler: Scaler that supports doubling. Defaults to ``ArtCNN``.
        downscaler: Scaler used to downscale the upscaled clip back to input resolution. Defaults to
            ``Hermite(linear=True)``.
        width: Width to be descaled to. If ``None``, it is automatically calculated from the height.
        base_height: Integer height to contain the clip within. If ``None``, it is automatically calculated from the
            height.
        base_width: Integer width to contain the clip within. If ``None``, it is automatically calculated from the
            width.
        crop: Cropping values to apply before descale. The ratio ``descale_height / source_height`` is preserved
            even after descale. The cropped area is restored when calling the ``upscale`` property.
        shift: Pixel shifts to apply during descale and upscale. Defaults to ``(0, 0)``.
        field_based: Whether the input is cross-converted or interlaced content.
        border_handling: Adjusts how the clip is padded internally during the scaling process.
            Accepted values are:

               - ``0`` (MIRROR): Assume the image was resized with mirror padding.
               - ``1`` (ZERO):   Assume the image was resized with zero padding.
               - ``2`` (EXTEND): Assume the image was resized with extend padding,
                 where the outermost row was extended infinitely far.

            Defaults to ``0``.
    """
    self._line_mask: vs.VideoNode | None = None
    self._credit_mask: vs.VideoNode | None = None
    self._ignore_mask: vs.VideoNode | None = None
    self._crop = crop
    self._pre = clip

    self.descale_args = ScalingArgs.from_args(
        clip, height, width, base_height, base_width, shift[0], shift[1], crop, mode="hw"
    )

    super().__init__(clip, kernel, upscaler, downscaler, field_based, border_handling, **kwargs)

    if self._crop > (0, 0, 0, 0):
        self._clipy = self._clipy.std.Crop(*self._crop)

credit_mask deletable property writable

credit_mask: VideoNode

Gets the credit mask to be applied on the upscaled clip.

descale class-attribute instance-attribute

descale = cachedproperty[VideoNode, VideoNode](
    lambda self: _generate_descale(_clipy),
    lambda self, value: update_cache(self, "descale", value),
    lambda self: clear_cache(
        self, ["descale", "rescale", "doubled", "upscale"]
    ),
)

Gets the descaled clip.

descale_args instance-attribute

descale_args = from_args(
    clip,
    height,
    width,
    base_height,
    base_width,
    shift[0],
    shift[1],
    crop,
    mode="hw",
)

Descale arguments. See ScalingArgs

doubled class-attribute instance-attribute

doubled = cachedproperty[VideoNode, VideoNode](
    lambda self: _generate_doubled(descale),
    lambda self, value: update_cache(self, "doubled", value),
    lambda self: clear_cache(self, ["doubled", "upscale"]),
)

Gets the doubled clip.

ignore_mask deletable property writable

ignore_mask: VideoNode

Gets the ignore mask to be applied on the descaled clip.

line_mask deletable property writable

line_mask: VideoNode

Gets the lineart mask to be applied on the upscaled clip.

rescale class-attribute instance-attribute

rescale = cachedproperty[VideoNode, VideoNode](
    lambda self: _generate_rescale(descale),
    lambda self, value: update_cache(self, "rescale", value),
    lambda self: clear_cache(self, "rescale"),
)

Gets the rescaled clip.

upscale class-attribute instance-attribute

upscale = cachedproperty[VideoNode, VideoNode](
    lambda self: CopyFrameProps(
        join([_generate_upscale(doubled), *(_chroma)]),
        _clipy,
        "_ChromaLocation",
    ),
    lambda self, value: update_cache(self, "upscale", value),
    lambda self: clear_cache(self, "upscale"),
)

Gets the upscaled clip.
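
As the cachedproperty definitions above suggest, each clip is computed lazily and, when an upstream clip is replaced (e.g. rs.doubled = dehalo), everything derived from it is invalidated. A minimal, hypothetical sketch of that caching pattern:

```python
class DependentCache:
    """Hypothetical sketch of the descale -> rescale/doubled -> upscale invalidation chain."""

    _deps = {
        "descale": ("rescale", "doubled", "upscale"),
        "doubled": ("upscale",),
        "rescale": (),
        "upscale": (),
    }

    def __init__(self):
        self._cache = {}

    def get(self, name, compute):
        # Lazily compute and memoize, like cachedproperty's getter.
        if name not in self._cache:
            self._cache[name] = compute()
        return self._cache[name]

    def set(self, name, value):
        # Replacing an upstream clip drops everything derived from it,
        # mirroring the clear_cache(...) calls in the setters above.
        for dep in self._deps[name]:
            self._cache.pop(dep, None)
        self._cache[name] = value
```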

default_credit_mask

default_credit_mask(
    rescale: VideoNode | None = None,
    src: VideoNode | None = None,
    thr: float = 0.216,
    expand: int = 4,
    ranges: FrameRangeN | FrameRangesN | None = None,
    stabilize: bool = True,
    scenechanges: Iterable[int] | None = None,
    **kwargs: Any
) -> VideoNode

Load a credit mask by building a difference mask between the source and rescaled clips.

Parameters:

  • rescale

    (VideoNode | None, default: None ) –

    Rescaled clip, defaults to rescaled instance clip.

  • src

    (VideoNode | None, default: None ) –

    Source clip, defaults to source instance clip.

  • thr

    (float, default: 0.216 ) –

    Threshold of the amplification expr, defaults to 0.216.

  • expand

    (int, default: 4 ) –

    Additional expand radius applied to the mask, defaults to 4.

  • ranges

    (FrameRangeN | FrameRangesN | None, default: None ) –

    If specified, ranges to apply the credit clip to.

  • stabilize

    (bool, default: True ) –

    Try to stabilize the mask by applying a temporal convolution and then binarizing it with a threshold. Only works when ranges are specified.

  • scenechanges

    (Iterable[int] | None, default: None ) –

    Explicit list of scenechange frames for stabilizing the mask.

  • **kwargs

    (Any, default: {} ) –

    Additional keyword arguments for based_diff_mask

Returns:

  • VideoNode

    Generated mask.
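
Conceptually, the mask marks pixels where the source and rescaled clips disagree by more than thr, then grows the result by expand. A toy sketch of those two steps on plain 2D lists (illustrative only; the real based_diff_mask operates on clips and does considerably more):

```python
def diff_mask(src, rescaled, thr=0.216):
    """Binarize the absolute difference between two same-sized 2D float arrays."""
    return [
        [1 if abs(a - b) > thr else 0 for a, b in zip(row_s, row_r)]
        for row_s, row_r in zip(src, rescaled)
    ]

def expand_mask(mask, radius=4):
    """Grow the mask with a square maximum filter of the given radius."""
    h, w = len(mask), len(mask[0])
    return [
        [
            max(
                mask[yy][xx]
                for yy in range(max(0, y - radius), min(h, y + radius + 1))
                for xx in range(max(0, x - radius), min(w, x + radius + 1))
            )
            for x in range(w)
        ]
        for y in range(h)
    ]
```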

Source code in vsscale/rescale.py
def default_credit_mask(
    self,
    rescale: vs.VideoNode | None = None,
    src: vs.VideoNode | None = None,
    thr: float = 0.216,
    expand: int = 4,
    ranges: FrameRangeN | FrameRangesN | None = None,
    stabilize: bool = True,
    scenechanges: Iterable[int] | None = None,
    **kwargs: Any,
) -> vs.VideoNode:
    """
    Load a credit mask by making a difference mask between src and rescaled clips

    Args:
        rescale: Rescaled clip, defaults to rescaled instance clip.
        src: Source clip, defaults to source instance clip.
        thr: Threshold of the amplification expr, defaults to 0.216.
        expand: Additional expand radius applied to the mask, defaults to 4.
        ranges: If specified, ranges to apply the credit clip to.
        stabilize: Try to stabilize the mask by applying a temporal convolution and then binarized by a threshold.
            Only works when there are ranges specified.
        scenechanges: Explicit list of scenechange frames for stabilizing the mask.
        **kwargs: Additional keyword arguments for [based_diff_mask][vsmasktools.based_diff_mask]

    Returns:
        Generated mask.
    """
    if not src:
        src = self._clipy
    if not rescale:
        rescale = self.rescale

    src, rescale = get_y(src), get_y(rescale)

    credit_mask = based_diff_mask(src, rescale, thr=thr, expand=expand, func=self.default_credit_mask, **kwargs)

    if ranges is not None:
        if stabilize:
            credit_mask = stabilize_mask(credit_mask, 3, ranges, scenechanges, func=self.default_credit_mask)

        credit_mask = replace_ranges(credit_mask.std.BlankClip(keep=True), credit_mask, ranges)

    self.credit_mask = credit_mask

    return self.credit_mask

default_line_mask

default_line_mask(
    clip: VideoNode | None = None, scaler: ScalerLike = Bilinear, **kwargs: Any
) -> VideoNode

Load a default Kirsch line mask into the class instance and return it.

Parameters:

  • clip

    (VideoNode | None, default: None ) –

    Reference clip, defaults to doubled clip if None.

  • scaler

    (ScalerLike, default: Bilinear ) –

    Scaler used for matching the source clip format, defaults to Bilinear.

Returns:

  • VideoNode

    Generated mask.
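
The Kirsch operator behind this mask takes the maximum response over eight rotated compass kernels, each weighting three consecutive neighbours with 5 and the remaining five with -3. A pure-Python sketch (illustrative only, not the vsmasktools implementation):

```python
def kirsch(img):
    """Maximum response over the 8 rotated Kirsch compass kernels,
    applied to a small 2D list with edge replication."""
    # Ring of 8 neighbour offsets in clockwise order.
    ring = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = len(img), len(img[0])

    def px(y, x):  # replicate edges
        return img[min(max(y, 0), h - 1)][min(max(x, 0), w - 1)]

    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [px(y + dy, x + dx) for dy, dx in ring]
            # Each rotation puts weight 5 on three consecutive ring cells, -3 on the rest.
            out[y][x] = max(
                5 * sum(vals[(i + j) % 8] for j in range(3))
                - 3 * sum(vals[(i + j) % 8] for j in range(3, 8))
                for i in range(8)
            )
    return out
```

The std.Maximum().std.Minimum() chain in the source then closes small gaps in the detected lineart before the mask is scaled back to the source resolution.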

Source code in vsscale/rescale.py
def default_line_mask(
    self, clip: vs.VideoNode | None = None, scaler: ScalerLike = Bilinear, **kwargs: Any
) -> vs.VideoNode:
    """
    Load a default Kirsch line mask in the class instance. Additionally, it is returned.

    Args:
        clip: Reference clip, defaults to doubled clip if None.
        scaler: Scaler used for matching the source clip format, defaults to Bilinear.

    Returns:
        Generated mask.
    """
    scaler = Scaler.ensure_obj(scaler)
    scale_kwargs = scaler.kwargs if clip else self.descale_args.kwargs(self.doubled) | scaler.kwargs

    clip = clip if clip else self.doubled

    line_mask = Kirsch.edgemask(clip, **kwargs).std.Maximum().std.Minimum()
    line_mask = scaler.scale(
        line_mask, self._clipy.width, self._clipy.height, format=self._clipy.format, **scale_kwargs
    )

    self.line_mask = line_mask

    return self.line_mask

RescaleBase

RescaleBase(
    clip: VideoNode,
    /,
    kernel: KernelLike,
    upscaler: ScalerLike = ArtCNN,
    downscaler: ScalerLike = Hermite(linear=True),
    field_based: FieldBasedLike | bool | None = None,
    border_handling: int | BorderHandling = MIRROR,
    **kwargs: Any,
)

Bases: VSObjectABC

Base class for the Rescale wrapper.

Source code in vsscale/rescale.py
def __init__(
    self,
    clip: vs.VideoNode,
    /,
    kernel: KernelLike,
    upscaler: ScalerLike = ArtCNN,
    downscaler: ScalerLike = Hermite(linear=True),
    field_based: FieldBasedLike | bool | None = None,
    border_handling: int | BorderHandling = BorderHandling.MIRROR,
    **kwargs: Any,
) -> None:
    self._clipy, *chroma = split(clip)
    self._chroma = chroma

    self._kernel = Kernel.ensure_obj(kernel)
    self._upscaler = Scaler.ensure_obj(upscaler)

    self._downscaler = Scaler.ensure_obj(downscaler)

    self._field_based = FieldBased.from_param(field_based)

    self._border_handling = BorderHandling(int(border_handling))

    self.__add_props = kwargs.get("_add_props")

descale class-attribute instance-attribute

descale = cachedproperty[VideoNode, VideoNode](
    lambda self: _generate_descale(_clipy),
    lambda self, value: update_cache(self, "descale", value),
    lambda self: clear_cache(
        self, ["descale", "rescale", "doubled", "upscale"]
    ),
)

Gets the descaled clip.

descale_args instance-attribute

descale_args: ScalingArgs

Descale arguments. See ScalingArgs

doubled class-attribute instance-attribute

doubled = cachedproperty[VideoNode, VideoNode](
    lambda self: _generate_doubled(descale),
    lambda self, value: update_cache(self, "doubled", value),
    lambda self: clear_cache(self, ["doubled", "upscale"]),
)

Gets the doubled clip.

rescale class-attribute instance-attribute

rescale = cachedproperty[VideoNode, VideoNode](
    lambda self: _generate_rescale(descale),
    lambda self, value: update_cache(self, "rescale", value),
    lambda self: clear_cache(self, "rescale"),
)

Gets the rescaled clip.

upscale class-attribute instance-attribute

upscale = cachedproperty[VideoNode, VideoNode](
    lambda self: CopyFrameProps(
        join([_generate_upscale(doubled), *(_chroma)]),
        _clipy,
        "_ChromaLocation",
    ),
    lambda self, value: update_cache(self, "upscale", value),
    lambda self: clear_cache(self, "upscale"),
)

Gets the upscaled clip.