
regress

Classes:

  • ChromaReconstruct

    Class to ease the creation and usage of chroma reconstruction

  • GenericChromaRecon

    Generic ChromaReconstruct which implements base functions.

  • MissingFieldsChromaRecon

    Base helper class for reconstructing chroma with missing fields.

  • PAWorksChromaRecon

    Chroma reconstructor for 720p PAWorks chroma which goes through the following mangling process:

  • Point422ChromaRecon

    Demangler for content that has gone from 4:4:4 => 4:2:2 with point, then to 4:2:0 with some neutral scaler.

  • ReconDiffMode

    Enum for configuring a reconstruction difference mode.

  • ReconOutput

    Enum to decide what combination of luma-chroma to output in ChromaReconstruct

  • Regression

    Class for math operations on a clip.

ChromaReconstruct dataclass

ChromaReconstruct(
    *, kernel: KernelLike = Catrom, scaler: ScalerLike | None = None
)

Bases: ABC

Class to ease the creation and usage of chroma reconstruction based on linear regression between luma-demangled luma and chroma-demangled chroma.

The reconstruction depends on the following plugin:
  • https://github.com/Jaded-Encoding-Thaumaturgy/vapoursynth-reconstruct
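
A minimal usage sketch with one of the concrete reconstructors documented further down this page. The import path below is an assumption; adjust it to wherever this regress module is exposed in your environment.

import vapoursynth as vs

# Assumed import location for the classes on this page; adjust to your setup.
from vsdenoise import Point422ChromaRecon

core = vs.core

# Stand-in for a real 1080p 4:2:0 source.
src = core.std.BlankClip(format=vs.YUV420P16, width=1920, height=1080)

# Content produced at 720p whose chroma went 4:4:4 -> 4:2:2 (point) -> 4:2:0.
recon = Point422ChromaRecon(native_res=720)

# Reconstructed chroma, output as 1080p 4:4:4 (this class' default out_mode).
fixed = recon.reconstruct(src)

# Inspect the mangled/demangled intermediates to verify shifts are correct.
steps = recon.debug(src)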

Methods:

  • debug

    In 'debug' mode you can see the various steps of mangled and demangled planes.

  • demangle_chroma

    Return the demangled chroma at the best quality you can.

  • demangle_luma

    Return the demangled luma. You may use y_base to limit the damage done in get_mangled_luma.

  • get_base_clip

    Get the base clip to which the linear regression will be applied.

  • get_chroma_shift
  • get_mangled_luma

    Return the mangled luma to the base resolution of the content.

  • reconstruct

    Run the actual reconstructing implemented in this class.

Attributes:

  • kernel (KernelLike) –

    Base kernel used to shift/scale luma and chroma planes.

  • scaler (ScalerLike | None) –

    Base scaler used to shift/scale luma and chroma planes.

kernel class-attribute instance-attribute

kernel: KernelLike = field(default=Catrom, kw_only=True)

Base kernel used to shift/scale luma and chroma planes.

scaler class-attribute instance-attribute

scaler: ScalerLike | None = field(default=None, kw_only=True)

Base scaler used to shift/scale luma and chroma planes.

debug

debug(clip: VideoNode, *args: Any, **kwargs: Any) -> tuple[VideoNode, ...]

In 'debug' mode you can see the various steps of mangled and demangled planes.

Useful to determine whether shifts and the like are correct.

The *args and **kwargs don't do anything; they exist only so reconstruct can be hot-swapped with this method without removing other arguments.

Source code
@inject_self.init_kwargs
def debug(self, clip: vs.VideoNode, *args: Any, **kwargs: Any) -> tuple[vs.VideoNode, ...]:
    """
    In 'debug' mode you can see the various steps of mangled and demangled planes.

    Useful to determine if shifts and the sort are correct.

    The *args, **kwargs don't do anything and are there just to be able to
    hotswap reconstruct with this method without removing other arguments.
    """
    y, y_base, y_m, y_dm, chroma_base, chroma_dm = self._get_bases(clip, False, self.debug)

    return y, y_base, y_dm, *flatten(zip(chroma_base, chroma_dm))

demangle_chroma abstractmethod

demangle_chroma(mangled: VideoNode, y_base: VideoNode) -> VideoNode

Return the demangled chroma at the best quality you can.

Assumes that the resolution matches y_base.

Source code
@abstractmethod
def demangle_chroma(self, mangled: vs.VideoNode, y_base: vs.VideoNode) -> vs.VideoNode:
    """
    Return the demangled chroma at the best quality you can.

    Assumes that the resolution matches ``y_base``.
    """

demangle_luma abstractmethod

demangle_luma(mangled: VideoNode, y_base: VideoNode) -> VideoNode

Return the demangled luma. You may use y_base to limit the damage done in get_mangled_luma, but it is important that some artifacting from demangling chroma in demangle_chroma remains, be it blurring or interpolator artifacts (like SangNom's random bright/dark pixels).

Assumes that the resolution matches y_base.

Source code
@abstractmethod
def demangle_luma(self, mangled: vs.VideoNode, y_base: vs.VideoNode) -> vs.VideoNode:
    """
    Return the demangled luma. You may use the y_base to limit the damage that was done in ``get_mangled_luma``
    but it is important that some artifacting from demangling chroma in ``demangle_chroma`` remains.

    May it be blurring or the interpolator artifacts (like SangNom random bright/dark pixels).

    Assumes that the resolutions matches ``y_base``.
    """

get_base_clip abstractmethod

get_base_clip(clip: VideoNode) -> VideoNode

Get the base clip to which the linear regression will be applied.

Needs to be the native resolution the content was produced at. Additionally, chroma needs to be scaled to 444 for later comparison and overshoot/undershoot protection.

For example, if the anime is 720p native, this function needs to output 720p 4:4:4. Later, chroma will be upscaled to at most this resolution and then upscaled/downscaled to 4:2:0/4:4:4 based on out_mode in reconstruct.

Source code
@abstractmethod
def get_base_clip(self, clip: vs.VideoNode) -> vs.VideoNode:
    """
    Get the base clip on which the linear regression will be applied to.

    Needs to be the native resolution the content was produced at.
    Additionally, chroma needs to be scaled to 444 for later comparison
    and overshoot/undershoot protection.

    For example, if the anime is 720p native, this function needs to output 720p 4:4:4.
    Later, chroma will be upscaled at this resolution maximum and
    will be upscaled/downscaled to 420/444 based on ``out_mode`` in ``reconstruct``.
    """

get_chroma_shift

get_chroma_shift(y_width: int, c_width: int) -> float
Source code
def get_chroma_shift(self, y_width: int, c_width: int) -> float:
    return 0.5 * c_width / y_width
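
A quick worked example of the formula above: for 1080p 4:2:0 content the chroma planes are 960 pixels wide against 1920 luma pixels, giving a shift of 0.25.

# Worked example of get_chroma_shift (plain arithmetic, no VapourSynth needed).
y_width, c_width = 1920, 960
shift = 0.5 * c_width / y_width
print(shift)  # 0.25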

get_mangled_luma abstractmethod

get_mangled_luma(clip: VideoNode, y_base: VideoNode) -> VideoNode

Return the mangled luma to the base resolution of the content.

Chroma might have been further mangled or can be better demangled, but this method assumes that the luma will be taken at the same resolution as the INPUT clip.

So, for example, at 1080p 4:2:0 this method should return luma mangled as if the chroma were at 960x540, EVEN IF the native resolution is lower.

Source code
@abstractmethod
def get_mangled_luma(self, clip: vs.VideoNode, y_base: vs.VideoNode) -> vs.VideoNode:
    """
    Return the mangled luma to the base resolution of the content.

    Chroma might have been further mangled or can be better demangled,
    but this method assumes that the luma will be taken as the same resolution as the INPUT clip.

    So, for example, at 1080p 4:2:0 this method should return mangled luma like chroma was at 960x540.
    EVEN IF the native resolution is lower.
    """

reconstruct

reconstruct(
    clip: VideoNode,
    sigma: float,
    radius: int,
    diff_mode: ReconDiffMode | ReconDiffModeConf,
    out_mode: ReconOutput | bool | None,
    include_edges: bool,
    lin_cutoff: float = 0.0,
    **kwargs: Any
) -> VideoNode

Run the actual reconstructing implemented in this class.

Parameters:

  • clip

    (VideoNode) –

    Input clip. Must be YUV.

  • sigma

    (float) –

    Sigma for the gaussian blur of the weights; a higher value helps dampen wrong directions.

  • radius

    (int) –

    Radius of the reconstruction window. Higher values are more stable but also less sharp and adhere less to luma.

  • diff_mode

    (ReconDiffMode | ReconDiffModeConf) –

    The mode used to apply the difference, calculated with linear regression, to the mangled chroma. Check ReconDiffMode to know what each mode means.

  • out_mode

    (ReconOutput | bool | None) –

    The luma/chroma output combination.

  • include_edges

    (bool) –

    Forcefully include all luma edges in the weighting.

  • lin_cutoff

    (float, default: 0.0 ) –

    Cutoff, or weight, in the linear regression.

Returns:

  • VideoNode

    Clip with demangled chroma.

Source code
@inject_self.init_kwargs
def reconstruct(
    self,
    clip: vs.VideoNode,
    sigma: float,
    radius: int,
    diff_mode: ReconDiffMode | ReconDiffModeConf,
    out_mode: ReconOutput | bool | None,
    include_edges: bool,
    lin_cutoff: float = 0.0,
    **kwargs: Any,
) -> vs.VideoNode:
    """
    Run the actual reconstructing implemented in this class.

    Args:
        clip: Input clip. Must be YUV.
        sigma: Sigma for gaussian blur of weights, higher value is useful to dampen wrong directions.
        radius: Radius of the reconstruct window. Higher will be more stable but also less sharp and will adhere
            less to luma.
        diff_mode: The mode used to apply the difference, calculated with linear regression, to the mangled
            chroma. Check ``ReconDiffMode`` to know what each mode means.
        out_mode: The luma/chroma output combination.
        include_edges: Forcefully include all luma edges in the weighting.
        lin_cutoff: Cutoff, or weight, in the linear regression.

    Returns:
        Clip with demangled chroma.
    """

    y, y_base, y_m, y_dm, chroma_base, chroma_dm = self._get_bases(clip, include_edges, self.reconstruct)

    reg = Regression.from_param(Regression.BlurConf(gauss_blur, sigma=sigma))  # pyright: ignore

    if not isinstance(diff_mode, ReconDiffModeConf):
        diff_mode = diff_mode()

    chroma_regs = reg.linear([y_dm, *chroma_dm], lin_cutoff, diff_mode.inter_scale)

    y_diff = norm_expr((y_base, y_dm), "x y -", func=self.reconstruct)

    y_diffxb = gauss_blur(
        norm_expr((y_base, y_dm), f"x y / {reg.eps} 1 clamp"), diff_mode.diff_sigma, func=self.reconstruct
    )

    fixup = (
        y_diff.recon.Reconstruct(  # type: ignore
            reg.slope,
            reg.correlation,
            radius=radius,
            intercept=(None if diff_mode.inter_scale == 0.0 else reg.intercept),
        )
        for reg in chroma_regs
    )

    fixed_chroma = (
        norm_expr((dm, fix, y_diffxb, base), diff_mode.mode.value, func=self.reconstruct)
        for dm, fix, base in zip(chroma_dm, fixup, chroma_base)
    )

    out_mode = ReconOutput.from_param(out_mode)

    top_shift = left_shift = 0.0

    if out_mode == ReconOutput.i420:
        left_shift = -self.get_chroma_shift(y.width, y_m.height)
    elif include_edges:
        top_shift = left_shift = 0.125 / 2

    shifted_chroma = (self._kernel.shift(p, (top_shift, left_shift)) for p in fixed_chroma)

    if out_mode != ReconOutput.NATIVE:
        y_base, targ_sizes = y, (clip.width, clip.height)

        if out_mode == ReconOutput.i420:
            targ_sizes = tuple[int, int](targ_size // 2 for targ_size in targ_sizes)  # type: ignore

        shifted_chroma = (self._scaler.scale(p, *targ_sizes) for p in shifted_chroma)

    return depth(join(y_base, *shifted_chroma), clip)

GenericChromaRecon dataclass

GenericChromaRecon(
    native_res: int | float | None = None,
    native_kernel: KernelLike = Catrom,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    src_left: float = 0.5,
    src_top: float = 0.0
)

Bases: ChromaReconstruct

Generic ChromaReconstruct which implements base functions.

Not recommended for use without customizing the mangling/demangling.
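
A sketch of the kind of customization meant here, overriding only the chroma demangling with a different interpolator. The import paths are assumptions, and NNEDI3 merely stands in for whatever upscaler fits your source.

from dataclasses import dataclass

import vapoursynth as vs

from vsaa import NNEDI3
# Assumed import location; adjust to your setup.
from vsdenoise import GenericChromaRecon


@dataclass
class MyChromaRecon(GenericChromaRecon):
    def demangle_chroma(self, mangled: vs.VideoNode, y_base: vs.VideoNode) -> vs.VideoNode:
        # Upscale the mangled chroma back to the base resolution with NNEDI3
        # instead of the base kernel, keeping the same chroma shift.
        return NNEDI3().scale(
            mangled, y_base.width, y_base.height, (0, self.get_chroma_shift(y_base.width, mangled.width))
        )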

Methods:

Attributes:

kernel class-attribute instance-attribute

kernel: KernelLike = field(default=Catrom, kw_only=True)

Base kernel used to shift/scale luma and chroma planes.

native_kernel class-attribute instance-attribute

native_kernel: KernelLike = Catrom

Native kernel of the show.

native_res class-attribute instance-attribute

native_res: int | float | None = None

Native resolution of the show.

scaler class-attribute instance-attribute

scaler: ScalerLike | None = field(default=None, kw_only=True)

Base scaler used to shift/scale luma and chroma planes.

src_left class-attribute instance-attribute

src_left: float = field(default=0.5, kw_only=True)

Base left shift of the interpolator. If using a base vsaa scaler, this will be internally compensated.

src_top class-attribute instance-attribute

src_top: float = field(default=0.0, kw_only=True)

Base top shift of the interpolator.

debug

debug(clip: VideoNode, *args: Any, **kwargs: Any) -> tuple[VideoNode, ...]

In 'debug' mode you can see the various steps of mangled and demangled planes.

Useful to determine whether shifts and the like are correct.

The *args and **kwargs don't do anything; they exist only so reconstruct can be hot-swapped with this method without removing other arguments.

Source code
@inject_self.init_kwargs
def debug(self, clip: vs.VideoNode, *args: Any, **kwargs: Any) -> tuple[vs.VideoNode, ...]:
    """
    In 'debug' mode you can see the various steps of mangled and demangled planes.

    Useful to determine if shifts and the sort are correct.

    The *args, **kwargs don't do anything and are there just to be able to
    hotswap reconstruct with this method without removing other arguments.
    """
    y, y_base, y_m, y_dm, chroma_base, chroma_dm = self._get_bases(clip, False, self.debug)

    return y, y_base, y_dm, *flatten(zip(chroma_base, chroma_dm))

demangle_chroma

demangle_chroma(mangled: VideoNode, y_base: VideoNode) -> VideoNode
Source code
def demangle_chroma(self, mangled: vs.VideoNode, y_base: vs.VideoNode) -> vs.VideoNode:
    return self._kernel.scale(
        mangled, y_base.width, y_base.height, (0, self.get_chroma_shift(y_base.width, mangled.width))
    )

demangle_luma

demangle_luma(mangled: VideoNode, y_base: VideoNode) -> VideoNode
Source code
def demangle_luma(self, mangled: vs.VideoNode, y_base: vs.VideoNode) -> vs.VideoNode:
    src_left, self.src_left = self.src_left, self.src_left - 0.25
    luma = self.demangle_chroma(mangled, y_base)
    self.src_left = src_left
    return luma

get_base_clip

get_base_clip(clip: VideoNode) -> VideoNode
Source code
def get_base_clip(self, clip: vs.VideoNode) -> vs.VideoNode:
    if self.native_res is None:
        return self._kernel.resample(clip, vs.YUV444PS)

    de_args = ScalingArgs.from_args(clip, self.native_res)

    descale = self._native_kernel.descale(clip, de_args.width, de_args.height, **de_args.kwargs())

    return join(
        self._kernel.shift(descale, de_args.src_top / 2, -de_args.src_left / 2),
        self._scaler.scale(clip, de_args.width, de_args.height, format=vs.YUV444PS),
    )

get_chroma_shift

get_chroma_shift(y_width: int, c_width: int) -> float
Source code
def get_chroma_shift(self, y_width: int, c_width: int) -> float:
    return 0.5 * c_width / y_width

get_mangled_luma

get_mangled_luma(clip: VideoNode, y_base: VideoNode) -> VideoNode
Source code
def get_mangled_luma(self, clip: vs.VideoNode, y_base: vs.VideoNode) -> vs.VideoNode:
    c_width, c_height = get_plane_sizes(clip, 1)

    return Catrom().scale(y_base, c_width, c_height, (0, -0.5 + self.get_chroma_shift(clip.width, c_width)))

reconstruct

reconstruct(
    clip: VideoNode,
    sigma: float = 1.5,
    radius: int = 2,
    diff_mode: ReconDiffMode | ReconDiffModeConf = MEAN,
    out_mode: ReconOutput | bool | None = i420,
    include_edges: bool = False,
    lin_cutoff: float = 0.0,
    **kwargs: Any
) -> VideoNode
Source code
@inject_self.init_kwargs
def reconstruct(
    self,
    clip: vs.VideoNode,
    sigma: float = 1.5,
    radius: int = 2,
    diff_mode: ReconDiffMode | ReconDiffModeConf = ReconDiffMode.MEAN,
    out_mode: ReconOutput | bool | None = ReconOutput.i420,
    include_edges: bool = False,
    lin_cutoff: float = 0.0,
    **kwargs: Any,
) -> vs.VideoNode:
    return super().reconstruct(clip, sigma, radius, diff_mode, out_mode, include_edges, lin_cutoff)

MissingFieldsChromaRecon dataclass

MissingFieldsChromaRecon(
    native_res: int | float | None = None,
    native_kernel: KernelLike = Catrom,
    dm_wscaler: ScalerLike = NNEDI3,
    dm_hscaler: ScalerLike | None = NNEDI3,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    src_left: float = 0.5,
    src_top: float = 0.0
)

Bases: GenericChromaRecon

Base helper class for reconstructing chroma with missing fields.

Methods:

Attributes:

dm_hscaler class-attribute instance-attribute

dm_hscaler: ScalerLike | None = NNEDI3

Scaler used to interpolate the height.

dm_wscaler class-attribute instance-attribute

dm_wscaler: ScalerLike = NNEDI3

Scaler used to interpolate the width/height.

kernel class-attribute instance-attribute

kernel: KernelLike = field(default=Catrom, kw_only=True)

Base kernel used to shift/scale luma and chroma planes.

native_kernel class-attribute instance-attribute

native_kernel: KernelLike = Catrom

Native kernel of the show.

native_res class-attribute instance-attribute

native_res: int | float | None = None

Native resolution of the show.

scaler class-attribute instance-attribute

scaler: ScalerLike | None = field(default=None, kw_only=True)

Base scaler used to shift/scale luma and chroma planes.

src_left class-attribute instance-attribute

src_left: float = field(default=0.5, kw_only=True)

Base left shift of the interpolator. If using a base vsaa scaler, this will be internally compensated.

src_top class-attribute instance-attribute

src_top: float = field(default=0.0, kw_only=True)

Base top shift of the interpolator.

debug

debug(clip: VideoNode, *args: Any, **kwargs: Any) -> tuple[VideoNode, ...]

In 'debug' mode you can see the various steps of mangled and demangled planes.

Useful to determine whether shifts and the like are correct.

The *args and **kwargs don't do anything; they exist only so reconstruct can be hot-swapped with this method without removing other arguments.

Source code
@inject_self.init_kwargs
def debug(self, clip: vs.VideoNode, *args: Any, **kwargs: Any) -> tuple[vs.VideoNode, ...]:
    """
    In 'debug' mode you can see the various steps of mangled and demangled planes.

    Useful to determine if shifts and the sort are correct.

    The *args, **kwargs don't do anything and are there just to be able to
    hotswap reconstruct with this method without removing other arguments.
    """
    y, y_base, y_m, y_dm, chroma_base, chroma_dm = self._get_bases(clip, False, self.debug)

    return y, y_base, y_dm, *flatten(zip(chroma_base, chroma_dm))

demangle_chroma

demangle_chroma(mangled: VideoNode, y_base: VideoNode) -> VideoNode
Source code
def demangle_chroma(self, mangled: vs.VideoNode, y_base: vs.VideoNode) -> vs.VideoNode:
    return self._kernel.scale(
        mangled, y_base.width, y_base.height, (0, self.get_chroma_shift(y_base.width, mangled.width))
    )

demangle_luma

demangle_luma(mangled: VideoNode, y_base: VideoNode) -> VideoNode
Source code
def demangle_luma(self, mangled: vs.VideoNode, y_base: vs.VideoNode) -> vs.VideoNode:
    src_left, self.src_left = self.src_left, self.src_left - 0.25
    luma = self.demangle_chroma(mangled, y_base)
    self.src_left = src_left
    return luma

get_base_clip

get_base_clip(clip: VideoNode) -> VideoNode
Source code
def get_base_clip(self, clip: vs.VideoNode) -> vs.VideoNode:
    if self.native_res is None:
        return self._kernel.resample(clip, vs.YUV444PS)

    de_args = ScalingArgs.from_args(clip, self.native_res)

    descale = self._native_kernel.descale(clip, de_args.width, de_args.height, **de_args.kwargs())

    return join(
        self._kernel.shift(descale, de_args.src_top / 2, -de_args.src_left / 2),
        self._scaler.scale(clip, de_args.width, de_args.height, format=vs.YUV444PS),
    )

get_chroma_shift

get_chroma_shift(y_width: int, c_width: int) -> float
Source code
def get_chroma_shift(self, y_width: int, c_width: int) -> float:
    return 0.5 * c_width / y_width

get_mangled_luma

get_mangled_luma(clip: VideoNode, y_base: VideoNode) -> VideoNode
Source code
def get_mangled_luma(self, clip: vs.VideoNode, y_base: vs.VideoNode) -> vs.VideoNode:
    c_width, c_height = get_plane_sizes(clip, 1)

    return Catrom().scale(y_base, c_width, c_height, (0, -0.5 + self.get_chroma_shift(clip.width, c_width)))

reconstruct

reconstruct(
    clip: VideoNode,
    sigma: float = 1.5,
    radius: int = 2,
    diff_mode: ReconDiffMode | ReconDiffModeConf = MEAN,
    out_mode: ReconOutput | bool | None = i420,
    include_edges: bool = False,
    lin_cutoff: float = 0.0,
    **kwargs: Any
) -> VideoNode
Source code
@inject_self.init_kwargs
def reconstruct(
    self,
    clip: vs.VideoNode,
    sigma: float = 1.5,
    radius: int = 2,
    diff_mode: ReconDiffMode | ReconDiffModeConf = ReconDiffMode.MEAN,
    out_mode: ReconOutput | bool | None = ReconOutput.i420,
    include_edges: bool = False,
    lin_cutoff: float = 0.0,
    **kwargs: Any,
) -> vs.VideoNode:
    return super().reconstruct(clip, sigma, radius, diff_mode, out_mode, include_edges, lin_cutoff)

PAWorksChromaRecon dataclass

PAWorksChromaRecon(
    native_res: int | float | None = None,
    native_kernel: KernelLike = Catrom,
    dm_wscaler: ScalerLike = lambda: SangNom(128)(),
    dm_hscaler: ScalerLike = NNEDI3,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    src_left: float = 0.5,
    src_top: float = 0.0
)

Bases: MissingFieldsChromaRecon

Chroma reconstructor for 720p PAWorks chroma which goes through the following mangling process:

  • Produced at 720p 4:4:4 => 720p 4:2:2 => 720p 4:4:4 with Point, so the chroma width gets halved into fields, and the lowest it got is 640x720.
  • => 1080p 4:4:4 => 1080p 4:2:2 => 1080p 4:2:0 with Catrom, so the width doesn't get affected, but it gets downscaled to 960x540.

Through this process, we know the lowest resolution the chroma reached is 640x540: 640 width from the Point 4:2:2 step and 540 height from the Catrom 4:2:0 step.

With this information we can implement this demangler as follows (see the usage sketch after this list):
  • get_base_clip: descale luma to 720p, upscale chroma to 720p.
  • get_mangled_luma: scale the descale to 640x720 (4:2:2 at 720p), then reupscale to 960x720 (the 1080p chroma width), thus removing field information, then downscale the height to 540 (4:2:0 at 1080p).
  • demangle_luma/demangle_chroma: downscale with point from 960x540 to 640x540, which is the lowest it got, to remove point-interpolated fields, then reupscale.

    In the case of luma, we also limit the mangling by clamping the difference of the demanglers to the original descaled luma, or details would just get crushed.
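
A usage sketch under this class' defaults (the import path is an assumption; adjust it to your setup):

import vapoursynth as vs

# Assumed import location; adjust to your setup.
from vsdenoise import PAWorksChromaRecon

core = vs.core

# Stand-in for a real 1080p 4:2:0 source.
src = core.std.BlankClip(format=vs.YUV420P16, width=1920, height=1080)

recon = PAWorksChromaRecon(native_res=720)

# Defaults for this class: sigma=2.0, radius=4, diff_mode=MEDIAN,
# out_mode=NATIVE, include_edges=True, so this returns 720p 4:4:4
# with the descaled luma and the reconstructed chroma.
out = recon.reconstruct(src)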

Methods:

Attributes:

dm_hscaler class-attribute instance-attribute

dm_hscaler: ScalerLike = NNEDI3

dm_wscaler class-attribute instance-attribute

dm_wscaler: ScalerLike = field(default_factory=lambda: SangNom(128))

kernel class-attribute instance-attribute

kernel: KernelLike = field(default=Catrom, kw_only=True)

Base kernel used to shift/scale luma and chroma planes.

native_kernel class-attribute instance-attribute

native_kernel: KernelLike = Catrom

Native kernel of the show.

native_res class-attribute instance-attribute

native_res: int | float | None = None

Native resolution of the show.

scaler class-attribute instance-attribute

scaler: ScalerLike | None = field(default=None, kw_only=True)

Base scaler used to shift/scale luma and chroma planes.

src_left class-attribute instance-attribute

src_left: float = field(default=0.5, kw_only=True)

Base left shift of the interpolator. If using a base vsaa scaler, this will be internally compensated.

src_top class-attribute instance-attribute

src_top: float = field(default=0.0, kw_only=True)

Base top shift of the interpolator.

debug

debug(clip: VideoNode, *args: Any, **kwargs: Any) -> tuple[VideoNode, ...]

In 'debug' mode you can see the various steps of mangled and demangled planes.

Useful to determine whether shifts and the like are correct.

The *args and **kwargs don't do anything; they exist only so reconstruct can be hot-swapped with this method without removing other arguments.

Source code
@inject_self.init_kwargs
def debug(self, clip: vs.VideoNode, *args: Any, **kwargs: Any) -> tuple[vs.VideoNode, ...]:
    """
    In 'debug' mode you can see the various steps of mangled and demangled planes.

    Useful to determine if shifts and the sort are correct.

    The *args, **kwargs don't do anything and are there just to be able to
    hotswap reconstruct with this method without removing other arguments.
    """
    y, y_base, y_m, y_dm, chroma_base, chroma_dm = self._get_bases(clip, False, self.debug)

    return y, y_base, y_dm, *flatten(zip(chroma_base, chroma_dm))

demangle_chroma

demangle_chroma(mangled: VideoNode, y_base: VideoNode) -> VideoNode
Source code
def demangle_chroma(self, mangled: vs.VideoNode, y_base: vs.VideoNode) -> vs.VideoNode:
    demangled = vs.core.resize.Point(mangled, y_base.width // 2, mangled.height)

    demangled = self._dm_wscaler.scale(demangled, mangled.width, y_base.height, (self.src_top, 0))
    demangled = self._dm_hscaler.scale(demangled, y_base.width, y_base.height, (0, self.src_left))

    return demangled

demangle_luma

demangle_luma(mangled: VideoNode, y_base: VideoNode) -> VideoNode
Source code
def demangle_luma(self, mangled: vs.VideoNode, y_base: vs.VideoNode) -> vs.VideoNode:
    a = self.demangle_chroma(mangled, y_base)

    y_base = self._kernel.shift(y_base, self.src_top, self.src_left)

    return limit_filter(a, y_base, a, thr=1, elast=4.5, bright_thr=10)

get_base_clip

get_base_clip(clip: VideoNode) -> VideoNode
Source code
def get_base_clip(self, clip: vs.VideoNode) -> vs.VideoNode:
    if self.native_res is None:
        return self._kernel.resample(clip, vs.YUV444PS)

    de_args = ScalingArgs.from_args(clip, self.native_res)

    descale = self._native_kernel.descale(clip, de_args.width, de_args.height, **de_args.kwargs())

    return join(
        self._kernel.shift(descale, de_args.src_top / 2, -de_args.src_left / 2),
        self._scaler.scale(clip, de_args.width, de_args.height, format=vs.YUV444PS),
    )

get_chroma_shift

get_chroma_shift(y_width: int, c_width: int) -> float
Source code
def get_chroma_shift(self, y_width: int, c_width: int) -> float:
    return 0.5 * c_width / y_width

get_mangled_luma

get_mangled_luma(clip: VideoNode, y_base: VideoNode) -> VideoNode
Source code
def get_mangled_luma(self, clip: vs.VideoNode, y_base: vs.VideoNode) -> vs.VideoNode:
    cm_width, _ = get_plane_sizes(y_base, 1)
    c_width, c_height = get_plane_sizes(clip, 1)

    point = Point()

    y_m = point.scale(y_base, cm_width // 2, y_base.height, (0, -1))
    y_m = point.scale(y_m, c_width, y_base.height, (0, -0.25))
    y_m = Catrom().scale(y_m, c_width, c_height)

    return y_m

reconstruct

reconstruct(
    clip: VideoNode,
    sigma: float = 2.0,
    radius: int = 4,
    diff_mode: ReconDiffMode | ReconDiffModeConf = MEDIAN,
    out_mode: ReconOutput | bool | None = NATIVE,
    include_edges: bool = True,
    lin_cutoff: float = 0.0,
    **kwargs: Any
) -> VideoNode
Source code
@inject_self.init_kwargs
def reconstruct(
    self,
    clip: vs.VideoNode,
    sigma: float = 2.0,
    radius: int = 4,
    diff_mode: ReconDiffMode | ReconDiffModeConf = ReconDiffMode.MEDIAN,
    out_mode: ReconOutput | bool | None = ReconOutput.NATIVE,
    include_edges: bool = True,
    lin_cutoff: float = 0.0,
    **kwargs: Any,
) -> vs.VideoNode:
    return super().reconstruct(clip, sigma, radius, diff_mode, out_mode, include_edges, lin_cutoff)

Point422ChromaRecon dataclass

Point422ChromaRecon(
    native_res: int | float | None = None,
    native_kernel: KernelLike = Catrom,
    dm_wscaler: ScalerLike = lambda: SangNom(128)(),
    dm_hscaler: ScalerLike = lambda: EEDI3(
        0.35, 0.55, 20, 2, 10, vcheck=3, sclip=NNEDI3()
    )(),
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    src_left: float = 0.5,
    src_top: float = 0.0
)

Bases: MissingFieldsChromaRecon

Demangler for content that has gone from 4:4:4 => 4:2:2 with point, then to 4:2:0 with some neutral scaler.

Methods:

Attributes:

dm_hscaler class-attribute instance-attribute

dm_hscaler: ScalerLike = field(
    default_factory=lambda: EEDI3(
        0.35, 0.55, 20, 2, 10, vcheck=3, sclip=NNEDI3()
    )
)

dm_wscaler class-attribute instance-attribute

dm_wscaler: ScalerLike = field(default_factory=lambda: SangNom(128))

kernel class-attribute instance-attribute

kernel: KernelLike = field(default=Catrom, kw_only=True)

Base kernel used to shift/scale luma and chroma planes.

native_kernel class-attribute instance-attribute

native_kernel: KernelLike = Catrom

Native kernel of the show.

native_res class-attribute instance-attribute

native_res: int | float | None = None

Native resolution of the show.

scaler class-attribute instance-attribute

scaler: ScalerLike | None = field(default=None, kw_only=True)

Base scaler used to shift/scale luma and chroma planes.

src_left class-attribute instance-attribute

src_left: float = field(default=0.5, kw_only=True)

Base left shift of the interpolator. If using a base vsaa scaler, this will be internally compensated.

src_top class-attribute instance-attribute

src_top: float = field(default=0.0, kw_only=True)

Base top shift of the interpolator.

debug

debug(clip: VideoNode, *args: Any, **kwargs: Any) -> tuple[VideoNode, ...]

In 'debug' mode you can see the various steps of mangled and demangled planes.

Useful to determine whether shifts and the like are correct.

The *args and **kwargs don't do anything; they exist only so reconstruct can be hot-swapped with this method without removing other arguments.

Source code
@inject_self.init_kwargs
def debug(self, clip: vs.VideoNode, *args: Any, **kwargs: Any) -> tuple[vs.VideoNode, ...]:
    """
    In 'debug' mode you can see the various steps of mangled and demangled planes.

    Useful to determine if shifts and the sort are correct.

    The *args, **kwargs don't do anything and are there just to be able to
    hotswap reconstruct with this method without removing other arguments.
    """
    y, y_base, y_m, y_dm, chroma_base, chroma_dm = self._get_bases(clip, False, self.debug)

    return y, y_base, y_dm, *flatten(zip(chroma_base, chroma_dm))

demangle_chroma

demangle_chroma(mangled: VideoNode, y_base: VideoNode) -> VideoNode
Source code
def demangle_chroma(self, mangled: vs.VideoNode, y_base: vs.VideoNode) -> vs.VideoNode:
    demangled = self._dm_hscaler.scale(mangled, mangled.width, y_base.height)
    return self._dm_wscaler.scale(demangled, y_base.width, y_base.height, (self.src_top, self.src_left))

demangle_luma

demangle_luma(mangled: VideoNode, y_base: VideoNode) -> VideoNode
Source code
def demangle_luma(self, mangled: vs.VideoNode, y_base: vs.VideoNode) -> vs.VideoNode:
    src_left, self.src_left = self.src_left, self.src_left - 0.25
    luma = self.demangle_chroma(mangled, y_base)
    self.src_left = src_left
    return luma

get_base_clip

get_base_clip(clip: VideoNode) -> VideoNode
Source code
def get_base_clip(self, clip: vs.VideoNode) -> vs.VideoNode:
    if self.native_res is None:
        return self._kernel.resample(clip, vs.YUV444PS)

    de_args = ScalingArgs.from_args(clip, self.native_res)

    descale = self._native_kernel.descale(clip, de_args.width, de_args.height, **de_args.kwargs())

    return join(
        self._kernel.shift(descale, de_args.src_top / 2, -de_args.src_left / 2),
        self._scaler.scale(clip, de_args.width, de_args.height, format=vs.YUV444PS),
    )

get_chroma_shift

get_chroma_shift(y_width: int, c_width: int) -> float
Source code
def get_chroma_shift(self, y_width: int, c_width: int) -> float:
    return 0.5 * c_width / y_width

get_mangled_luma

get_mangled_luma(clip: VideoNode, y_base: VideoNode) -> VideoNode
Source code
def get_mangled_luma(self, clip: vs.VideoNode, y_base: vs.VideoNode) -> vs.VideoNode:
    c_width, c_height = get_plane_sizes(clip, 1)

    return Catrom().scale(y_base, c_width, c_height, (0, -0.5 + self.get_chroma_shift(clip.width, c_width)))

reconstruct

reconstruct(
    clip: VideoNode,
    sigma: float = 1.5,
    radius: int = 2,
    diff_mode: ReconDiffMode | ReconDiffModeConf = MEDIAN,
    out_mode: ReconOutput | bool | None = i444,
    include_edges: bool = True,
    lin_cutoff: float = 0.0,
    **kwargs: Any
) -> VideoNode
Source code
@inject_self.init_kwargs
def reconstruct(
    self,
    clip: vs.VideoNode,
    sigma: float = 1.5,
    radius: int = 2,
    diff_mode: ReconDiffMode | ReconDiffModeConf = ReconDiffMode.MEDIAN,
    out_mode: ReconOutput | bool | None = ReconOutput.i444,
    include_edges: bool = True,
    lin_cutoff: float = 0.0,
    **kwargs: Any,
) -> vs.VideoNode:
    return super().reconstruct(clip, sigma, radius, diff_mode, out_mode, include_edges, lin_cutoff)

ReconDiffMode

Bases: CustomStrEnum

Enum for configuring a reconstruction difference mode.

Methods:

  • __call__

    Configure the current mode. It will not have any effect with SIMPLE.

Attributes:

  • BOOSTX

    Demangled chroma * luma diff + regressed diff merge. Pay attention to overshoot.

  • BOOSTY

    Demangled chroma + regressed diff * luma diff merge. Pay attention to overshoot.

  • MEAN

    Simple mean of SIMPLE, BOOSTX, and BOOSTY. Will give a dampened output.

  • MEDIAN

    The most complex merge available, combining all other modes while avoiding overshoots and undershoots.

  • SIMPLE

    Simple demangled chroma + regressed diff merge. It is the most simple merge available.

BOOSTX class-attribute instance-attribute

BOOSTX = 'x z * y +'

Demangled chroma * luma diff + regressed diff merge. Pay attention to overshoot.

BOOSTY class-attribute instance-attribute

BOOSTY = 'x y z * +'

Demangled chroma + regressed diff * luma diff merge. Pay attention to overshoot.

MEAN class-attribute instance-attribute

MEAN = f'{SIMPLE} x z * y z / + + 2 /'

Simple mean of SIMPLE, BOOSTX, and BOOSTY. Will give a dampened output.

MEDIAN class-attribute instance-attribute

MEDIAN = f"{MEAN} AX! {BOOSTX} BX! {BOOSTY} CX! a BX@ - abs BD! a AX@ - abs BD@ < AX@ BD@ a CX@ - abs > BX@ CX@ ? ?"

The most complex merge available, combining all other modes while avoiding overshoots and undershoots and retaining sharpness.

SIMPLE class-attribute instance-attribute

SIMPLE = 'x y +'

Simple demangled chroma + regressed diff merge. It is the most simple merge available.

__call__

__call__(
    diff_sigma: float = 0.5, inter_scale: float = 0.0
) -> ReconDiffModeConf

Configure the current mode. It will not have any effect with SIMPLE.

Parameters:

  • diff_sigma

    (float, default: 0.5 ) –

    Gaussian blur sigma for the luma-mangled luma difference.

  • inter_scale

    (float, default: 0.0 ) –

    Scaling for using the luma-chroma difference intercept.

    • = 0.0 => Disable usage of intercept.
    • < 20.0 => Will amplify and overshoot/undershoot all bright/dark spots. Not recommended.
    • < 50.0 => Will dampen haloing and normalize chroma to luma, removing any bleeding.
    • > 100.0 => Placebo effect.

Returns:

  • ReconDiffModeConf

    Configured mode.

Source code
def __call__(self, diff_sigma: float = 0.5, inter_scale: float = 0.0) -> ReconDiffModeConf:
    """
    Configure the current mode. It will not have any effect with ``SIMPLE``.

    Args:
        diff_sigma: Gaussian blur sigma for the luma-mangled luma difference.
        inter_scale: Scaling for using the luma-chroma difference intercept.

               - ``= 0.0``   => Disable usage of intercept.
               - ``< 20.0``  => Will amplify and overshoot/undershoot all bright/dark spots. Not recommended.
               - ``< 50.0``  => Will dampen haloing and normalize chroma to luma, removing eventual bleeding.
               - ``> 100.0`` => Placebo effect.

    Returns:
        Configured mode.
    """
    return ReconDiffModeConf(self, diff_sigma, inter_scale)
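
For example, a configured mode might look like this (import location assumed):

# Assumed import location; adjust to your setup.
from vsdenoise import ReconDiffMode

# Returns a ReconDiffModeConf carrying the chosen mode and its settings.
conf = ReconDiffMode.MEAN(diff_sigma=0.8, inter_scale=50.0)

conf.mode         # ReconDiffMode.MEAN
conf.diff_sigma   # 0.8
conf.inter_scale  # 50.0

# Either the bare enum member or a configured instance can be passed as
# diff_mode to ChromaReconstruct.reconstruct.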

ReconDiffModeConf dataclass

ReconDiffModeConf(mode: ReconDiffMode, diff_sigma: float, inter_scale: float)

Internal structure.

Attributes:

diff_sigma instance-attribute

diff_sigma: float

inter_scale instance-attribute

inter_scale: float

mode instance-attribute

mode: ReconDiffMode

ReconOutput

Bases: CustomIntEnum

Enum to decide what combination of luma-chroma to output in ChromaReconstruct

Methods:

Attributes:

  • NATIVE

    Return 4:4:4 with luma from get_base_clip and reconstructed chroma.

  • i420

    Return 4:2:0 at the input clip's resolution, with the reconstructed chroma downscaled/upscaled to fit the subsampling.

  • i444

    Return 4:4:4 at the input clip's resolution, with the reconstructed chroma downscaled/upscaled to fit the subsampling.

NATIVE class-attribute instance-attribute

NATIVE = 0

Return 4:4:4 with luma from get_base_clip and reconstructed chroma. If, for example, your anime is native 720p, it will output the descaled luma from get_base_clip with 720p reconstructed chroma.

i420 class-attribute instance-attribute

i420 = 1

Return 4:2:0 at the input clip's resolution, with the reconstructed chroma downscaled/upscaled to fit the subsampling.

i444 class-attribute instance-attribute

i444 = 2

Return 4:4:4 at the input clip's resolution, with the reconstructed chroma downscaled/upscaled to fit the subsampling.

from_param classmethod

from_param(
    value: int | ReconOutput | bool | None,
    func_except: FuncExceptT | None = None,
) -> ReconOutput
Source code
@classmethod
def from_param(cls, value: int | ReconOutput | bool | None, func_except: FuncExceptT | None = None) -> ReconOutput:
    if isinstance(value, bool):
        value = 1 + int(value)
    elif value is None:
        return cls.NATIVE

    return super().from_param(value, func_except)  # type: ignore
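
For reference, the bool/None shorthand above resolves as follows (import location assumed):

# Assumed import location; adjust to your setup.
from vsdenoise import ReconOutput

ReconOutput.from_param(None)   # ReconOutput.NATIVE
ReconOutput.from_param(False)  # ReconOutput.i420 (1 + int(False) == 1)
ReconOutput.from_param(True)   # ReconOutput.i444 (1 + int(True) == 2)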

Regression dataclass

Regression(
    blur_func: BlurConf | VSFunction[VideoNode] = BlurConf(box_blur, radius=2),
    eps: float = 1e-07,
)

Class for math operations on a clip.

For more info see the Wikipedia article on regression analysis: https://en.wikipedia.org/wiki/Regression_analysis

Classes:

  • BlurConf

    Class for the blur (or averaging filter) used for regression.

  • Linear

    Representation of a Linear Regression.

Methods:

  • from_param

    Get a Regression from generic parameters.

  • linear

    Perform a simple linear regression.

  • sloped_corr

    Compute correlation of slopes of a simple regression.

Attributes:

blur_func class-attribute instance-attribute

blur_func: BlurConf | VSFunction[VideoNode] = BlurConf(box_blur, radius=2)

Function used for blurring (averaging).

eps class-attribute instance-attribute

eps: float = 1e-07

Epsilon, used in expressions to avoid division by zero.

BlurConf

BlurConf(
    func: Callable[Concatenate[VideoNode, P], VideoNode],
    /,
    *args: args,
    **kwargs: kwargs,
)

Class for the blur (or averaging filter) used for regression.

Parameters:

  • func

    (Callable[Concatenate[VideoNode, P], VideoNode]) –

    Function used for blurring.

  • *args

    (args, default: () ) –

    Positional arguments passed to the function.

  • **kwargs

    (kwargs, default: {} ) –

    Keyword arguments passed to the function.

Methods:

  • __call__

    Blur a clip with the current config.

  • blur

    Blur a clip with the current config.

  • extend

    Extend the current config arguments and get a new BlurConf object.

  • from_param

    Get a BlurConf from generic parameters.

  • get_bases

    Get the base elements for a regression.

Attributes:

Source code
def __init__(
    self, func: Callable[Concatenate[vs.VideoNode, P], vs.VideoNode], /, *args: P.args, **kwargs: P.kwargs
) -> None:
    """
    Args:
        func: Function used for blurring.
        *args: Positional arguments passed to the function.
        **kwargs: Keyword arguments passed to the function.
    """

    self.func = func
    self.args = args
    self.kwargs = kwargs

args instance-attribute

args = args

func instance-attribute

func = func

kwargs instance-attribute

kwargs = kwargs

__call__

__call__(
    clip: VideoNode, chroma_only: bool = False, *args: Any, **kwargs: Any
) -> VideoNode

Blur a clip with the current config.

Parameters:

  • clip
    (VideoNode) –

    Clip to be blurred.

  • chroma_only
    (bool, default: False ) –

    Try only processing chroma.

  • *args
    (Any, default: () ) –

    Positional arguments passed to the function.

  • **kwargs
    (Any, default: {} ) –

    Keyword arguments passed to the function.

Returns:

  • VideoNode

    Blurred clip.

Source code
def __call__(self, clip: vs.VideoNode, chroma_only: bool = False, *args: Any, **kwargs: Any) -> vs.VideoNode:
    """
    Blur a clip with the current config.

    Args:
        clip: Clip to be blurred.
        chroma_only: Try only processing chroma.
        *args: Positional arguments passed to the function.
        **kwargs: Keyword arguments passed to the function.

    Returns:
        Blurred clip.
    """

    if not args:
        args = self.args

    kwargs = self.kwargs | kwargs

    out = None

    if chroma_only:
        ckwargs = kwargs | {"planes": [1, 2]}

        key = complex_hash.hash(args, ckwargs)

        got_result = _cached_blurs.get(key, None)

        if got_result is not None:
            for inc, outc in got_result:
                if inc == clip:
                    return outc

        with contextlib.suppress(Exception):
            out = self.func(clip, *args, **ckwargs)  # type: ignore[arg-type]

    if not out:
        key = complex_hash.hash(args, kwargs)

        got_result = _cached_blurs.get(key, None)

        if got_result is not None:
            for inc, outc in got_result:
                if inc == clip:
                    return outc

        out = self.func(clip, *args, **kwargs)  # type: ignore[arg-type]

    if key not in _cached_blurs:
        _cached_blurs[key] = []

    _cached_blurs[key].append((clip, out))

    return out
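
A small sketch of using BlurConf directly. The Regression import path is an assumption; box_blur comes from vsrgtools.

import vapoursynth as vs

from vsrgtools import box_blur
# Assumed import location; adjust to your setup.
from vsdenoise import Regression

core = vs.core
clip = core.std.BlankClip(format=vs.YUV420P16, width=1920, height=1080)

conf = Regression.BlurConf(box_blur, radius=2)

blurred = conf(clip)                       # box_blur(clip, radius=2), cached per clip
blurred_uv = conf(clip, chroma_only=True)  # tries planes=[1, 2] first, falls back otherwise
wider = conf.extend(radius=3)              # new BlurConf with the radius overridden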

blur

blur(
    clip: VideoNode, chroma_only: bool = False, *args: Any, **kwargs: Any
) -> Any

Blur a clip with the current config.

Parameters:

  • clip
    (VideoNode) –

    Clip to be blurred.

  • chroma_only
    (bool, default: False ) –

    Try only processing chroma.

  • *args
    (Any, default: () ) –

    Positional arguments passed to the function.

  • **kwargs
    (Any, default: {} ) –

    Keyword arguments passed to the function.

Returns:

  • Any

    Blurred clip.

Source code
def blur(self, clip: vs.VideoNode, chroma_only: bool = False, *args: Any, **kwargs: Any) -> Any:
    """
    Blur a clip with the current config.

    Args:
        clip: Clip to be blurred.
        chroma_only: Try only processing chroma.
        *args: Positional arguments passed to the function.
        **kwargs: Keyword arguments passed to the function.

    Returns:
        Blurred clip.
    """

    return self(clip, chroma_only, *args, **kwargs)

extend

extend(*args: Any, **kwargs: Any) -> BlurConf

Extend the current config arguments and get a new BlurConf object.

Parameters:

  • *args
    (Any, default: () ) –

    Positional arguments passed to the function.

  • **kwargs
    (Any, default: {} ) –

    Keyword arguments passed to the function.

Returns:

  • BlurConf

    BlurConf object.

Source code
def extend(self, *args: Any, **kwargs: Any) -> Regression.BlurConf:
    """
    Extend the current config arguments and get a new BlurConf object.

    Args:
        *args: Positional arguments passed to the function.
        **kwargs: Keyword arguments passed to the function.

    Returns:
        BlurConf object.
    """
    if args or kwargs:
        return Regression.BlurConf(
            self.func,
            *(args or self.args),
            **(self.kwargs | kwargs),  # type: ignore[arg-type]
        )
    return self

from_param classmethod

from_param(
    func: Callable[Concatenate[VideoNode, P1], VideoNode] | BlurConf,
    *args: args,
    **kwargs: kwargs
) -> BlurConf

Get a BlurConf from generic parameters.

Parameters:

  • func
    (Callable[Concatenate[VideoNode, P1], VideoNode] | BlurConf) –

    Function used for blurring or already existing config.

  • *args
    (args, default: () ) –

    Positional arguments passed to the function.

  • **kwargs
    (kwargs, default: {} ) –

    Keyword arguments passed to the function.

Returns:

  • BlurConf

    BlurConf object.

Source code
@classmethod
def from_param(
    cls,
    func: Callable[Concatenate[vs.VideoNode, P1], vs.VideoNode] | Regression.BlurConf,
    *args: P1.args,
    **kwargs: P1.kwargs,
) -> Regression.BlurConf:
    """
    Get a BlurConf from generic parameters.

    Args:
        func: Function used for blurring or already existing config.
        *args: Positional arguments passed to the function.
        **kwargs: Keyword arguments passed to the function.

    Returns:
        BlurConf object.
    """

    if isinstance(func, Regression.BlurConf):
        return func.extend(*args, **kwargs)

    return Regression.BlurConf(func, *args, **kwargs)

get_bases

get_bases(
    clip: VideoNode | Sequence[VideoNode],
) -> tuple[Sequence[VideoNode], Sequence[VideoNode], Sequence[VideoNode]]

Get the base elements for a regression.

Parameters:

  • clip
    (VideoNode | Sequence[VideoNode]) –

    Clip or individual planes to be processed.

Returns:

  • tuple[Sequence[VideoNode], Sequence[VideoNode], Sequence[VideoNode]]

    Tuple containing the blurred clips, variations, and relation of the two.

Source code
def get_bases(
    self, clip: vs.VideoNode | Sequence[vs.VideoNode]
) -> tuple[Sequence[vs.VideoNode], Sequence[vs.VideoNode], Sequence[vs.VideoNode]]:
    """
    Get the base elements for a regression.

    Args:
        clip: Clip or individual planes to be processed.

    Returns:
        Tuple containing the blurred clips, variations, and relation of the two.
    """

    planes = clip if isinstance(clip, Sequence) else split(clip)

    blur = [self(shifted) for shifted in planes]

    variation = [
        norm_expr([Ex, self(ExprOp.MUL.combine(shifted, suffix=ExprOp.DUP))], "y x dup * - 0 max", func=self)
        for Ex, shifted in zip(blur, planes)
    ]

    var_mul = [self(ExprOp.MUL.combine(planes[0], shifted_y)) for shifted_y in planes[1:]]

    return blur, variation, var_mul

Linear dataclass

Linear(slope: VideoNode, intercept: VideoNode, correlation: VideoNode)

Representation of a Linear Regression.

For more info see the Wikipedia article on linear regression: https://en.wikipedia.org/wiki/Linear_regression

Attributes:

  • correlation (VideoNode) –

    The relationship between the error term and the regressors.

  • intercept (VideoNode) –

    Component of slope, the intercept term.

  • slope (VideoNode) –

    One of the regression coefficients.

correlation instance-attribute

correlation: VideoNode

The relationship between the error term and the regressors.

intercept instance-attribute

intercept: VideoNode

Component of slope, the intercept term.

slope instance-attribute

slope: VideoNode

One of the regression coefficients.

In simple linear regression the coefficient is the regression slope.

from_param classmethod

from_param(
    func: Callable[Concatenate[VideoNode, P1], VideoNode] | BlurConf,
    *args: args,
    **kwargs: kwargs
) -> Regression

Get a Regression from generic parameters.

Parameters:

  • func

    (Callable[Concatenate[VideoNode, P1], VideoNode] | BlurConf) –

    Function used for blurring or a preconfigured BlurConf.

  • *args

    (args, default: () ) –

    Positional arguments passed to the blurring function.

  • **kwargs

    (kwargs, default: {} ) –

    Keyword arguments passed to the blurring function.

Returns:

  • Regression

    Regression object.

Source code
@classmethod
def from_param(
    cls,
    func: Callable[Concatenate[vs.VideoNode, P1], vs.VideoNode] | Regression.BlurConf,
    *args: P1.args,
    **kwargs: P1.kwargs,
) -> Regression:
    """
    Get a Regression from generic parameters.

    Args:
        func: Function used for blurring or a preconfigured BlurConf.
        *args: Positional arguments passed to the blurring function.
        **kwargs: Keyword arguments passed to the blurring function.

    Returns:
        Regression object.
    """

    return Regression(Regression.BlurConf.from_param(func, *args, **kwargs))
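
The two spellings below are equivalent ways to build a Regression that blurs with a gaussian (import location assumed; gauss_blur is from vsrgtools):

from vsrgtools import gauss_blur
# Assumed import location; adjust to your setup.
from vsdenoise import Regression

reg_a = Regression.from_param(gauss_blur, sigma=1.5)
reg_b = Regression(Regression.BlurConf(gauss_blur, sigma=1.5))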

linear

linear(
    clip: VideoNode | Sequence[VideoNode],
    weight: float = 0.0,
    intercept_scale: float = 50.0,
    *args: Any,
    **kwargs: Any
) -> list[Linear]

Perform a simple linear regression.

Parameters:

  • clip

    (VideoNode | Sequence[VideoNode]) –

    Clip or singular planes to be processed.

  • *args

    (Any, default: () ) –

    Positional arguments passed to the blurring function.

  • **kwargs

    (Any, default: {} ) –

    Keyword arguments passed to the blurring function.

Returns:

  • list[Linear]

    List of Regression.Linear objects, one for each plane.

Source code
def linear(
    self,
    clip: vs.VideoNode | Sequence[vs.VideoNode],
    weight: float = 0.0,
    intercept_scale: float = 50.0,
    *args: Any,
    **kwargs: Any,
) -> list[Regression.Linear]:
    """
    Perform a simple linear regression.

    Args:
        clip: Clip or singular planes to be processed.
        *args: Positional arguments passed to the blurring function.
        **kwargs: Keyword arguments passed to the blurring function.

    Returns:
        List of a Regression.Linear object for each plane.
    """

    blur_conf = self.blur_conf.extend(*args, **kwargs)

    (blur_x, *blur_ys), (var_x, *var_ys), var_mul = blur_conf.get_bases(clip)

    if weight < 0.0 or weight >= 1.0:
        raise CustomOverflowError(
            '"weight" must be between 0.0 and 1.0 (exclusive)!', self.__class__.linear, weight
        )

    cov_xys = [
        norm_expr([vm_y, blur_x, Ey], "x y z * -", func=self.__class__.linear) for vm_y, Ey in zip(var_mul, blur_ys)
    ]

    slopes = [norm_expr([cov_xy, var_x], f"x y {self.eps} + /", func=self.__class__.linear) for cov_xy in cov_xys]

    scale_str = f"{intercept_scale} /" if intercept_scale != 0 else ""
    intercepts = [
        norm_expr([blur_y, slope, blur_x], f"x y z * - {scale_str}", func=self.__class__.linear)
        for blur_y, slope in zip(blur_ys, slopes)
    ]

    weight_str = f"{1 - weight} - {weight} / dup 0 > swap 0 ?" if weight > 0.0 else ""

    corrs = [
        norm_expr(
            [cov_xy, var_x, var_y], f"x dup * y z * {self.eps} + / sqrt {weight_str}", func=self.__class__.linear
        )
        for cov_xy, var_y in zip(cov_xys, var_ys)
    ]

    return [
        Regression.Linear(slope, intercept, correlation)
        for slope, intercept, correlation in zip(slopes, intercepts, corrs)
    ]
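
A sketch of running the regression on a 4:4:4 clip and picking apart the per-plane results (import location assumed):

import vapoursynth as vs

from vsrgtools import gauss_blur
# Assumed import location; adjust to your setup.
from vsdenoise import Regression

core = vs.core
clip444 = core.std.BlankClip(format=vs.YUV444PS, width=1280, height=720)

reg = Regression.from_param(gauss_blur, sigma=1.5)

# One Regression.Linear per chroma plane, each regressed against the luma plane.
linear_u, linear_v = reg.linear(clip444)

slope_u = linear_u.slope              # regression coefficient
intercept_u = linear_u.intercept      # intercept term (divided by intercept_scale when non-zero)
correlation_u = linear_u.correlation  # relationship between the error term and the regressors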

sloped_corr

sloped_corr(
    clip: VideoNode | Sequence[VideoNode],
    weight: float = 0.5,
    avg: bool = False,
    *args: Any,
    **kwargs: Any
) -> Sequence[VideoNode]

Compute correlation of slopes of a simple regression.

Parameters:

  • clip

    (VideoNode | Sequence[VideoNode]) –

    Clip or individual planes to be processed.

  • avg

    (bool, default: False ) –

    Average (blur) the final result.

  • *args

    (Any, default: () ) –

    Positional arguments passed to the blurring function.

  • **kwargs

    (Any, default: {} ) –

    Keyword arguments passed to the blurring function.

Returns:

  • Sequence[VideoNode]

    List of clips representing the correlation of slopes.

Source code
def sloped_corr(
    self,
    clip: vs.VideoNode | Sequence[vs.VideoNode],
    weight: float = 0.5,
    avg: bool = False,
    *args: Any,
    **kwargs: Any,
) -> Sequence[vs.VideoNode]:
    """
    Compute correlation of slopes of a simple regression.

    Args:
        clip: Clip or individual planes to be processed.
        avg: Average (blur) the final result.
        *args: Positional arguments passed to the blurring function.
        **kwargs: Keyword arguments passed to the blurring function.

    Returns:
        List of clips representing the correlation of slopes.
    """

    blur_conf = self.blur_conf.extend(*args, **kwargs)

    (blur_x, *blur_ys), (var_x, *var_ys), var_mul = blur_conf.get_bases(clip)

    if weight < 0.0 or weight >= 1.0:
        raise CustomOverflowError(
            '"weight" must be between 0.0 and 1.0 (exclusive)!', self.__class__.sloped_corr, weight
        )

    coeff_x, coeff_y = weight, 1.0 - weight

    weight_str = f"{coeff_x} - {coeff_y} / 0 max" if coeff_x else ""

    corr_slopes = [
        norm_expr(
            [Exys_y, blur_x, Ex_y, var_x, var_y],
            f"x y z * - XYS! XYS@ a {self.eps} + / XYS@ dup * a b * {self.eps} + / sqrt {weight_str} *",
            func=self.__class__.sloped_corr,
        )
        if True  # complexpr_available
        else norm_expr(
            [norm_expr([Exys_y, blur_x, Ex_y], "x y z * -"), var_x, var_y],
            f"x y {self.eps} + / x dup * y z * {self.eps} + / sqrt {weight_str} *",
            func=self.__class__.sloped_corr,
        )
        for Exys_y, Ex_y, var_y in zip(var_mul, blur_ys, var_ys)
    ]

    if not avg:
        return corr_slopes

    return [blur_conf(corr_slope) for corr_slope in corr_slopes]
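
And a matching sketch for sloped_corr (import location assumed):

import vapoursynth as vs

from vsrgtools import box_blur
# Assumed import location; adjust to your setup.
from vsdenoise import Regression

core = vs.core
clip444 = core.std.BlankClip(format=vs.YUV444PS, width=1280, height=720)

reg = Regression(Regression.BlurConf(box_blur, radius=2))

# One correlation-of-slopes clip per chroma plane, optionally averaged (blurred).
corr_u, corr_v = reg.sloped_corr(clip444, weight=0.5, avg=True)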