deinterlacers

Classes:

  • AntiAliaser

    Abstract base class for anti-aliasing operations.

  • BWDIF

    Motion adaptive deinterlacing based on yadif with the use of w3fdif and cubic interpolation algorithms.

  • Deinterlacer

    Abstract base class for deinterlacing operations.

  • EEDI2

    Enhanced Edge Directed Interpolation (2nd gen.)

  • EEDI3

    Enhanced Edge Directed Interpolation (3rd gen.)

  • NNEDI3

    Neural Network Edge Directed Interpolation (3rd gen.)

  • SangNom

    SangNom single-field deinterlacer using edge-directed interpolation.

  • SuperSampler

    Abstract base class for supersampling operations.

  • SuperSamplerProcess

    A utility SuperSampler class that applies a given function to a supersampled clip.

AntiAliaser dataclass

AntiAliaser(
    *,
    tff: bool | None = None,
    double_rate: bool = True,
    transpose_first: bool = False
)

Bases: Deinterlacer, ABC

Abstract base class for anti-aliasing operations.

Classes:

  • AADirection

    Enum representing the direction(s) in which anti-aliasing should be applied.

Methods:

  • antialias

    Apply anti-aliasing to the given clip.

  • bob

    Apply bob deinterlacing to the given clip.

  • copy

    Returns a new instance of the class with the specified fields replaced by new values.

  • deinterlace

    Apply deinterlacing to the given clip.

  • get_deint_args

    Retrieves arguments for deinterlacing processing.

  • transpose

    Transpose the input clip by swapping its horizontal and vertical axes.

Attributes:

double_rate class-attribute instance-attribute

double_rate: bool = True

Whether to double the FPS.

tff class-attribute instance-attribute

tff: bool | None = None

The field order.

transpose_first class-attribute instance-attribute

transpose_first: bool = False

Transpose the clip before any operation.

AADirection

Bases: IntFlag

Enum representing the direction(s) in which anti-aliasing should be applied.

Attributes:

  • BOTH

    Apply anti-aliasing in both horizontal and vertical directions.

  • HORIZONTAL

    Apply anti-aliasing in the horizontal direction.

  • VERTICAL

    Apply anti-aliasing in the vertical direction.

BOTH class-attribute instance-attribute

Apply anti-aliasing in both horizontal and vertical directions.

HORIZONTAL class-attribute instance-attribute

HORIZONTAL = auto()

Apply anti-aliasing in the horizontal direction.

VERTICAL class-attribute instance-attribute

VERTICAL = auto()

Apply anti-aliasing in the vertical direction.
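The documentation above only shows the two `auto()` members, so the exact values are assumed here; this stand-in sketch (plain `enum.IntFlag`, explicit values) illustrates how `BOTH` composes the two single-axis flags:

```python
from enum import IntFlag

class AADirection(IntFlag):
    """Illustrative stand-in; the real enum uses auto() and its exact
    member values are not shown in this documentation."""

    HORIZONTAL = 1
    VERTICAL = 2
    BOTH = HORIZONTAL | VERTICAL  # union of the two single-axis flags
```

Passing `AADirection.BOTH` therefore covers both axes with a single flag value.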

antialias

antialias(
    clip: VideoNode, direction: AADirection = BOTH, **kwargs: Any
) -> VideoNode

Apply anti-aliasing to the given clip.

Parameters:

  • clip

    (VideoNode) –

    The input clip.

  • direction

    (AADirection, default: BOTH ) –

    Direction in which to apply anti-aliasing. Defaults to AADirection.BOTH.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments passed to the plugin function.

Returns:

  • VideoNode

    Anti-aliased clip.

Source code in vsaa/deinterlacers.py
def antialias(self, clip: vs.VideoNode, direction: AADirection = AADirection.BOTH, **kwargs: Any) -> vs.VideoNode:
    """
    Apply anti-aliasing to the given clip.

    Args:
        clip: The input clip.
        direction: Direction in which to apply anti-aliasing. Defaults to AADirection.BOTH.
        **kwargs: Additional arguments passed to the plugin function.

    Returns:
        Anti-aliased clip.
    """
    tff = fallback(kwargs.pop("tff", self.tff), True)

    for y in sorted(self.AADirection, key=lambda x: x.value, reverse=self.transpose_first):
        if direction in (y, self.AADirection.BOTH):
            if y == self.AADirection.HORIZONTAL:
                clip, tclips = self.transpose(clip, **kwargs)
                kwargs |= tclips

            clip = self._interpolate(clip, tff, self.double_rate, False, **kwargs)

            if self.double_rate:
                clip = core.std.Merge(clip[::2], clip[1::2])

            if y == self.AADirection.HORIZONTAL:
                clip, tclips = self.transpose(clip, **kwargs)
                kwargs |= tclips

    return clip
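When `double_rate` is enabled, the loop above folds the doubled-rate result back to the input rate with `core.std.Merge(clip[::2], clip[1::2])`, which averages each even/odd frame pair (`std.Merge` defaults to a 0.5 weight). A list-based sketch of that pairing, with frames stood in by numbers so no VapourSynth is needed:

```python
def merge_double_rate(frames: list[float]) -> list[float]:
    # frames[::2] and frames[1::2] are the two phases of the double-rate
    # result; averaging each pair mimics core.std.Merge's default 0.5 weight.
    return [(even + odd) / 2 for even, odd in zip(frames[::2], frames[1::2])]
```

Blending the two interpolated phases is what suppresses the residual aliasing between them.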

bob

bob(
    clip: VideoNode, *, tff: FieldBasedLike | bool | None = None, **kwargs: Any
) -> VideoNode

Apply bob deinterlacing to the given clip.

Parameters:

  • clip

    (VideoNode) –

    The input clip.

  • tff

    (FieldBasedLike | bool | None, default: None ) –

    Field order of the clip.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments passed to the plugin function.

Returns:

  • VideoNode

    Deinterlaced clip.

Source code in vsaa/deinterlacers.py
def bob(self, clip: vs.VideoNode, *, tff: FieldBasedLike | bool | None = None, **kwargs: Any) -> vs.VideoNode:
    """
    Apply bob deinterlacing to the given clip.

    Args:
        clip: The input clip.
        tff: Field order of the clip.
        **kwargs: Additional arguments passed to the plugin function.

    Returns:
        Deinterlaced clip.
    """
    field_based = FieldBased.from_param_or_video(fallback(tff, self.tff, default=None), clip, True, self.__class__)

    return self._interpolate(clip, field_based.is_tff, True, False, **kwargs)

copy

copy(**kwargs: Any) -> Self

Returns a new instance of the class with the specified fields replaced by new values.

Source code in vsaa/deinterlacers.py
def copy(self, **kwargs: Any) -> Self:
    """
    Returns a new instance of the class with the specified fields replaced by new values.
    """
    return replace(self, **kwargs)

deinterlace

deinterlace(
    clip: VideoNode,
    *,
    tff: FieldBasedLike | bool | None = None,
    double_rate: bool | None = None,
    **kwargs: Any
) -> VideoNode

Apply deinterlacing to the given clip.

Parameters:

  • clip

    (VideoNode) –

    The input clip.

  • tff

    (FieldBasedLike | bool | None, default: None ) –

    Field order of the clip.

  • double_rate

    (bool | None, default: None ) –

    Whether to double the frame rate (True) or retain the original rate (False).

  • **kwargs

    (Any, default: {} ) –

    Additional arguments passed to the plugin function.

Returns:

  • VideoNode

    Deinterlaced clip.

Source code in vsaa/deinterlacers.py
def deinterlace(
    self,
    clip: vs.VideoNode,
    *,
    tff: FieldBasedLike | bool | None = None,
    double_rate: bool | None = None,
    **kwargs: Any,
) -> vs.VideoNode:
    """
    Apply deinterlacing to the given clip.

    Args:
        clip: The input clip.
        tff: Field order of the clip.
        double_rate: Whether to double the frame rate (True) or retain the original rate (False).
        **kwargs: Additional arguments passed to the plugin function.

    Returns:
        Deinterlaced clip.
    """
    field_based = FieldBased.from_param_or_video(fallback(tff, self.tff, default=None), clip, True, self.__class__)

    return self._interpolate(clip, field_based.is_tff, fallback(double_rate, self.double_rate), False, **kwargs)

get_deint_args abstractmethod

get_deint_args(**kwargs: Any) -> dict[str, Any]

Retrieves arguments for deinterlacing processing.

Parameters:

  • **kwargs

    (Any, default: {} ) –

    Additional arguments.

Returns:

  • dict[str, Any]

    The passed keyword arguments.

Source code in vsaa/deinterlacers.py
@abstractmethod
def get_deint_args(self, **kwargs: Any) -> dict[str, Any]:
    """
    Retrieves arguments for deinterlacing processing.

    Args:
        **kwargs: Additional arguments.

    Returns:
        Passed keyword arguments.
    """
    return kwargs
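Concrete subclasses build a defaults dict from their stored attributes and merge the caller's kwargs on top (BWDIF, for instance, returns `{"edeint": self.edeint} | kwargs`), so explicit keyword arguments always override the stored values. A minimal sketch of that merge pattern:

```python
def merge_deint_args(stored: dict[str, object], **kwargs: object) -> dict[str, object]:
    # dict union: the right-hand operand (caller kwargs) wins on key clashes.
    return stored | kwargs
```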

transpose

transpose(
    clip: VideoNode, **kwargs: Any
) -> tuple[VideoNode, Mapping[str, VideoNode | None]]

Transpose the input clip by swapping its horizontal and vertical axes.

Parameters:

  • clip

    (VideoNode) –

    The input clip.

Returns:

  • tuple[VideoNode, Mapping[str, VideoNode | None]]

    The transposed clip, together with a mapping of clip keyword arguments to pass on (empty in the base implementation).

Source code in vsaa/deinterlacers.py
def transpose(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[vs.VideoNode, Mapping[str, vs.VideoNode | None]]:
    """
    Transpose the input clip by swapping its horizontal and vertical axes.

    Args:
        clip: The input clip.

    Returns:
        The transposed clip.
    """
    return clip.std.Transpose(), {}

BWDIF dataclass

BWDIF(
    edeint: VideoNode | Deinterlacer | VSFunctionNoArgs | None = None,
    *,
    tff: bool | None = None,
    double_rate: bool = True
)

Bases: Deinterlacer

Motion adaptive deinterlacing based on yadif with the use of w3fdif and cubic interpolation algorithms.

Methods:

  • bob

    Apply bob deinterlacing to the given clip.

  • copy

    Returns a new instance of the class with the specified fields replaced by new values.

  • deinterlace

    Apply deinterlacing to the given clip.

  • get_deint_args

    Retrieves arguments for deinterlacing processing.

Attributes:

double_rate class-attribute instance-attribute

double_rate: bool = True

Whether to double the FPS.

edeint class-attribute instance-attribute

edeint: VideoNode | Deinterlacer | VSFunctionNoArgs | None = None

Allows the specification of an external clip from which to take spatial predictions instead of having Bwdif use cubic interpolation.

This clip must be the same width, height, and colorspace as the input clip.

If using same rate output, this clip should have the same number of frames as the input. If using double rate output, this clip should have twice as many frames as the input.

tff class-attribute instance-attribute

tff: bool | None = None

The field order.

bob

bob(
    clip: VideoNode, *, tff: FieldBasedLike | bool | None = None, **kwargs: Any
) -> VideoNode

Apply bob deinterlacing to the given clip.

Parameters:

  • clip

    (VideoNode) –

    The input clip.

  • tff

    (FieldBasedLike | bool | None, default: None ) –

    Field order of the clip.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments passed to the plugin function.

Returns:

  • VideoNode

    Deinterlaced clip.

Source code in vsaa/deinterlacers.py
def bob(self, clip: vs.VideoNode, *, tff: FieldBasedLike | bool | None = None, **kwargs: Any) -> vs.VideoNode:
    """
    Apply bob deinterlacing to the given clip.

    Args:
        clip: The input clip.
        tff: Field order of the clip.
        **kwargs: Additional arguments passed to the plugin function.

    Returns:
        Deinterlaced clip.
    """
    field_based = FieldBased.from_param_or_video(fallback(tff, self.tff, default=None), clip, True, self.__class__)

    return self._interpolate(clip, field_based.is_tff, True, False, **kwargs)

copy

copy(**kwargs: Any) -> Self

Returns a new instance of the class with the specified fields replaced by new values.

Source code in vsaa/deinterlacers.py
def copy(self, **kwargs: Any) -> Self:
    """
    Returns a new instance of the class with the specified fields replaced by new values.
    """
    return replace(self, **kwargs)

deinterlace

deinterlace(
    clip: VideoNode,
    *,
    tff: FieldBasedLike | bool | None = None,
    double_rate: bool | None = None,
    **kwargs: Any
) -> VideoNode

Apply deinterlacing to the given clip.

Parameters:

  • clip

    (VideoNode) –

    The input clip.

  • tff

    (FieldBasedLike | bool | None, default: None ) –

    Field order of the clip.

  • double_rate

    (bool | None, default: None ) –

    Whether to double the frame rate (True) or retain the original rate (False).

  • **kwargs

    (Any, default: {} ) –

    Additional arguments passed to the plugin function.

Returns:

  • VideoNode

    Deinterlaced clip.

Source code in vsaa/deinterlacers.py
def deinterlace(
    self,
    clip: vs.VideoNode,
    *,
    tff: FieldBasedLike | bool | None = None,
    double_rate: bool | None = None,
    **kwargs: Any,
) -> vs.VideoNode:
    """
    Apply deinterlacing to the given clip.

    Args:
        clip: The input clip.
        tff: Field order of the clip.
        double_rate: Whether to double the frame rate (True) or retain the original rate (False).
        **kwargs: Additional arguments passed to the plugin function.

    Returns:
        Deinterlaced clip.
    """
    field_based = FieldBased.from_param_or_video(fallback(tff, self.tff, default=None), clip, True, self.__class__)

    return self._interpolate(clip, field_based.is_tff, fallback(double_rate, self.double_rate), False, **kwargs)

get_deint_args

get_deint_args(**kwargs: Any) -> dict[str, Any]

Retrieves arguments for deinterlacing processing.

Parameters:

  • **kwargs

    (Any, default: {} ) –

    Additional arguments.

Returns:

  • dict[str, Any]

    The deinterlacing arguments merged with the passed keyword arguments.

Source code in vsaa/deinterlacers.py
def get_deint_args(self, **kwargs: Any) -> dict[str, Any]:
    return {"edeint": self.edeint} | kwargs

Deinterlacer dataclass

Deinterlacer(*, tff: bool | None = None, double_rate: bool = True)

Bases: Bobber, ABC

Abstract base class for deinterlacing operations.

Methods:

  • bob

    Apply bob deinterlacing to the given clip.

  • copy

    Returns a new instance of the class with the specified fields replaced by new values.

  • deinterlace

    Apply deinterlacing to the given clip.

  • get_deint_args

    Retrieves arguments for deinterlacing processing.

Attributes:

double_rate class-attribute instance-attribute

double_rate: bool = True

Whether to double the FPS.

tff class-attribute instance-attribute

tff: bool | None = None

The field order.

bob

bob(
    clip: VideoNode, *, tff: FieldBasedLike | bool | None = None, **kwargs: Any
) -> VideoNode

Apply bob deinterlacing to the given clip.

Parameters:

  • clip

    (VideoNode) –

    The input clip.

  • tff

    (FieldBasedLike | bool | None, default: None ) –

    Field order of the clip.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments passed to the plugin function.

Returns:

  • VideoNode

    Deinterlaced clip.

Source code in vsaa/deinterlacers.py
def bob(self, clip: vs.VideoNode, *, tff: FieldBasedLike | bool | None = None, **kwargs: Any) -> vs.VideoNode:
    """
    Apply bob deinterlacing to the given clip.

    Args:
        clip: The input clip.
        tff: Field order of the clip.
        **kwargs: Additional arguments passed to the plugin function.

    Returns:
        Deinterlaced clip.
    """
    field_based = FieldBased.from_param_or_video(fallback(tff, self.tff, default=None), clip, True, self.__class__)

    return self._interpolate(clip, field_based.is_tff, True, False, **kwargs)

copy

copy(**kwargs: Any) -> Self

Returns a new instance of the class with the specified fields replaced by new values.

Source code in vsaa/deinterlacers.py
def copy(self, **kwargs: Any) -> Self:
    """
    Returns a new instance of the class with the specified fields replaced by new values.
    """
    return replace(self, **kwargs)

deinterlace

deinterlace(
    clip: VideoNode,
    *,
    tff: FieldBasedLike | bool | None = None,
    double_rate: bool | None = None,
    **kwargs: Any
) -> VideoNode

Apply deinterlacing to the given clip.

Parameters:

  • clip

    (VideoNode) –

    The input clip.

  • tff

    (FieldBasedLike | bool | None, default: None ) –

    Field order of the clip.

  • double_rate

    (bool | None, default: None ) –

    Whether to double the frame rate (True) or retain the original rate (False).

  • **kwargs

    (Any, default: {} ) –

    Additional arguments passed to the plugin function.

Returns:

  • VideoNode

    Deinterlaced clip.

Source code in vsaa/deinterlacers.py
def deinterlace(
    self,
    clip: vs.VideoNode,
    *,
    tff: FieldBasedLike | bool | None = None,
    double_rate: bool | None = None,
    **kwargs: Any,
) -> vs.VideoNode:
    """
    Apply deinterlacing to the given clip.

    Args:
        clip: The input clip.
        tff: Field order of the clip.
        double_rate: Whether to double the frame rate (True) or retain the original rate (False).
        **kwargs: Additional arguments passed to the plugin function.

    Returns:
        Deinterlaced clip.
    """
    field_based = FieldBased.from_param_or_video(fallback(tff, self.tff, default=None), clip, True, self.__class__)

    return self._interpolate(clip, field_based.is_tff, fallback(double_rate, self.double_rate), False, **kwargs)

get_deint_args abstractmethod

get_deint_args(**kwargs: Any) -> dict[str, Any]

Retrieves arguments for deinterlacing processing.

Parameters:

  • **kwargs

    (Any, default: {} ) –

    Additional arguments.

Returns:

  • dict[str, Any]

    The passed keyword arguments.

Source code in vsaa/deinterlacers.py
@abstractmethod
def get_deint_args(self, **kwargs: Any) -> dict[str, Any]:
    """
    Retrieves arguments for deinterlacing processing.

    Args:
        **kwargs: Additional arguments.

    Returns:
        Passed keyword arguments.
    """
    return kwargs

DeinterlacerKwargs

Bases: dict[str, Any]

A dict-like wrapper that syncs keys with a Deinterlacer instance.

  • If a key matches an attribute of deinterlacer, the value is set on the object instead of stored in the dict.
  • Otherwise, the pair is stored normally.

update() and setdefault() are overridden to respect this behavior.

Methods:

  • setdefault
  • update

Attributes:

deinterlacer instance-attribute

deinterlacer: Deinterlacer

Deinterlacer object.

setdefault

setdefault(key: str, default: Any = None) -> Any

Source code in vsaa/deinterlacers.py
@copy_signature(dict[str, Any].setdefault)
def setdefault(self, key: str, default: Any = None) -> Any:
    if key not in self:
        self[key] = default
    return self[key]

update

update(*args: Any, **kwargs: Any) -> None

Source code in vsaa/deinterlacers.py
@copy_signature(dict[str, Any].update)
def update(self, *args: Any, **kwargs: Any) -> None:
    for k, v in dict(*args, **kwargs).items():
        self[k] = v

EEDI2 dataclass

EEDI2(
    mthresh: int = 10,
    lthresh: int = 20,
    vthresh: int = 20,
    estr: int = 2,
    dstr: int = 4,
    maxd: int = 24,
    map: int = 0,
    nt: int = 50,
    pp: int = 1,
    *,
    tff: bool | None = None,
    double_rate: bool = True,
    transpose_first: bool = False,
    scaler: ComplexScalerLike = Catrom,
    noshift: bool | Sequence[bool] = False
)

Bases: SuperSampler

Enhanced Edge Directed Interpolation (2nd gen.)

Classes:

  • AADirection

    Enum representing the direction(s) in which anti-aliasing should be applied.

Methods:

  • antialias

    Apply anti-aliasing to the given clip.

  • bob

    Apply bob deinterlacing to the given clip.

  • copy

    Returns a new instance of the class with the specified fields replaced by new values.

  • deinterlace

    Apply deinterlacing to the given clip.

  • get_deint_args

    Retrieves arguments for deinterlacing processing.

  • kernel_radius
  • scale

    Scale the given clip using super sampling method.

  • supersample

    Supersample a clip by a given scaling factor.

  • transpose

    Transpose the input clip by swapping its horizontal and vertical axes.

Attributes:

  • double_rate (bool) –

    Whether to double the FPS.

  • dstr (int) –

    Defines the required number of edge pixels (>=) in a 3x3 area, in which the center pixel

  • estr (int) –

    Defines the required number of edge pixels (<=) in a 3x3 area, in which the center pixel

  • lthresh (int) –

    Controls the Laplacian threshold used in edge detection.

  • map (int) –

    Allows one of three possible maps to be shown:

  • maxd (int) –

    Sets the maximum pixel search distance for determining the interpolation direction.

  • mthresh (int) –

    Controls the edge magnitude threshold used in edge detection for building the initial edge map.

  • noshift (bool | Sequence[bool]) –

    Disables sub-pixel shifting after supersampling.

  • nt (int) –

    Defines the noise threshold between pixels in the sliding vectors.

  • pp (int) –

    Enables two optional post-processing modes designed to reduce artifacts by identifying problem areas

  • scaler (ComplexScalerLike) –

    Scaler used for downscaling and shifting after supersampling.

  • tff (bool | None) –

    The field order.

  • transpose_first (bool) –

    Transpose the clip before any operation.

  • vthresh (int) –

    Controls the variance threshold used in edge detection.

double_rate class-attribute instance-attribute

double_rate: bool = True

Whether to double the FPS.

dstr class-attribute instance-attribute

dstr: int = 4

Defines the required number of edge pixels (>=) in a 3x3 area, in which the center pixel has not been detected as an edge pixel, for the center pixel to be added to the edge map.

estr class-attribute instance-attribute

estr: int = 2

Defines the required number of edge pixels (<=) in a 3x3 area, in which the center pixel has been detected as an edge pixel, for the center pixel to be removed from the edge map.

lthresh class-attribute instance-attribute

lthresh: int = 20

Controls the Laplacian threshold used in edge detection. Its range is from 0 to 510, with lower values detecting weaker lines.

map class-attribute instance-attribute

map: int = 0

Allows one of three possible maps to be shown:

  • 0 = No map
  • 1 = Edge map (edge pixels will be set to 255 and non-edge pixels will be set to 0)
  • 2 = Original scale direction map
  • 3 = 2x scale direction map

maxd class-attribute instance-attribute

maxd: int = 24

Sets the maximum pixel search distance for determining the interpolation direction. Larger values allow the algorithm to connect edges and lines with smaller slopes but may introduce artifacts. In some cases, using a smaller maxd value can yield better results than a larger one. The maximum possible value for maxd is 29.

mthresh class-attribute instance-attribute

mthresh: int = 10

Controls the edge magnitude threshold used in edge detection for building the initial edge map. Its range is from 0 to 255, with lower values detecting weaker edges.

noshift class-attribute instance-attribute

noshift: bool | Sequence[bool] = False

Disables sub-pixel shifting after supersampling.

  • bool: Applies to both luma and chroma.
  • Sequence[bool]: First for luma, second for chroma.

nt class-attribute instance-attribute

nt: int = 50

Defines the noise threshold between pixels in the sliding vectors. This value is used to determine initial starting conditions. Lower values typically reduce artifacts but may degrade edge reconstruction, while higher values can enhance edge reconstruction at the cost of introducing more artifacts. The valid range is from 0 to 256.

pp class-attribute instance-attribute

pp: int = 1

Enables two optional post-processing modes designed to reduce artifacts by identifying problem areas and applying simple vertical linear interpolation in those areas. While these modes can improve results, they may slow down processing and slightly reduce edge sharpness.

  • 0 = No post-processing
  • 1 = Check for spatial consistency of final interpolation directions
  • 2 = Check for junctions and corners
  • 3 = Apply both checks from 1 and 2

scaler class-attribute instance-attribute

scaler: ComplexScalerLike = Catrom

Scaler used for downscaling and shifting after supersampling.

tff class-attribute instance-attribute

tff: bool | None = None

The field order.

transpose_first class-attribute instance-attribute

transpose_first: bool = False

Transpose the clip before any operation.

vthresh class-attribute instance-attribute

vthresh: int = 20

Controls the variance threshold used in edge detection. Its range is from 0 to a large number, with lower values detecting weaker edges.

AADirection

Bases: IntFlag

Enum representing the direction(s) in which anti-aliasing should be applied.

Attributes:

  • BOTH

    Apply anti-aliasing in both horizontal and vertical directions.

  • HORIZONTAL

    Apply anti-aliasing in the horizontal direction.

  • VERTICAL

    Apply anti-aliasing in the vertical direction.

BOTH class-attribute instance-attribute

Apply anti-aliasing in both horizontal and vertical directions.

HORIZONTAL class-attribute instance-attribute

HORIZONTAL = auto()

Apply anti-aliasing in the horizontal direction.

VERTICAL class-attribute instance-attribute

VERTICAL = auto()

Apply anti-aliasing in the vertical direction.

antialias

antialias(
    clip: VideoNode, direction: AADirection = BOTH, **kwargs: Any
) -> VideoNode

Apply anti-aliasing to the given clip.

Parameters:

  • clip

    (VideoNode) –

    The input clip.

  • direction

    (AADirection, default: BOTH ) –

    Direction in which to apply anti-aliasing. Defaults to AADirection.BOTH.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments passed to the plugin function.

Returns:

  • VideoNode

    Anti-aliased clip.

Source code in vsaa/deinterlacers.py
def antialias(self, clip: vs.VideoNode, direction: AADirection = AADirection.BOTH, **kwargs: Any) -> vs.VideoNode:
    """
    Apply anti-aliasing to the given clip.

    Args:
        clip: The input clip.
        direction: Direction in which to apply anti-aliasing. Defaults to AADirection.BOTH.
        **kwargs: Additional arguments passed to the plugin function.

    Returns:
        Anti-aliased clip.
    """
    tff = fallback(kwargs.pop("tff", self.tff), True)

    for y in sorted(self.AADirection, key=lambda x: x.value, reverse=self.transpose_first):
        if direction in (y, self.AADirection.BOTH):
            if y == self.AADirection.HORIZONTAL:
                clip, tclips = self.transpose(clip, **kwargs)
                kwargs |= tclips

            clip = self._interpolate(clip, tff, self.double_rate, False, **kwargs)

            if self.double_rate:
                clip = core.std.Merge(clip[::2], clip[1::2])

            if y == self.AADirection.HORIZONTAL:
                clip, tclips = self.transpose(clip, **kwargs)
                kwargs |= tclips

    return clip

bob

bob(
    clip: VideoNode, *, tff: FieldBasedLike | bool | None = None, **kwargs: Any
) -> VideoNode

Apply bob deinterlacing to the given clip.

Parameters:

  • clip

    (VideoNode) –

    The input clip.

  • tff

    (FieldBasedLike | bool | None, default: None ) –

    Field order of the clip.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments passed to the plugin function.

Returns:

  • VideoNode

    Deinterlaced clip.

Source code in vsaa/deinterlacers.py
def bob(self, clip: vs.VideoNode, *, tff: FieldBasedLike | bool | None = None, **kwargs: Any) -> vs.VideoNode:
    """
    Apply bob deinterlacing to the given clip.

    Args:
        clip: The input clip.
        tff: Field order of the clip.
        **kwargs: Additional arguments passed to the plugin function.

    Returns:
        Deinterlaced clip.
    """
    field_based = FieldBased.from_param_or_video(fallback(tff, self.tff, default=None), clip, True, self.__class__)

    return self._interpolate(clip, field_based.is_tff, True, False, **kwargs)

copy

copy(**kwargs: Any) -> Self

Returns a new instance of the class with the specified fields replaced by new values.

Source code in vsaa/deinterlacers.py
def copy(self, **kwargs: Any) -> Self:
    """
    Returns a new instance of the class with the specified fields replaced by new values.
    """
    return replace(self, **kwargs)

deinterlace

deinterlace(
    clip: VideoNode,
    *,
    tff: FieldBasedLike | bool | None = None,
    double_rate: bool | None = None,
    **kwargs: Any
) -> VideoNode

Apply deinterlacing to the given clip.

Parameters:

  • clip

    (VideoNode) –

    The input clip.

  • tff

    (FieldBasedLike | bool | None, default: None ) –

    Field order of the clip.

  • double_rate

    (bool | None, default: None ) –

    Whether to double the frame rate (True) or retain the original rate (False).

  • **kwargs

    (Any, default: {} ) –

    Additional arguments passed to the plugin function.

Returns:

  • VideoNode

    Deinterlaced clip.

Source code in vsaa/deinterlacers.py
def deinterlace(
    self,
    clip: vs.VideoNode,
    *,
    tff: FieldBasedLike | bool | None = None,
    double_rate: bool | None = None,
    **kwargs: Any,
) -> vs.VideoNode:
    """
    Apply deinterlacing to the given clip.

    Args:
        clip: The input clip.
        tff: Field order of the clip.
        double_rate: Whether to double the frame rate (True) or retain the original rate (False).
        **kwargs: Additional arguments passed to the plugin function.

    Returns:
        Deinterlaced clip.
    """
    field_based = FieldBased.from_param_or_video(fallback(tff, self.tff, default=None), clip, True, self.__class__)

    return self._interpolate(clip, field_based.is_tff, fallback(double_rate, self.double_rate), False, **kwargs)

get_deint_args

get_deint_args(**kwargs: Any) -> dict[str, Any]

Retrieves arguments for deinterlacing processing.

Parameters:

  • **kwargs

    (Any, default: {} ) –

    Additional arguments.

Returns:

  • dict[str, Any]

    The deinterlacing arguments merged with the passed keyword arguments.

Source code in vsaa/deinterlacers.py
def get_deint_args(self, **kwargs: Any) -> dict[str, Any]:
    return {
        "mthresh": self.mthresh,
        "lthresh": self.lthresh,
        "vthresh": self.vthresh,
        "estr": self.estr,
        "dstr": self.dstr,
        "maxd": self.maxd,
        "map": self.map,
        "nt": self.nt,
        "pp": self.pp,
    } | kwargs

kernel_radius

kernel_radius() -> int

Source code in vsaa/deinterlacers.py
@Scaler.cachedproperty
def kernel_radius(self) -> int:
    return self.maxd

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNode

Scale the given clip using super sampling method.

Note: Setting tff=True results in less chroma shift for non-centered chroma locations.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • width

    (int | None, default: None ) –

    Target width (defaults to clip width if None).

  • height

    (int | None, default: None ) –

    Target height (defaults to clip height if None).

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the deinterlacing function.

Returns:

  • VideoNode

    The scaled clip.

Source code in vsaa/deinterlacers.py
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any,
) -> vs.VideoNode:
    """
    Scale the given clip using super sampling method.

    Note: Setting `tff=True` results in less chroma shift for non-centered chroma locations.

    Args:
        clip: The source clip.
        width: Target width (defaults to clip width if None).
        height: Target height (defaults to clip height if None).
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the deinterlacing function.

    Returns:
        The scaled clip.
    """
    tff_fallback = fallback(kwargs.pop("tff", self.tff), True)

    dims = self._wh_norm(clip, width, height)
    dest_dimensions = list(dims)
    sy, sx = shift

    cloc = list(ChromaLocation.from_video(clip).get_offsets(clip))
    subsampling = [2**clip.format.subsampling_w, 2**clip.format.subsampling_h]

    nshift: list[list[float]] = [
        normalize_seq(sx, clip.format.num_planes),
        normalize_seq(sy, clip.format.num_planes),
    ]

    if not self.transpose_first:
        dest_dimensions.reverse()
        cloc.reverse()
        subsampling.reverse()
        nshift.reverse()

    for x, dim in enumerate(dest_dimensions):
        is_width = (not x and self.transpose_first) or (not self.transpose_first and x)

        if is_width:
            clip, _ = self.transpose(clip)

        while clip.height < dim:
            delta = max(nshift[x], key=lambda y: abs(y))
            tff = False if delta < 0 else True if delta > 0 else tff_fallback
            offset = -0.25 if tff else 0.25

            for y in range(clip.format.num_planes):
                if not y:
                    nshift[x][y] = (nshift[x][y] + offset) * 2
                else:
                    nshift[x][y] = (nshift[x][y] + offset) * 2 - cloc[x] / subsampling[x]

            clip = self._interpolate(clip, tff, False, True, **kwargs)

        if is_width:
            clip, _ = self.transpose(clip)

    if not self.transpose_first:
        nshift.reverse()

    self._ss_shifts = nshift

    if self.noshift:
        noshift = normalize_seq(self.noshift, clip.format.num_planes)

        if all(noshift) and dims == (clip.width, clip.height):
            return clip

        for ns in nshift:
            for i in range(len(ns)):
                ns[i] *= not noshift[i]

    return ComplexScaler.ensure_obj(self.scaler, self.__class__).scale(clip, width, height, (nshift[1], nshift[0]))
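The shift bookkeeping in the loop above follows a simple rule: each supersampling pass doubles the resolution along one axis, so the accumulated subpixel shift is nudged by a quarter-pixel field offset (whose sign depends on the field order) and then doubled into the new coordinate space. A minimal standalone sketch of the luma branch only, not the library function itself:

```python
def update_shift(shift: float, tff: bool) -> float:
    """One supersampling doubling: apply the quarter-pixel field offset,
    then scale into the doubled-resolution coordinate space."""
    offset = -0.25 if tff else 0.25
    return (shift + offset) * 2


def shift_after_doublings(shift: float, doublings: int, tff: bool) -> float:
    """Accumulated luma shift after repeated doublings along one axis."""
    for _ in range(doublings):
        shift = update_shift(shift, tff)
    return shift
```

This is why the final shift is handed to the downscaling `scaler`: it undoes the offsets the interpolation introduced.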

supersample

supersample(
    clip: VideoNode,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNode

Supersample a clip by a given scaling factor.

Note: Setting tff=True results in less chroma shift for non-centered chroma locations.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • rfactor

    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Raises:

  • CustomValueError –

    If resulting resolution is non-positive.

Returns:

  • VideoNode

    The supersampled clip.

Source code in vsaa/deinterlacers.py
def supersample(
    self, clip: vs.VideoNode, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> vs.VideoNode:
    """
    Supersample a clip by a given scaling factor.

    Note: Setting `tff=True` results in less chroma shift for non-centered chroma locations.

    Args:
        clip: The source clip.
        rfactor: Scaling factor for supersampling.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Raises:
        CustomValueError: If resulting resolution is non-positive.

    Returns:
        The supersampled clip.
    """
    ...

transpose

transpose(
    clip: VideoNode, **kwargs: Any
) -> tuple[VideoNode, Mapping[str, VideoNode | None]]

Transpose the input clip by swapping its horizontal and vertical axes.

Parameters:

  • clip

    (VideoNode) –

    The input clip.

Returns:

  • tuple[VideoNode, Mapping[str, VideoNode | None]]

    The transposed clip.

Source code in vsaa/deinterlacers.py
def transpose(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[vs.VideoNode, Mapping[str, vs.VideoNode | None]]:
    """
    Transpose the input clip by swapping its horizontal and vertical axes.

    Args:
        clip: The input clip.

    Returns:
        The transposed clip.
    """
    return clip.std.Transpose(), {}

EEDI3 dataclass

EEDI3(
    alpha: float = 0.2,
    beta: float = 0.25,
    gamma: float = 20.0,
    nrad: int = 2,
    mdis: int = 20,
    ucubic: bool = True,
    cost3: bool = True,
    vcheck: int = 2,
    vthresh: tuple[float | None, float | None, float | None] | None = (
        32.0,
        64.0,
        4.0,
    ),
    sclip: VideoNode | Deinterlacer | VSFunctionNoArgs | None = None,
    mclip: VideoNode | VSFunctionNoArgs | None = None,
    *,
    tff: bool | None = None,
    double_rate: bool = True,
    transpose_first: bool = False,
    scaler: ComplexScalerLike = Catrom,
    noshift: bool | Sequence[bool] = False
)

Bases: SuperSampler

Enhanced Edge Directed Interpolation (3rd gen.)

Classes:

  • AADirection

    Enum representing the direction(s) in which anti-aliasing should be applied.

Methods:

  • antialias

    Apply anti-aliasing to the given clip.

  • bob

    Apply bob deinterlacing to the given clip.

  • copy

    Returns a new AntiAliaser instance, replacing specified fields with new values.

  • deinterlace

    Apply deinterlacing to the given clip.

  • get_deint_args

    Retrieves arguments for deinterlacing processing.

  • kernel_radius
  • scale

    Scale the given clip using super sampling method.

  • supersample

    Supersample a clip by a given scaling factor.

  • transpose

    Transpose the input clip by swapping its horizontal and vertical axes.

Attributes:

  • alpha (float) –

    Controls the weight given to connecting similar neighborhoods.

  • beta (float) –

    Controls the weight given to the vertical difference created by the interpolation.

  • cost3 (bool) –

    Defines the neighborhood cost function used to measure similarity.

  • double_rate (bool) –

    Whether to double the FPS.

  • gamma (float) –

    Penalizes changes in interpolation direction.

  • mclip (VideoNode | VSFunctionNoArgs | None) –

    A mask used to apply edge-directed interpolation only to specified pixels.

  • mdis (int) –

    Sets the maximum connection radius. The valid range is [1, 40].

  • noshift (bool | Sequence[bool]) –

    Disables sub-pixel shifting after supersampling.

  • nrad (int) –

    Sets the radius used for computing neighborhood similarity. The valid range is [0, 3].

  • scaler (ComplexScalerLike) –

    Scaler used for downscaling and shifting after supersampling.

  • sclip (VideoNode | Deinterlacer | VSFunctionNoArgs | None) –

    Provides additional control over the interpolation by using a reference clip.

  • tff (bool | None) –

    The field order.

  • transpose_first (bool) –

    Transpose the clip before any operation.

  • ucubic (bool) –

    Determines the type of interpolation used.

  • vcheck (int) –

    Defines the reliability check level for the resulting interpolation. The possible values are:

  • vthresh (tuple[float | None, float | None, float | None] | None) –

    Sequence of three thresholds:

alpha class-attribute instance-attribute

alpha: float = 0.2

Controls the weight given to connecting similar neighborhoods. It must be in the range [0, 1]. A larger value for alpha will connect more lines and edges. Increasing alpha prioritizes connecting similar regions, which can reduce artifacts but may lead to excessive connections.

beta class-attribute instance-attribute

beta: float = 0.25

Controls the weight given to the vertical difference created by the interpolation. It must also be in the range [0, 1], and the sum of alpha and beta must not exceed 1. A larger value for beta will reduce the number of connected lines and edges, making the result less directed by edges. At a value of 1.0, there will be no edge-directed interpolation at all.

cost3 class-attribute instance-attribute

cost3: bool = True

Defines the neighborhood cost function used to measure similarity.

  • When cost3=True, a 3-neighborhood cost function is used.
  • When cost3=False, a 1-neighborhood cost function is applied.

double_rate class-attribute instance-attribute

double_rate: bool = True

Whether to double the FPS.

gamma class-attribute instance-attribute

gamma: float = 20.0

Penalizes changes in interpolation direction. The larger the value of gamma, the smoother the interpolation field will be between two lines. The range for gamma is [0, ∞). Increasing gamma results in a smoother interpolation between lines but may reduce the sharpness of edges.

If lines are not connecting properly, try increasing alpha and possibly decreasing beta/gamma. If unwanted artifacts occur, reduce alpha and consider increasing beta or gamma.

mclip class-attribute instance-attribute

mclip: VideoNode | VSFunctionNoArgs | None = None

A mask used to apply edge-directed interpolation only to specified pixels. Pixels where the mask value is 0 will be interpolated using cubic linear or bicubic methods instead. The primary purpose of the mask is to reduce computational overhead by limiting edge-directed interpolation to certain pixels.

mdis class-attribute instance-attribute

mdis: int = 20

Sets the maximum connection radius. The valid range is [1, 40]. For example, with mdis=20, when interpolating the pixel at (50, 10) (x, y), the farthest connections allowed would be between (30, 9)/(70, 11) and (70, 9)/(30, 11). Larger values for mdis will allow connecting lines with smaller slopes, but this can also increase the chance of artifacts and slow down processing.

noshift class-attribute instance-attribute

noshift: bool | Sequence[bool] = False

Disables sub-pixel shifting after supersampling.

  • bool: Applies to both luma and chroma.
  • Sequence[bool]: First for luma, second for chroma.

nrad class-attribute instance-attribute

nrad: int = 2

Sets the radius used for computing neighborhood similarity. The valid range is [0, 3]. A larger value for nrad will consider a wider neighborhood for similarity, which can improve edge connections but may also increase processing time.

scaler class-attribute instance-attribute

Scaler used for downscaling and shifting after supersampling.

sclip class-attribute instance-attribute

sclip: VideoNode | Deinterlacer | VSFunctionNoArgs | None = None

Provides additional control over the interpolation by using a reference clip. If set to None, vertical cubic interpolation is used as a fallback method instead.

Passing a Deinterlacer object is only supported for pure deinterlacing.

tff class-attribute instance-attribute

tff: bool | None = None

The field order.

transpose_first class-attribute instance-attribute

transpose_first: bool = False

Transpose the clip before any operation.

ucubic class-attribute instance-attribute

ucubic: bool = True

Determines the type of interpolation used.

  • When ucubic=True, cubic 4-point interpolation is applied.
  • When ucubic=False, 2-point linear interpolation is used.

vcheck class-attribute instance-attribute

vcheck: int = 2

Defines the reliability check level for the resulting interpolation. The possible values are:

  • 0: No reliability check
  • 1: Weak reliability check
  • 2: Medium reliability check
  • 3: Strong reliability check

vthresh class-attribute instance-attribute

vthresh: tuple[float | None, float | None, float | None] | None = (
    32.0,
    64.0,
    4.0,
)

Sequence of three thresholds:

  • vthresh[0]: Used to calculate the reliability for the first difference.
  • vthresh[1]: Used for the second difference.
  • vthresh[2]: Controls the weighting of the interpolation direction.

AADirection

Bases: IntFlag

Enum representing the direction(s) in which anti-aliasing should be applied.

Attributes:

  • BOTH

    Apply anti-aliasing in both horizontal and vertical directions.

  • HORIZONTAL

    Apply anti-aliasing in the horizontal direction.

  • VERTICAL

    Apply anti-aliasing in the vertical direction.

BOTH class-attribute instance-attribute

Apply anti-aliasing in both horizontal and vertical directions.

HORIZONTAL class-attribute instance-attribute

HORIZONTAL = auto()

Apply anti-aliasing in the horizontal direction.

VERTICAL class-attribute instance-attribute

VERTICAL = auto()

Apply anti-aliasing in the vertical direction.
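Because AADirection is an IntFlag, BOTH is simply the union of the two single-direction flags, which is what lets callers pass either one direction or both at once. A minimal stand-in sketch (member order and underlying values are assumptions, not taken from the source):

```python
from enum import IntFlag, auto


class AADirection(IntFlag):
    """Minimal stand-in for vsaa's AADirection flag enum."""

    VERTICAL = auto()
    HORIZONTAL = auto()
    # BOTH is the union of the two single-direction flags.
    BOTH = VERTICAL | HORIZONTAL
```

Flag membership then works as expected: `AADirection.HORIZONTAL in AADirection.BOTH` is true, so a method receiving BOTH processes each axis in turn.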

antialias

antialias(
    clip: VideoNode, direction: AADirection = BOTH, **kwargs: Any
) -> VideoNode

Apply anti-aliasing to the given clip.

Parameters:

  • clip

    (VideoNode) –

    The input clip.

  • direction

    (AADirection, default: BOTH ) –

    Direction in which to apply anti-aliasing. Defaults to AADirection.BOTH.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments passed to the plugin function.

Returns:

  • VideoNode

    Anti-aliased clip.

Source code in vsaa/deinterlacers.py
def antialias(
    self, clip: vs.VideoNode, direction: AntiAliaser.AADirection = AntiAliaser.AADirection.BOTH, **kwargs: Any
) -> vs.VideoNode:
    kwargs = self.get_deint_args(**kwargs)

    sclip, mclip = kwargs.pop("sclip"), kwargs.pop("mclip")

    if isinstance(sclip, Deinterlacer):
        raise CustomValueError("sclip must be a callable or VideoNode", self.antialias)

    if sclip and self.double_rate:
        if isinstance(sclip, VSFunctionNoArgs):
            sclip = sclip(clip)

        sclip = core.std.Interleave([sclip, sclip])

    if isinstance(mclip, VSFunctionNoArgs):
        mclip = mclip(clip)

    return super().antialias(clip, direction, sclip=sclip, mclip=mclip, **kwargs)

bob

bob(
    clip: VideoNode, *, tff: FieldBasedLike | bool | None = None, **kwargs: Any
) -> VideoNode

Apply bob deinterlacing to the given clip.

Parameters:

  • clip

    (VideoNode) –

    The input clip.

  • tff

    (FieldBasedLike | bool | None, default: None ) –

    Field order of the clip.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments passed to the plugin function.

Returns:

  • VideoNode

    Deinterlaced clip.

Source code in vsaa/deinterlacers.py
def bob(self, clip: vs.VideoNode, *, tff: FieldBasedLike | bool | None = None, **kwargs: Any) -> vs.VideoNode:
    kwargs = self.get_deint_args(**kwargs)

    sclip, mclip = kwargs.pop("sclip"), kwargs.pop("mclip")

    if isinstance(sclip, Deinterlacer):
        sclip = sclip.bob(clip, tff=tff)

    if callable(sclip):
        sclip = sclip(clip)

    if callable(mclip):
        mclip = mclip(clip)

    return super().bob(clip, tff=tff, sclip=sclip, mclip=mclip, **kwargs)

copy

copy(**kwargs: Any) -> Self

Returns a new AntiAliaser instance, replacing specified fields with new values.

Source code in vsaa/deinterlacers.py
def copy(self, **kwargs: Any) -> Self:
    """
    Returns a new Antialiaser class replacing specified fields with new values
    """
    return replace(self, **kwargs)
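As the source shows, copy is a thin wrapper around dataclasses.replace: fields not named in kwargs are carried over unchanged, and the original object is untouched. A hedged sketch with a hypothetical dataclass (the field names here are illustrative, not the real AntiAliaser fields):

```python
from dataclasses import dataclass, replace
from typing import Any


@dataclass
class Settings:
    # Hypothetical fields standing in for an antialiaser's options.
    alpha: float = 0.2
    beta: float = 0.25

    def copy(self, **kwargs: Any) -> "Settings":
        """Return a new instance with the given fields replaced."""
        return replace(self, **kwargs)


base = Settings()
tweaked = base.copy(alpha=0.5)  # beta stays 0.25; base is unchanged
```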

deinterlace

deinterlace(
    clip: VideoNode,
    *,
    tff: FieldBasedLike | bool | None = None,
    double_rate: bool | None = None,
    **kwargs: Any
) -> VideoNode

Apply deinterlacing to the given clip.

Parameters:

  • clip

    (VideoNode) –

    The input clip.

  • tff

    (FieldBasedLike | bool | None, default: None ) –

    Field order of the clip.

  • double_rate

    (bool | None, default: None ) –

    Whether to double the frame rate (True) or retain the original rate (False).

  • **kwargs

    (Any, default: {} ) –

    Additional arguments passed to the plugin function.

Returns:

  • VideoNode

    Deinterlaced clip.

Source code in vsaa/deinterlacers.py
def deinterlace(
    self,
    clip: vs.VideoNode,
    *,
    tff: FieldBasedLike | bool | None = None,
    double_rate: bool | None = None,
    **kwargs: Any,
) -> vs.VideoNode:
    kwargs = self.get_deint_args(**kwargs)

    sclip, mclip = kwargs.pop("sclip"), kwargs.pop("mclip")

    if isinstance(sclip, Deinterlacer):
        sclip = sclip.deinterlace(clip, tff=tff, double_rate=double_rate)

    if callable(sclip):
        sclip = sclip(clip)

    if callable(mclip):
        mclip = mclip(clip)

    return super().deinterlace(clip, tff=tff, double_rate=double_rate, sclip=sclip, mclip=mclip, **kwargs)

get_deint_args

get_deint_args(**kwargs: Any) -> dict[str, Any]

Retrieves arguments for deinterlacing processing.

Parameters:

  • **kwargs

    (Any, default: {} ) –

    Additional arguments.

Returns:

Source code in vsaa/deinterlacers.py
def get_deint_args(self, **kwargs: Any) -> dict[str, Any]:
    vthresh = (None, None, None) if self.vthresh is None else self.vthresh

    return {
        "alpha": self.alpha,
        "beta": self.beta,
        "gamma": self.gamma,
        "nrad": self.nrad,
        "mdis": self.mdis,
        "ucubic": self.ucubic,
        "cost3": self.cost3,
        "vcheck": self.vcheck,
        "vthresh0": vthresh[0],
        "vthresh1": vthresh[1],
        "vthresh2": vthresh[2],
        "sclip": self.sclip,
        "mclip": self.mclip,
    } | kwargs

kernel_radius

kernel_radius() -> int
Source code in vsaa/deinterlacers.py
@Scaler.cachedproperty
def kernel_radius(self) -> int:
    return self.mdis

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNode

Scale the given clip using the supersampling method.

Note: Setting tff=True results in less chroma shift for non-centered chroma locations.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • width

    (int | None, default: None ) –

    Target width (defaults to clip width if None).

  • height

    (int | None, default: None ) –

    Target height (defaults to clip height if None).

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the deinterlacing function.

Returns:

  • VideoNode

    The scaled clip.

Source code in vsaa/deinterlacers.py
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any,
) -> vs.VideoNode:
    kwargs = self.get_deint_args(**kwargs)

    if kwargs["sclip"] or kwargs["mclip"]:
        raise CustomNotImplementedError("sclip and mclip are currently not supported.", self.scale)

    return super().scale(clip, width, height, shift, **kwargs)

supersample

supersample(
    clip: VideoNode,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNode

Supersample a clip by a given scaling factor.

Note: Setting tff=True results in less chroma shift for non-centered chroma locations.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • rfactor

    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Raises:

  • CustomValueError –

    If resulting resolution is non-positive.

Returns:

  • VideoNode

    The supersampled clip.

Source code in vsaa/deinterlacers.py
def supersample(
    self, clip: vs.VideoNode, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> vs.VideoNode:
    """
    Supersample a clip by a given scaling factor.

    Note: Setting `tff=True` results in less chroma shift for non-centered chroma locations.

    Args:
        clip: The source clip.
        rfactor: Scaling factor for supersampling.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Raises:
        CustomValueError: If resulting resolution is non-positive.

    Returns:
        The supersampled clip.
    """
    ...

transpose

transpose(
    clip: VideoNode,
    *,
    sclip: VideoNode | None = None,
    mclip: VideoNode | None = None,
    **kwargs: Any
) -> tuple[VideoNode, Mapping[str, VideoNode | None]]

Transpose the input clip by swapping its horizontal and vertical axes.

Parameters:

  • clip

    (VideoNode) –

    The input clip.

Returns:

  • tuple[VideoNode, Mapping[str, VideoNode | None]]

    The transposed clip.

Source code in vsaa/deinterlacers.py
def transpose(
    self,
    clip: vs.VideoNode,
    *,
    sclip: vs.VideoNode | None = None,
    mclip: vs.VideoNode | None = None,
    **kwargs: Any,
) -> tuple[vs.VideoNode, Mapping[str, vs.VideoNode | None]]:
    # At this point, sclip and mclip can only be a VideoNode or None
    if isinstance(sclip, vs.VideoNode):
        sclip = sclip.std.Transpose()

    if isinstance(mclip, vs.VideoNode):
        mclip = mclip.std.Transpose()

    return clip.std.Transpose(), kwargs | {"sclip": sclip, "mclip": mclip}

NNEDI3 dataclass

NNEDI3(
    nsize: int = 0,
    nns: int = 4,
    qual: int = 2,
    etype: int = 0,
    pscrn: int | None = None,
    opencl: bool = False,
    *,
    tff: bool | None = None,
    double_rate: bool = True,
    transpose_first: bool = False,
    scaler: ComplexScalerLike = Catrom,
    noshift: bool | Sequence[bool] = False
)

Bases: SuperSampler

Neural Network Edge Directed Interpolation (3rd gen.)

More information: https://github.com/sekrit-twc/znedi3

Classes:

  • AADirection

    Enum representing the direction(s) in which anti-aliasing should be applied.

Methods:

  • antialias

    Apply anti-aliasing to the given clip.

  • bob

    Apply bob deinterlacing to the given clip.

  • copy

    Returns a new AntiAliaser instance, replacing specified fields with new values.

  • deinterlace

    Apply deinterlacing to the given clip.

  • get_deint_args

    Retrieves arguments for deinterlacing processing.

  • kernel_radius
  • scale

    Scale the given clip using super sampling method.

  • supersample

    Supersample a clip by a given scaling factor.

  • transpose

    Transpose the input clip by swapping its horizontal and vertical axes.

Attributes:

  • double_rate (bool) –

    Whether to double the FPS.

  • etype (int) –

    The set of weights used in the predictor neural network. Possible values:

  • nns (int) –

    Number of neurons in the predictor neural network. Possible values:

  • noshift (bool | Sequence[bool]) –

    Disables sub-pixel shifting after supersampling.

  • nsize (int) –

    Size of the local neighbourhood around each pixel used by the predictor neural network.

  • opencl (bool) –

    Enables the use of the OpenCL variant.

  • pscrn (int | None) –

    The prescreener used to decide which pixels should be processed by the predictor neural network,

  • qual (int) –

    The number of different neural network predictions that are blended together to compute the final output value.

  • scaler (ComplexScalerLike) –

    Scaler used for downscaling and shifting after supersampling.

  • tff (bool | None) –

    The field order.

  • transpose_first (bool) –

    Transpose the clip before any operation.

double_rate class-attribute instance-attribute

double_rate: bool = True

Whether to double the FPS.

etype class-attribute instance-attribute

etype: int = 0

The set of weights used in the predictor neural network. Possible values:

  • 0: Weights trained to minimise absolute error.
  • 1: Weights trained to minimise squared error.

nns class-attribute instance-attribute

nns: int = 4

Number of neurons in the predictor neural network. Possible values:

  • 0: 16
  • 1: 32
  • 2: 64
  • 3: 128
  • 4: 256

Wrapper default is 4, plugin default is 1.

noshift class-attribute instance-attribute

noshift: bool | Sequence[bool] = False

Disables sub-pixel shifting after supersampling.

  • bool: Applies to both luma and chroma.
  • Sequence[bool]: First for luma, second for chroma.

nsize class-attribute instance-attribute

nsize: int = 0

Size of the local neighbourhood around each pixel used by the predictor neural network. Possible settings:

  • 0: 8x6
  • 1: 16x6
  • 2: 32x6
  • 3: 48x6
  • 4: 8x4
  • 5: 16x4
  • 6: 32x4

Wrapper default is 0, plugin default is 6.

opencl class-attribute instance-attribute

opencl: bool = False

Enables the use of the OpenCL variant.

pscrn class-attribute instance-attribute

pscrn: int | None = None

The prescreener used to decide which pixels should be processed by the predictor neural network, and which can be handled by simple cubic interpolation. Since most pixels can be handled by cubic interpolation, using the prescreener generally results in much faster processing. Possible values:

  • 0: No prescreening. No pixels will be processed with cubic interpolation. This is really slow.
  • 1: Old prescreener.
  • 2: New prescreener level 0.
  • 3: New prescreener level 1.
  • 4: New prescreener level 2.

The new prescreener is not available with float input.

  • Wrapper default is 4 for integer input and 1 for float input. When opencl=True it is always 1.
  • Plugin default is 2 for integer input and 1 for float input.

qual class-attribute instance-attribute

qual: int = 2

The number of different neural network predictions that are blended together to compute the final output value. Each neural network was trained on a different set of training data. Blending the results of these different networks improves generalisation to unseen data. Possible values are 1 and 2.

Wrapper default is 2, plugin default is 1.

scaler class-attribute instance-attribute

Scaler used for downscaling and shifting after supersampling.

tff class-attribute instance-attribute

tff: bool | None = None

The field order.

transpose_first class-attribute instance-attribute

transpose_first: bool = False

Transpose the clip before any operation.

AADirection

Bases: IntFlag

Enum representing the direction(s) in which anti-aliasing should be applied.

Attributes:

  • BOTH

    Apply anti-aliasing in both horizontal and vertical directions.

  • HORIZONTAL

    Apply anti-aliasing in the horizontal direction.

  • VERTICAL

    Apply anti-aliasing in the vertical direction.

BOTH class-attribute instance-attribute

Apply anti-aliasing in both horizontal and vertical directions.

HORIZONTAL class-attribute instance-attribute

HORIZONTAL = auto()

Apply anti-aliasing in the horizontal direction.

VERTICAL class-attribute instance-attribute

VERTICAL = auto()

Apply anti-aliasing in the vertical direction.

antialias

antialias(
    clip: VideoNode, direction: AADirection = BOTH, **kwargs: Any
) -> VideoNode

Apply anti-aliasing to the given clip.

Parameters:

  • clip

    (VideoNode) –

    The input clip.

  • direction

    (AADirection, default: BOTH ) –

    Direction in which to apply anti-aliasing. Defaults to AADirection.BOTH.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments passed to the plugin function.

Returns:

  • VideoNode

    Anti-aliased clip.

Source code in vsaa/deinterlacers.py
def antialias(self, clip: vs.VideoNode, direction: AADirection = AADirection.BOTH, **kwargs: Any) -> vs.VideoNode:
    """
    Apply anti-aliasing to the given clip.

    Args:
        clip: The input clip.
        direction: Direction in which to apply anti-aliasing. Defaults to AADirection.BOTH.
        **kwargs: Additional arguments passed to the plugin function.

    Returns:
        Anti-aliased clip.
    """
    tff = fallback(kwargs.pop("tff", self.tff), True)

    for y in sorted(self.AADirection, key=lambda x: x.value, reverse=self.transpose_first):
        if direction in (y, self.AADirection.BOTH):
            if y == self.AADirection.HORIZONTAL:
                clip, tclips = self.transpose(clip, **kwargs)
                kwargs |= tclips

            clip = self._interpolate(clip, tff, self.double_rate, False, **kwargs)

            if self.double_rate:
                clip = core.std.Merge(clip[::2], clip[1::2])

            if y == self.AADirection.HORIZONTAL:
                clip, tclips = self.transpose(clip, **kwargs)
                kwargs |= tclips

    return clip

bob

bob(
    clip: VideoNode, *, tff: FieldBasedLike | bool | None = None, **kwargs: Any
) -> VideoNode

Apply bob deinterlacing to the given clip.

Parameters:

  • clip

    (VideoNode) –

    The input clip.

  • tff

    (FieldBasedLike | bool | None, default: None ) –

    Field order of the clip.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments passed to the plugin function.

Returns:

  • VideoNode

    Deinterlaced clip.

Source code in vsaa/deinterlacers.py
def bob(self, clip: vs.VideoNode, *, tff: FieldBasedLike | bool | None = None, **kwargs: Any) -> vs.VideoNode:
    """
    Apply bob deinterlacing to the given clip.

    Args:
        clip: The input clip.
        tff: Field order of the clip.
        **kwargs: Additional arguments passed to the plugin function.

    Returns:
        Deinterlaced clip.
    """
    field_based = FieldBased.from_param_or_video(fallback(tff, self.tff, default=None), clip, True, self.__class__)

    return self._interpolate(clip, field_based.is_tff, True, False, **kwargs)

copy

copy(**kwargs: Any) -> Self

Returns a new AntiAliaser instance, replacing specified fields with new values.

Source code in vsaa/deinterlacers.py
def copy(self, **kwargs: Any) -> Self:
    """
    Returns a new Antialiaser class replacing specified fields with new values
    """
    return replace(self, **kwargs)

deinterlace

deinterlace(
    clip: VideoNode,
    *,
    tff: FieldBasedLike | bool | None = None,
    double_rate: bool | None = None,
    **kwargs: Any
) -> VideoNode

Apply deinterlacing to the given clip.

Parameters:

  • clip

    (VideoNode) –

    The input clip.

  • tff

    (FieldBasedLike | bool | None, default: None ) –

    Field order of the clip.

  • double_rate

    (bool | None, default: None ) –

    Whether to double the frame rate (True) or retain the original rate (False).

  • **kwargs

    (Any, default: {} ) –

    Additional arguments passed to the plugin function.

Returns:

  • VideoNode

    Deinterlaced clip.

Source code in vsaa/deinterlacers.py
def deinterlace(
    self,
    clip: vs.VideoNode,
    *,
    tff: FieldBasedLike | bool | None = None,
    double_rate: bool | None = None,
    **kwargs: Any,
) -> vs.VideoNode:
    """
    Apply deinterlacing to the given clip.

    Args:
        clip: The input clip.
        tff: Field order of the clip.
        double_rate: Whether to double the frame rate (True) or retain the original rate (False).
        **kwargs: Additional arguments passed to the plugin function.

    Returns:
        Deinterlaced clip.
    """
    field_based = FieldBased.from_param_or_video(fallback(tff, self.tff, default=None), clip, True, self.__class__)

    return self._interpolate(clip, field_based.is_tff, fallback(double_rate, self.double_rate), False, **kwargs)

get_deint_args

get_deint_args(
    *, clip: VideoNode | None = None, **kwargs: Any
) -> dict[str, Any]

Retrieves arguments for deinterlacing processing.

Parameters:

  • **kwargs

    (Any, default: {} ) –

    Additional arguments.

Returns:

Source code in vsaa/deinterlacers.py
def get_deint_args(self, *, clip: vs.VideoNode | None = None, **kwargs: Any) -> dict[str, Any]:
    pscrn = (
        fallback(self.pscrn, 1 if self.opencl or clip.format.sample_type is vs.FLOAT else 4) if clip else self.pscrn
    )

    return {
        "nsize": self.nsize,
        "nns": self.nns,
        "qual": self.qual,
        "etype": self.etype,
        "pscrn": pscrn,
    } | kwargs

kernel_radius

kernel_radius() -> int
Source code in vsaa/deinterlacers.py
@Scaler.cachedproperty
def kernel_radius(self) -> int:
    match self.nsize:
        case 0 | 4:
            return 8
        case 1 | 5:
            return 16
        case 3:
            return 48
        case _:
            return 32

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNode

Scale the given clip using the supersampling method.

Note: Setting tff=True results in less chroma shift for non-centered chroma locations.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • width

    (int | None, default: None ) –

    Target width (defaults to clip width if None).

  • height

    (int | None, default: None ) –

    Target height (defaults to clip height if None).

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the deinterlacing function.

Returns:

  • VideoNode

    The scaled clip.

Source code in vsaa/deinterlacers.py
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any,
) -> vs.VideoNode:
    """
    Scale the given clip using the supersampling method.

    Note: Setting `tff=True` results in less chroma shift for non-centered chroma locations.

    Args:
        clip: The source clip.
        width: Target width (defaults to clip width if None).
        height: Target height (defaults to clip height if None).
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the deinterlacing function.

    Returns:
        The scaled clip.
    """
    tff_fallback = fallback(kwargs.pop("tff", self.tff), True)

    dims = self._wh_norm(clip, width, height)
    dest_dimensions = list(dims)
    sy, sx = shift

    cloc = list(ChromaLocation.from_video(clip).get_offsets(clip))
    subsampling = [2**clip.format.subsampling_w, 2**clip.format.subsampling_h]

    nshift: list[list[float]] = [
        normalize_seq(sx, clip.format.num_planes),
        normalize_seq(sy, clip.format.num_planes),
    ]

    if not self.transpose_first:
        dest_dimensions.reverse()
        cloc.reverse()
        subsampling.reverse()
        nshift.reverse()

    for x, dim in enumerate(dest_dimensions):
        is_width = (not x and self.transpose_first) or (not self.transpose_first and x)

        if is_width:
            clip, _ = self.transpose(clip)

        while clip.height < dim:
            delta = max(nshift[x], key=lambda y: abs(y))
            tff = False if delta < 0 else True if delta > 0 else tff_fallback
            offset = -0.25 if tff else 0.25

            for y in range(clip.format.num_planes):
                if not y:
                    nshift[x][y] = (nshift[x][y] + offset) * 2
                else:
                    nshift[x][y] = (nshift[x][y] + offset) * 2 - cloc[x] / subsampling[x]

            clip = self._interpolate(clip, tff, False, True, **kwargs)

        if is_width:
            clip, _ = self.transpose(clip)

    if not self.transpose_first:
        nshift.reverse()

    self._ss_shifts = nshift

    if self.noshift:
        noshift = normalize_seq(self.noshift, clip.format.num_planes)

        if all(noshift) and dims == (clip.width, clip.height):
            return clip

        for ns in nshift:
            for i in range(len(ns)):
                ns[i] *= not noshift[i]

    return ComplexScaler.ensure_obj(self.scaler, self.__class__).scale(clip, width, height, (nshift[1], nshift[0]))
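The per-pass shift bookkeeping can be isolated: each field doubling picks the field order from the sign of the accumulated shift (falling back to `tff` when it is zero) and updates the shift as `(shift + offset) * 2`. A sketch of the luma-only case:

```python
def doubled_luma_shift(shift: float, passes: int, tff_fallback: bool = True) -> float:
    # Mirror the loop body above: field order follows the sign of the
    # accumulated shift; offset is -0.25 for top-field-first, else 0.25.
    for _ in range(passes):
        tff = False if shift < 0 else True if shift > 0 else tff_fallback
        offset = -0.25 if tff else 0.25
        shift = (shift + offset) * 2
    return shift


print(doubled_luma_shift(0.0, 1))  # -0.5
print(doubled_luma_shift(0.0, 2))  # -0.5
```

The sign-driven field-order flip is what keeps the shift bounded: repeated doublings settle at a constant sub-pixel offset instead of growing without limit.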

supersample

supersample(
    clip: VideoNode,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNode

Supersample a clip by a given scaling factor.

Note: Setting tff=True results in less chroma shift for non-centered chroma locations.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • rfactor

    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Returns:

  • VideoNode

    The supersampled clip.

Source code in vsaa/deinterlacers.py
def supersample(
    self, clip: vs.VideoNode, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> vs.VideoNode:
    """
    Supersample a clip by a given scaling factor.

    Note: Setting `tff=True` results in less chroma shift for non-centered chroma locations.

    Args:
        clip: The source clip.
        rfactor: Scaling factor for supersampling.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Raises:
        CustomValueError: If resulting resolution is non-positive.

    Returns:
        The supersampled clip.
    """
    ...

transpose

transpose(
    clip: VideoNode, **kwargs: Any
) -> tuple[VideoNode, Mapping[str, VideoNode | None]]

Transpose the input clip by swapping its horizontal and vertical axes.

Parameters:

  • clip

    (VideoNode) –

    The input clip.

Returns:

  • tuple[VideoNode, Mapping[str, VideoNode | None]]

    The transposed clip.

Source code in vsaa/deinterlacers.py
def transpose(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[vs.VideoNode, Mapping[str, vs.VideoNode | None]]:
    """
    Transpose the input clip by swapping its horizontal and vertical axes.

    Args:
        clip: The input clip.

    Returns:
        The transposed clip.
    """
    return clip.std.Transpose(), {}

SangNom dataclass

SangNom(
    aa: int | Sequence[int] | None = None,
    *,
    tff: bool | None = None,
    double_rate: bool = True,
    transpose_first: bool = False,
    scaler: ComplexScalerLike = Catrom,
    noshift: bool | Sequence[bool] = False
)

Bases: SuperSampler

SangNom single field deinterlacer using edge-directed interpolation

Classes:

  • AADirection

    Enum representing the direction(s) in which anti-aliasing should be applied.

Methods:

  • antialias

    Apply anti-aliasing to the given clip.

  • bob

    Apply bob deinterlacing to the given clip.

  • copy

    Returns a new Antialiaser class replacing specified fields with new values

  • deinterlace

    Apply deinterlacing to the given clip.

  • get_deint_args

    Retrieves arguments for deinterlacing processing.

  • scale

    Scale the given clip using super sampling method.

  • supersample

    Supersample a clip by a given scaling factor.

  • transpose

    Transpose the input clip by swapping its horizontal and vertical axes.

Attributes:

aa class-attribute instance-attribute

aa: int | Sequence[int] | None = None

The strength of luma anti-aliasing, applied to an 8-bit clip. Must be an integer between 0 and 128, inclusive.

double_rate class-attribute instance-attribute

double_rate: bool = True

Whether to double the FPS.

noshift class-attribute instance-attribute

noshift: bool | Sequence[bool] = False

Disables sub-pixel shifting after supersampling.

  • bool: Applies to both luma and chroma.
  • Sequence[bool]: First for luma, second for chroma.

scaler class-attribute instance-attribute

scaler: ComplexScalerLike = Catrom

Scaler used for downscaling and shifting after supersampling.

tff class-attribute instance-attribute

tff: bool | None = None

The field order.

transpose_first class-attribute instance-attribute

transpose_first: bool = False

Transpose the clip before any operation.

AADirection

Bases: IntFlag

Enum representing the direction(s) in which anti-aliasing should be applied.

Attributes:

  • BOTH

    Apply anti-aliasing in both horizontal and vertical directions.

  • HORIZONTAL

    Apply anti-aliasing in the horizontal direction.

  • VERTICAL

    Apply anti-aliasing in the vertical direction.

BOTH class-attribute instance-attribute

Apply anti-aliasing in both horizontal and vertical directions.

HORIZONTAL class-attribute instance-attribute

HORIZONTAL = auto()

Apply anti-aliasing in the horizontal direction.

VERTICAL class-attribute instance-attribute

VERTICAL = auto()

Apply anti-aliasing in the vertical direction.

antialias

antialias(
    clip: VideoNode, direction: AADirection = BOTH, **kwargs: Any
) -> VideoNode

Apply anti-aliasing to the given clip.

Parameters:

  • clip

    (VideoNode) –

    The input clip.

  • direction

    (AADirection, default: BOTH ) –

    Direction in which to apply anti-aliasing. Defaults to AADirection.BOTH.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments passed to the plugin function.

Returns:

  • VideoNode

    Anti-aliased clip.

Source code in vsaa/deinterlacers.py
def antialias(self, clip: vs.VideoNode, direction: AADirection = AADirection.BOTH, **kwargs: Any) -> vs.VideoNode:
    """
    Apply anti-aliasing to the given clip.

    Args:
        clip: The input clip.
        direction: Direction in which to apply anti-aliasing. Defaults to AADirection.BOTH.
        **kwargs: Additional arguments passed to the plugin function.

    Returns:
        Anti-aliased clip.
    """
    tff = fallback(kwargs.pop("tff", self.tff), True)

    for y in sorted(self.AADirection, key=lambda x: x.value, reverse=self.transpose_first):
        if direction in (y, self.AADirection.BOTH):
            if y == self.AADirection.HORIZONTAL:
                clip, tclips = self.transpose(clip, **kwargs)
                kwargs |= tclips

            clip = self._interpolate(clip, tff, self.double_rate, False, **kwargs)

            if self.double_rate:
                clip = core.std.Merge(clip[::2], clip[1::2])

            if y == self.AADirection.HORIZONTAL:
                clip, tclips = self.transpose(clip, **kwargs)
                kwargs |= tclips

    return clip
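The direction handling reduces to flag membership: a pass runs when the requested direction equals that axis or `BOTH`. A self-contained sketch (the member values below are assumptions; only the selection logic matters):

```python
from enum import IntFlag, auto


class AADirection(IntFlag):
    # Values are illustrative; the real enum is defined in vsaa.
    VERTICAL = auto()
    HORIZONTAL = auto()
    BOTH = VERTICAL | HORIZONTAL


def selected_passes(direction: AADirection) -> list[str]:
    # A pass is taken when `direction` is that axis itself or BOTH.
    return [
        axis.name
        for axis in (AADirection.VERTICAL, AADirection.HORIZONTAL)
        if direction in (axis, AADirection.BOTH)
    ]


print(selected_passes(AADirection.BOTH))        # ['VERTICAL', 'HORIZONTAL']
print(selected_passes(AADirection.HORIZONTAL))  # ['HORIZONTAL']
```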

bob

bob(
    clip: VideoNode, *, tff: FieldBasedLike | bool | None = None, **kwargs: Any
) -> VideoNode

Apply bob deinterlacing to the given clip.

Parameters:

  • clip

    (VideoNode) –

    The input clip.

  • tff

    (FieldBasedLike | bool | None, default: None ) –

    Field order of the clip.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments passed to the plugin function.

Returns:

  • VideoNode

    Deinterlaced clip.

Source code in vsaa/deinterlacers.py
def bob(self, clip: vs.VideoNode, *, tff: FieldBasedLike | bool | None = None, **kwargs: Any) -> vs.VideoNode:
    """
    Apply bob deinterlacing to the given clip.

    Args:
        clip: The input clip.
        tff: Field order of the clip.
        **kwargs: Additional arguments passed to the plugin function.

    Returns:
        Deinterlaced clip.
    """
    field_based = FieldBased.from_param_or_video(fallback(tff, self.tff, default=None), clip, True, self.__class__)

    return self._interpolate(clip, field_based.is_tff, True, False, **kwargs)

copy

copy(**kwargs: Any) -> Self

Returns a new Antialiaser class replacing specified fields with new values

Source code in vsaa/deinterlacers.py
def copy(self, **kwargs: Any) -> Self:
    """
    Returns a new Antialiaser class replacing specified fields with new values
    """
    return replace(self, **kwargs)

deinterlace

deinterlace(
    clip: VideoNode,
    *,
    tff: FieldBasedLike | bool | None = None,
    double_rate: bool | None = None,
    **kwargs: Any
) -> VideoNode

Apply deinterlacing to the given clip.

Parameters:

  • clip

    (VideoNode) –

    The input clip.

  • tff

    (FieldBasedLike | bool | None, default: None ) –

    Field order of the clip.

  • double_rate

    (bool | None, default: None ) –

    Whether to double the frame rate (True) or retain the original rate (False).

  • **kwargs

    (Any, default: {} ) –

    Additional arguments passed to the plugin function.

Returns:

  • VideoNode

    Deinterlaced clip.

Source code in vsaa/deinterlacers.py
def deinterlace(
    self,
    clip: vs.VideoNode,
    *,
    tff: FieldBasedLike | bool | None = None,
    double_rate: bool | None = None,
    **kwargs: Any,
) -> vs.VideoNode:
    """
    Apply deinterlacing to the given clip.

    Args:
        clip: The input clip.
        tff: Field order of the clip.
        double_rate: Whether to double the frame rate (True) or retain the original rate (False).
        **kwargs: Additional arguments passed to the plugin function.

    Returns:
        Deinterlaced clip.
    """
    field_based = FieldBased.from_param_or_video(fallback(tff, self.tff, default=None), clip, True, self.__class__)

    return self._interpolate(clip, field_based.is_tff, fallback(double_rate, self.double_rate), False, **kwargs)

get_deint_args

get_deint_args(**kwargs: Any) -> dict[str, Any]

Retrieves arguments for deinterlacing processing.

Parameters:

  • **kwargs

    (Any, default: {} ) –

    Additional arguments.

Returns:

  • dict[str, Any]

    Deinterlacing arguments merged with the passed keyword arguments.

Source code in vsaa/deinterlacers.py
def get_deint_args(self, **kwargs: Any) -> dict[str, Any]:
    return {"aa": self.aa} | kwargs

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNode

Scale the given clip using the supersampling method.

Note: Setting tff=True results in less chroma shift for non-centered chroma locations.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • width

    (int | None, default: None ) –

    Target width (defaults to clip width if None).

  • height

    (int | None, default: None ) –

    Target height (defaults to clip height if None).

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the deinterlacing function.

Returns:

  • VideoNode

    The scaled clip.

Source code in vsaa/deinterlacers.py
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any,
) -> vs.VideoNode:
    """
    Scale the given clip using the supersampling method.

    Note: Setting `tff=True` results in less chroma shift for non-centered chroma locations.

    Args:
        clip: The source clip.
        width: Target width (defaults to clip width if None).
        height: Target height (defaults to clip height if None).
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the deinterlacing function.

    Returns:
        The scaled clip.
    """
    tff_fallback = fallback(kwargs.pop("tff", self.tff), True)

    dims = self._wh_norm(clip, width, height)
    dest_dimensions = list(dims)
    sy, sx = shift

    cloc = list(ChromaLocation.from_video(clip).get_offsets(clip))
    subsampling = [2**clip.format.subsampling_w, 2**clip.format.subsampling_h]

    nshift: list[list[float]] = [
        normalize_seq(sx, clip.format.num_planes),
        normalize_seq(sy, clip.format.num_planes),
    ]

    if not self.transpose_first:
        dest_dimensions.reverse()
        cloc.reverse()
        subsampling.reverse()
        nshift.reverse()

    for x, dim in enumerate(dest_dimensions):
        is_width = (not x and self.transpose_first) or (not self.transpose_first and x)

        if is_width:
            clip, _ = self.transpose(clip)

        while clip.height < dim:
            delta = max(nshift[x], key=lambda y: abs(y))
            tff = False if delta < 0 else True if delta > 0 else tff_fallback
            offset = -0.25 if tff else 0.25

            for y in range(clip.format.num_planes):
                if not y:
                    nshift[x][y] = (nshift[x][y] + offset) * 2
                else:
                    nshift[x][y] = (nshift[x][y] + offset) * 2 - cloc[x] / subsampling[x]

            clip = self._interpolate(clip, tff, False, True, **kwargs)

        if is_width:
            clip, _ = self.transpose(clip)

    if not self.transpose_first:
        nshift.reverse()

    self._ss_shifts = nshift

    if self.noshift:
        noshift = normalize_seq(self.noshift, clip.format.num_planes)

        if all(noshift) and dims == (clip.width, clip.height):
            return clip

        for ns in nshift:
            for i in range(len(ns)):
                ns[i] *= not noshift[i]

    return ComplexScaler.ensure_obj(self.scaler, self.__class__).scale(clip, width, height, (nshift[1], nshift[0]))

supersample

supersample(
    clip: VideoNode,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNode

Supersample a clip by a given scaling factor.

Note: Setting tff=True results in less chroma shift for non-centered chroma locations.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • rfactor

    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Returns:

  • VideoNode

    The supersampled clip.

Source code in vsaa/deinterlacers.py
def supersample(
    self, clip: vs.VideoNode, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> vs.VideoNode:
    """
    Supersample a clip by a given scaling factor.

    Note: Setting `tff=True` results in less chroma shift for non-centered chroma locations.

    Args:
        clip: The source clip.
        rfactor: Scaling factor for supersampling.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Raises:
        CustomValueError: If resulting resolution is non-positive.

    Returns:
        The supersampled clip.
    """
    ...

transpose

transpose(
    clip: VideoNode, **kwargs: Any
) -> tuple[VideoNode, Mapping[str, VideoNode | None]]

Transpose the input clip by swapping its horizontal and vertical axes.

Parameters:

  • clip

    (VideoNode) –

    The input clip.

Returns:

  • tuple[VideoNode, Mapping[str, VideoNode | None]]

    The transposed clip.

Source code in vsaa/deinterlacers.py
def transpose(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[vs.VideoNode, Mapping[str, vs.VideoNode | None]]:
    """
    Transpose the input clip by swapping its horizontal and vertical axes.

    Args:
        clip: The input clip.

    Returns:
        The transposed clip.
    """
    return clip.std.Transpose(), {}

SuperSampler dataclass

SuperSampler(
    *,
    tff: bool | None = None,
    double_rate: bool = True,
    transpose_first: bool = False,
    scaler: ComplexScalerLike = Catrom,
    noshift: bool | Sequence[bool] = False
)

Bases: Scaler, AntiAliaser, ABC

Abstract base class for supersampling operations.

Classes:

  • AADirection

    Enum representing the direction(s) in which anti-aliasing should be applied.

Methods:

  • antialias

    Apply anti-aliasing to the given clip.

  • bob

    Apply bob deinterlacing to the given clip.

  • copy

    Returns a new Antialiaser class replacing specified fields with new values

  • deinterlace

    Apply deinterlacing to the given clip.

  • get_deint_args

    Retrieves arguments for deinterlacing processing.

  • scale

    Scale the given clip using super sampling method.

  • supersample

    Supersample a clip by a given scaling factor.

  • transpose

    Transpose the input clip by swapping its horizontal and vertical axes.

Attributes:

double_rate class-attribute instance-attribute

double_rate: bool = True

Whether to double the FPS.

noshift class-attribute instance-attribute

noshift: bool | Sequence[bool] = False

Disables sub-pixel shifting after supersampling.

  • bool: Applies to both luma and chroma.
  • Sequence[bool]: First for luma, second for chroma.

scaler class-attribute instance-attribute

scaler: ComplexScalerLike = Catrom

Scaler used for downscaling and shifting after supersampling.

tff class-attribute instance-attribute

tff: bool | None = None

The field order.

transpose_first class-attribute instance-attribute

transpose_first: bool = False

Transpose the clip before any operation.

AADirection

Bases: IntFlag

Enum representing the direction(s) in which anti-aliasing should be applied.

Attributes:

  • BOTH

    Apply anti-aliasing in both horizontal and vertical directions.

  • HORIZONTAL

    Apply anti-aliasing in the horizontal direction.

  • VERTICAL

    Apply anti-aliasing in the vertical direction.

BOTH class-attribute instance-attribute

Apply anti-aliasing in both horizontal and vertical directions.

HORIZONTAL class-attribute instance-attribute

HORIZONTAL = auto()

Apply anti-aliasing in the horizontal direction.

VERTICAL class-attribute instance-attribute

VERTICAL = auto()

Apply anti-aliasing in the vertical direction.

antialias

antialias(
    clip: VideoNode, direction: AADirection = BOTH, **kwargs: Any
) -> VideoNode

Apply anti-aliasing to the given clip.

Parameters:

  • clip

    (VideoNode) –

    The input clip.

  • direction

    (AADirection, default: BOTH ) –

    Direction in which to apply anti-aliasing. Defaults to AADirection.BOTH.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments passed to the plugin function.

Returns:

  • VideoNode

    Anti-aliased clip.

Source code in vsaa/deinterlacers.py
def antialias(self, clip: vs.VideoNode, direction: AADirection = AADirection.BOTH, **kwargs: Any) -> vs.VideoNode:
    """
    Apply anti-aliasing to the given clip.

    Args:
        clip: The input clip.
        direction: Direction in which to apply anti-aliasing. Defaults to AADirection.BOTH.
        **kwargs: Additional arguments passed to the plugin function.

    Returns:
        Anti-aliased clip.
    """
    tff = fallback(kwargs.pop("tff", self.tff), True)

    for y in sorted(self.AADirection, key=lambda x: x.value, reverse=self.transpose_first):
        if direction in (y, self.AADirection.BOTH):
            if y == self.AADirection.HORIZONTAL:
                clip, tclips = self.transpose(clip, **kwargs)
                kwargs |= tclips

            clip = self._interpolate(clip, tff, self.double_rate, False, **kwargs)

            if self.double_rate:
                clip = core.std.Merge(clip[::2], clip[1::2])

            if y == self.AADirection.HORIZONTAL:
                clip, tclips = self.transpose(clip, **kwargs)
                kwargs |= tclips

    return clip

bob

bob(
    clip: VideoNode, *, tff: FieldBasedLike | bool | None = None, **kwargs: Any
) -> VideoNode

Apply bob deinterlacing to the given clip.

Parameters:

  • clip

    (VideoNode) –

    The input clip.

  • tff

    (FieldBasedLike | bool | None, default: None ) –

    Field order of the clip.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments passed to the plugin function.

Returns:

  • VideoNode

    Deinterlaced clip.

Source code in vsaa/deinterlacers.py
def bob(self, clip: vs.VideoNode, *, tff: FieldBasedLike | bool | None = None, **kwargs: Any) -> vs.VideoNode:
    """
    Apply bob deinterlacing to the given clip.

    Args:
        clip: The input clip.
        tff: Field order of the clip.
        **kwargs: Additional arguments passed to the plugin function.

    Returns:
        Deinterlaced clip.
    """
    field_based = FieldBased.from_param_or_video(fallback(tff, self.tff, default=None), clip, True, self.__class__)

    return self._interpolate(clip, field_based.is_tff, True, False, **kwargs)

copy

copy(**kwargs: Any) -> Self

Returns a new Antialiaser class replacing specified fields with new values

Source code in vsaa/deinterlacers.py
def copy(self, **kwargs: Any) -> Self:
    """
    Returns a new Antialiaser class replacing specified fields with new values
    """
    return replace(self, **kwargs)

deinterlace

deinterlace(
    clip: VideoNode,
    *,
    tff: FieldBasedLike | bool | None = None,
    double_rate: bool | None = None,
    **kwargs: Any
) -> VideoNode

Apply deinterlacing to the given clip.

Parameters:

  • clip

    (VideoNode) –

    The input clip.

  • tff

    (FieldBasedLike | bool | None, default: None ) –

    Field order of the clip.

  • double_rate

    (bool | None, default: None ) –

    Whether to double the frame rate (True) or retain the original rate (False).

  • **kwargs

    (Any, default: {} ) –

    Additional arguments passed to the plugin function.

Returns:

  • VideoNode

    Deinterlaced clip.

Source code in vsaa/deinterlacers.py
def deinterlace(
    self,
    clip: vs.VideoNode,
    *,
    tff: FieldBasedLike | bool | None = None,
    double_rate: bool | None = None,
    **kwargs: Any,
) -> vs.VideoNode:
    """
    Apply deinterlacing to the given clip.

    Args:
        clip: The input clip.
        tff: Field order of the clip.
        double_rate: Whether to double the frame rate (True) or retain the original rate (False).
        **kwargs: Additional arguments passed to the plugin function.

    Returns:
        Deinterlaced clip.
    """
    field_based = FieldBased.from_param_or_video(fallback(tff, self.tff, default=None), clip, True, self.__class__)

    return self._interpolate(clip, field_based.is_tff, fallback(double_rate, self.double_rate), False, **kwargs)

get_deint_args abstractmethod

get_deint_args(**kwargs: Any) -> dict[str, Any]

Retrieves arguments for deinterlacing processing.

Parameters:

  • **kwargs

    (Any, default: {} ) –

    Additional arguments.

Returns:

  • dict[str, Any]

    Passed keyword arguments.

Source code in vsaa/deinterlacers.py
@abstractmethod
def get_deint_args(self, **kwargs: Any) -> dict[str, Any]:
    """
    Retrieves arguments for deinterlacing processing.

    Args:
        **kwargs: Additional arguments.

    Returns:
        Passed keyword arguments.
    """
    return kwargs

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNode

Scale the given clip using the supersampling method.

Note: Setting tff=True results in less chroma shift for non-centered chroma locations.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • width

    (int | None, default: None ) –

    Target width (defaults to clip width if None).

  • height

    (int | None, default: None ) –

    Target height (defaults to clip height if None).

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the deinterlacing function.

Returns:

  • VideoNode

    The scaled clip.

Source code in vsaa/deinterlacers.py
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any,
) -> vs.VideoNode:
    """
    Scale the given clip using the supersampling method.

    Note: Setting `tff=True` results in less chroma shift for non-centered chroma locations.

    Args:
        clip: The source clip.
        width: Target width (defaults to clip width if None).
        height: Target height (defaults to clip height if None).
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the deinterlacing function.

    Returns:
        The scaled clip.
    """
    tff_fallback = fallback(kwargs.pop("tff", self.tff), True)

    dims = self._wh_norm(clip, width, height)
    dest_dimensions = list(dims)
    sy, sx = shift

    cloc = list(ChromaLocation.from_video(clip).get_offsets(clip))
    subsampling = [2**clip.format.subsampling_w, 2**clip.format.subsampling_h]

    nshift: list[list[float]] = [
        normalize_seq(sx, clip.format.num_planes),
        normalize_seq(sy, clip.format.num_planes),
    ]

    if not self.transpose_first:
        dest_dimensions.reverse()
        cloc.reverse()
        subsampling.reverse()
        nshift.reverse()

    for x, dim in enumerate(dest_dimensions):
        is_width = (not x and self.transpose_first) or (not self.transpose_first and x)

        if is_width:
            clip, _ = self.transpose(clip)

        while clip.height < dim:
            delta = max(nshift[x], key=lambda y: abs(y))
            tff = False if delta < 0 else True if delta > 0 else tff_fallback
            offset = -0.25 if tff else 0.25

            for y in range(clip.format.num_planes):
                if not y:
                    nshift[x][y] = (nshift[x][y] + offset) * 2
                else:
                    nshift[x][y] = (nshift[x][y] + offset) * 2 - cloc[x] / subsampling[x]

            clip = self._interpolate(clip, tff, False, True, **kwargs)

        if is_width:
            clip, _ = self.transpose(clip)

    if not self.transpose_first:
        nshift.reverse()

    self._ss_shifts = nshift

    if self.noshift:
        noshift = normalize_seq(self.noshift, clip.format.num_planes)

        if all(noshift) and dims == (clip.width, clip.height):
            return clip

        for ns in nshift:
            for i in range(len(ns)):
                ns[i] *= not noshift[i]

    return ComplexScaler.ensure_obj(self.scaler, self.__class__).scale(clip, width, height, (nshift[1], nshift[0]))

supersample

supersample(
    clip: VideoNode,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNode

Supersample a clip by a given scaling factor.

Note: Setting tff=True results in less chroma shift for non-centered chroma locations.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • rfactor

    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Returns:

  • VideoNode

    The supersampled clip.

Source code in vsaa/deinterlacers.py
def supersample(
    self, clip: vs.VideoNode, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> vs.VideoNode:
    """
    Supersample a clip by a given scaling factor.

    Note: Setting `tff=True` results in less chroma shift for non-centered chroma locations.

    Args:
        clip: The source clip.
        rfactor: Scaling factor for supersampling.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Raises:
        CustomValueError: If resulting resolution is non-positive.

    Returns:
        The supersampled clip.
    """
    ...
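The docstring only states the contract: scale the dimensions by `rfactor` and reject a non-positive result. A rough, hypothetical illustration of that contract (`supersample_dims` and its rounding are assumptions for this sketch, not the library's actual computation):

```python
def supersample_dims(width: int, height: int, rfactor: float) -> tuple[int, int]:
    # Hypothetical sketch: scale both dimensions by rfactor and round.
    new_w, new_h = round(width * rfactor), round(height * rfactor)
    if new_w <= 0 or new_h <= 0:
        # Mirrors the documented CustomValueError for a non-positive resolution.
        raise ValueError(f"Invalid resolution: {new_w}x{new_h}")
    return new_w, new_h

print(supersample_dims(1920, 1080, 2.0))  # (3840, 2160)
```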

transpose

transpose(
    clip: VideoNode, **kwargs: Any
) -> tuple[VideoNode, Mapping[str, VideoNode | None]]

Transpose the input clip by swapping its horizontal and vertical axes.

Parameters:

  • clip

    (VideoNode) –

    The input clip.

Returns:

  • tuple[VideoNode, Mapping[str, VideoNode | None]]

    The transposed clip.

Source code in vsaa/deinterlacers.py
def transpose(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[vs.VideoNode, Mapping[str, vs.VideoNode | None]]:
    """
    Transpose the input clip by swapping its horizontal and vertical axes.

    Args:
        clip: The input clip.

    Returns:
        The transposed clip.
    """
    return clip.std.Transpose(), {}

SuperSamplerProcess

SuperSamplerProcess(
    *,
    function: VSFunctionNoArgs,
    noshift: bool | Sequence[bool] = True,
    **kwargs: Any
)

Bases: MixedScalerProcess[_SuperSamplerWithNNEDI3DefaultT, Point], _ConcreteSuperSampler

A utility SuperSampler class that applies a given function to a supersampled clip, then downsamples it back using Point.

If used without a specified scaler, it defaults to inheriting from NNEDI3.

Initialize the SuperSamplerProcess.

Note

Chroma planes will not align properly during processing. Avoid using this class if accurate chroma placement relative to luma is required.

Example:

processed = SuperSamplerProcess[NNEDI3](function=lambda clip: cool_function(clip, ...)).supersample(
    src, rfactor=2
)

Parameters:

  • function

    (VSFunctionNoArgs) –

    A function to apply on the supersampled clip.

  • noshift

    (bool | Sequence[bool], default: True ) –

    Disables sub-pixel shifting after supersampling.

    • bool: Applies to both luma and chroma.
    • Sequence[bool]: First for luma, second for chroma.
  • **kwargs

    (Any, default: {} ) –

    Additional arguments to the specialized SuperSampler.

Classes:

  • AADirection

    Enum representing the direction(s) in which anti-aliasing should be applied.

Methods:

  • antialias

    Apply anti-aliasing to the given clip.

  • bob

    Apply bob deinterlacing to the given clip.

  • copy

    Returns a new instance with the specified fields replaced by new values.

  • deinterlace

    Apply deinterlacing to the given clip.

  • get_deint_args

    Retrieves arguments for deinterlacing processing.

  • scale

    Scale the given clip using super sampling method.

  • supersample

    Supersample a clip by a given scaling factor.

  • transpose

    Transpose the input clip by swapping its horizontal and vertical axes.

Attributes:

Source code in vsaa/deinterlacers.py
def __init__(self, *, function: VSFunctionNoArgs, noshift: bool | Sequence[bool] = True, **kwargs: Any) -> None:
    """
    Initialize the SuperSamplerProcess.

    Note:
        Chroma planes will not align properly during processing.
        Avoid using this class if accurate chroma placement relative to luma is required.

    Example:
    ```py
    processed = SuperSamplerProcess[NNEDI3](function=lambda clip: cool_function(clip, ...)).supersample(
        src, rfactor=2
    )
    ```

    Args:
        function: A function to apply on the supersampled clip.
        noshift: Disables sub-pixel shifting after supersampling.

               - `bool`: Applies to both luma and chroma.
               - `Sequence[bool]`: First for luma, second for chroma.

        **kwargs: Additional arguments to the specialized SuperSampler.
    """
    super().__init__(function=function, noshift=noshift, **kwargs)

default_scaler class-attribute instance-attribute

default_scaler = NNEDI3

double_rate class-attribute instance-attribute

double_rate: bool = True

Whether to double the FPS.

noshift class-attribute instance-attribute

noshift: bool | Sequence[bool] = False

Disables sub-pixel shifting after supersampling.

  • bool: Applies to both luma and chroma.
  • Sequence[bool]: First for luma, second for chroma.

scaler class-attribute instance-attribute

Scaler used for downscaling and shifting after supersampling.

tff class-attribute instance-attribute

tff: bool | None = None

The field order.

transpose_first class-attribute instance-attribute

transpose_first: bool = False

Transpose the clip before any operation.

AADirection

Bases: IntFlag

Enum representing the direction(s) in which anti-aliasing should be applied.

Attributes:

  • BOTH

    Apply anti-aliasing in both horizontal and vertical directions.

  • HORIZONTAL

    Apply anti-aliasing in the horizontal direction.

  • VERTICAL

    Apply anti-aliasing in the vertical direction.

BOTH class-attribute instance-attribute

Apply anti-aliasing in both horizontal and vertical directions.

HORIZONTAL class-attribute instance-attribute

HORIZONTAL = auto()

Apply anti-aliasing in the horizontal direction.

VERTICAL class-attribute instance-attribute

VERTICAL = auto()

Apply anti-aliasing in the vertical direction.

antialias

antialias(
    clip: VideoNode, direction: AADirection = BOTH, **kwargs: Any
) -> VideoNode

Apply anti-aliasing to the given clip.

Parameters:

  • clip

    (VideoNode) –

    The input clip.

  • direction

    (AADirection, default: BOTH ) –

    Direction in which to apply anti-aliasing. Defaults to AADirection.BOTH.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments passed to the plugin function.

Returns:

  • VideoNode

    Anti-aliased clip.

Source code in vsaa/deinterlacers.py
def antialias(self, clip: vs.VideoNode, direction: AADirection = AADirection.BOTH, **kwargs: Any) -> vs.VideoNode:
    """
    Apply anti-aliasing to the given clip.

    Args:
        clip: The input clip.
        direction: Direction in which to apply anti-aliasing. Defaults to AADirection.BOTH.
        **kwargs: Additional arguments passed to the plugin function.

    Returns:
        Anti-aliased clip.
    """
    tff = fallback(kwargs.pop("tff", self.tff), True)

    for y in sorted(self.AADirection, key=lambda x: x.value, reverse=self.transpose_first):
        if direction in (y, self.AADirection.BOTH):
            if y == self.AADirection.HORIZONTAL:
                clip, tclips = self.transpose(clip, **kwargs)
                kwargs |= tclips

            clip = self._interpolate(clip, tff, self.double_rate, False, **kwargs)

            if self.double_rate:
                clip = core.std.Merge(clip[::2], clip[1::2])

            if y == self.AADirection.HORIZONTAL:
                clip, tclips = self.transpose(clip, **kwargs)
                kwargs |= tclips

    return clip
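The loop in `antialias` sorts the single-direction members by enum value and reverses the order when `transpose_first` is set. A minimal standalone sketch of that ordering (the enum below is a stand-in mirroring the documented `AADirection`, not an import from the library):

```python
from enum import IntFlag, auto

class AADirection(IntFlag):
    # Minimal stand-in mirroring the documented enum.
    HORIZONTAL = auto()
    VERTICAL = auto()
    BOTH = HORIZONTAL | VERTICAL

def pass_order(transpose_first: bool) -> list[str]:
    # antialias() sorts the single-direction members by value and
    # reverses the order when transpose_first is set.
    order = sorted(
        (AADirection.HORIZONTAL, AADirection.VERTICAL),
        key=lambda d: d.value,
        reverse=transpose_first,
    )
    return [d.name for d in order]

print(pass_order(False))  # ['HORIZONTAL', 'VERTICAL']
print(pass_order(True))   # ['VERTICAL', 'HORIZONTAL']
```

Because the horizontal pass is implemented as transpose, vertical interpolation, transpose back, the ordering decides which axis gets transposed first.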

bob

bob(
    clip: VideoNode, *, tff: FieldBasedLike | bool | None = None, **kwargs: Any
) -> VideoNode

Apply bob deinterlacing to the given clip.

Parameters:

  • clip

    (VideoNode) –

    The input clip.

  • tff

    (FieldBasedLike | bool | None, default: None ) –

    Field order of the clip.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments passed to the plugin function.

Returns:

  • VideoNode

    Deinterlaced clip.

Source code in vsaa/deinterlacers.py
def bob(self, clip: vs.VideoNode, *, tff: FieldBasedLike | bool | None = None, **kwargs: Any) -> vs.VideoNode:
    """
    Apply bob deinterlacing to the given clip.

    Args:
        clip: The input clip.
        tff: Field order of the clip.
        **kwargs: Additional arguments passed to the plugin function.

    Returns:
        Deinterlaced clip.
    """
    field_based = FieldBased.from_param_or_video(fallback(tff, self.tff, default=None), clip, True, self.__class__)

    return self._interpolate(clip, field_based.is_tff, True, False, **kwargs)

copy

copy(**kwargs: Any) -> Self

Returns a new instance with the specified fields replaced by new values.

Source code in vsaa/deinterlacers.py
def copy(self, **kwargs: Any) -> Self:
    """
    Returns a new instance with the specified fields replaced by new values.
    """
    return replace(self, **kwargs)

deinterlace

deinterlace(
    clip: VideoNode,
    *,
    tff: FieldBasedLike | bool | None = None,
    double_rate: bool | None = None,
    **kwargs: Any
) -> VideoNode

Apply deinterlacing to the given clip.

Parameters:

  • clip

    (VideoNode) –

    The input clip.

  • tff

    (FieldBasedLike | bool | None, default: None ) –

    Field order of the clip.

  • double_rate

    (bool | None, default: None ) –

    Whether to double the frame rate (True) or retain the original rate (False).

  • **kwargs

    (Any, default: {} ) –

    Additional arguments passed to the plugin function.

Returns:

  • VideoNode

    Deinterlaced clip.

Source code in vsaa/deinterlacers.py
def deinterlace(
    self,
    clip: vs.VideoNode,
    *,
    tff: FieldBasedLike | bool | None = None,
    double_rate: bool | None = None,
    **kwargs: Any,
) -> vs.VideoNode:
    """
    Apply deinterlacing to the given clip.

    Args:
        clip: The input clip.
        tff: Field order of the clip.
        double_rate: Whether to double the frame rate (True) or retain the original rate (False).
        **kwargs: Additional arguments passed to the plugin function.

    Returns:
        Deinterlaced clip.
    """
    field_based = FieldBased.from_param_or_video(fallback(tff, self.tff, default=None), clip, True, self.__class__)

    return self._interpolate(clip, field_based.is_tff, fallback(double_rate, self.double_rate), False, **kwargs)

get_deint_args

get_deint_args(**kwargs: Any) -> dict[str, Any]

Retrieves arguments for deinterlacing processing.

Parameters:

  • **kwargs

    (Any, default: {} ) –

    Additional arguments.

Returns:

  • dict[str, Any]

    The deinterlacing arguments.

Source code in vsaa/deinterlacers.py
def get_deint_args(self, **kwargs: Any) -> dict[str, Any]: ...

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNode

Scale the given clip using super sampling method.

Note: Setting tff=True results in less chroma shift for non-centered chroma locations.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • width

    (int | None, default: None ) –

    Target width (defaults to clip width if None).

  • height

    (int | None, default: None ) –

    Target height (defaults to clip height if None).

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the deinterlacing function.

Returns:

  • VideoNode

    The scaled clip.

Source code in vsaa/deinterlacers.py
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any,
) -> vs.VideoNode:
    ss_clip = super().scale(clip, width, height, shift, **kwargs)

    processed = self.function(ss_clip)

    return (
        self._others[0]
        .scale(
            processed,
            clip.width,
            clip.height,
            tuple([round(s - 1e-6) for s in dim_shifts] for dim_shifts in reversed(self._ss_shifts)),  # type: ignore[arg-type]
        )
        .std.CopyFrameProps(processed)
    )
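The `round(s - 1e-6)` in the downscale step likely guards against Python's round-half-to-even ("banker's rounding") behaviour: exact `.5` shifts would otherwise round up or down depending on parity, while subtracting a tiny epsilon makes all exact halves round down uniformly. A quick demonstration:

```python
# Python's round() uses round-half-to-even ("banker's rounding"):
print([round(x) for x in (0.5, 1.5, 2.5)])         # [0, 2, 2]

# Subtracting a tiny epsilon first makes exact halves round down uniformly:
print([round(x - 1e-6) for x in (0.5, 1.5, 2.5)])  # [0, 1, 2]
```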

supersample

supersample(
    clip: VideoNode,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNode

Supersample a clip by a given scaling factor.

Note: Setting tff=True results in less chroma shift for non-centered chroma locations.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • rfactor

    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Returns:

  • VideoNode

    The supersampled clip.

Source code in vsaa/deinterlacers.py
def supersample(
    self, clip: vs.VideoNode, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> vs.VideoNode:
    """
    Supersample a clip by a given scaling factor.

    Note: Setting `tff=True` results in less chroma shift for non-centered chroma locations.

    Args:
        clip: The source clip.
        rfactor: Scaling factor for supersampling.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Raises:
        CustomValueError: If resulting resolution is non-positive.

    Returns:
        The supersampled clip.
    """
    ...

transpose

transpose(
    clip: VideoNode, **kwargs: Any
) -> tuple[VideoNode, Mapping[str, VideoNode | None]]

Transpose the input clip by swapping its horizontal and vertical axes.

Parameters:

  • clip

    (VideoNode) –

    The input clip.

Returns:

  • tuple[VideoNode, Mapping[str, VideoNode | None]]

    The transposed clip.

Source code in vsaa/deinterlacers.py
def transpose(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[vs.VideoNode, Mapping[str, vs.VideoNode | None]]:
    """
    Transpose the input clip by swapping its horizontal and vertical axes.

    Args:
        clip: The input clip.

    Returns:
        The transposed clip.
    """
    return clip.std.Transpose(), {}