eedi2

This module implements wrappers for Enhanced Edge Directed Interpolation, 2nd generation (EEDI2).

Classes:

  • EEDI2

    Base class for EEDI2 interpolating methods.

  • Eedi2

Full implementation of the EEDI2 anti-aliaser.

  • Eedi2DR

    Concrete implementation of EEDI2 used as a double-rater.

  • Eedi2SS

    Concrete implementation of EEDI2 used as a supersampler.

EEDI2 dataclass

EEDI2(
    mthresh: int = 10,
    lthresh: int = 20,
    vthresh: int = 20,
    estr: int = 2,
    dstr: int = 4,
    maxd: int = 24,
    pp: int = 1,
    cuda: bool = False,
    num_streams: int = 1,
    device_id: int = -1,
    *,
    field: int = 0,
    drop_fields: bool = True,
    transpose_first: bool = False,
    shifter: KernelT = Catrom,
    scaler: ScalerT | None = None
)

Bases: _FullInterpolate, Interpolater

Base class for EEDI2 interpolating methods.

Attributes:

  • cuda (bool) –

    Enables the use of the CUDA variant for processing.

  • device_id (int) –

    Specifies the target CUDA device.

  • drop_fields (bool) –

    Whether to discard the unused field based on the field setting.

  • dstr (int) –

    Defines the required number of edge pixels (>=) in a 3x3 area, in which the center pixel has not been detected as an edge pixel, for the center pixel to be added to the edge map.

  • estr (int) –

    Defines the required number of edge pixels (<=) in a 3x3 area, in which the center pixel has been detected as an edge pixel, for the center pixel to be removed from the edge map.

  • field (int) –

    Controls the mode of operation and which field is kept in the resized image.

  • lthresh (int) –

    Controls the Laplacian threshold used in edge detection.

  • maxd (int) –

    Sets the maximum pixel search distance for determining the interpolation direction.

  • mthresh (int) –

    Controls the edge magnitude threshold used in edge detection for building the initial edge map.

  • num_streams (int) –

    Specifies the number of CUDA streams.

  • pp (int) –

    Enables two optional post-processing modes designed to reduce artifacts by identifying problem areas and applying simple vertical linear interpolation in those areas.

  • scaler (ScalerT | None) –

    Scaler used for additional scaling operations. If None, defaults to shifter.

  • shifter (KernelT) –

    Kernel used for shifting operations. Defaults to Catrom.

  • transpose_first (bool) –

    Transpose the clip before any operation.

  • vthresh (int) –

    Controls the variance threshold used in edge detection.

cuda class-attribute instance-attribute

cuda: bool = False

Enables the use of the CUDA variant for processing. Note that full interpolating is only supported by CUDA.

device_id class-attribute instance-attribute

device_id: int = -1

Specifies the target CUDA device. The default value (-1) triggers auto-detection of the available device.

drop_fields class-attribute instance-attribute

drop_fields: bool = True

Whether to discard the unused field based on the field setting.

dstr class-attribute instance-attribute

dstr: int = 4

Defines the required number of edge pixels (>=) in a 3x3 area, in which the center pixel has not been detected as an edge pixel, for the center pixel to be added to the edge map.

estr class-attribute instance-attribute

estr: int = 2

Defines the required number of edge pixels (<=) in a 3x3 area, in which the center pixel has been detected as an edge pixel, for the center pixel to be removed from the edge map.

field class-attribute instance-attribute

field: int = 0

Controls the mode of operation and which field is kept in the resized image.

  • 0: Same rate, keeps the bottom field.
  • 1: Same rate, keeps the top field.
  • 2: Double rate (alternates each frame), starts with the bottom field.
  • 3: Double rate (alternates each frame), starts with the top field.
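The four modes can be summarized as a simple lookup. This is an illustrative sketch only; `FIELD_MODES` and `describe_field` are not part of the API:

```python
# Hypothetical helper summarizing the `field` modes; not part of the library.
FIELD_MODES = {
    0: ("same rate", "keeps bottom field"),
    1: ("same rate", "keeps top field"),
    2: ("double rate", "starts with bottom field"),
    3: ("double rate", "starts with top field"),
}

def describe_field(field: int) -> str:
    rate, parity = FIELD_MODES[field]
    return f"{rate}, {parity}"
```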

lthresh class-attribute instance-attribute

lthresh: int = 20

Controls the Laplacian threshold used in edge detection. Its range is from 0 to 510, with lower values detecting weaker lines.

maxd class-attribute instance-attribute

maxd: int = 24

Sets the maximum pixel search distance for determining the interpolation direction. Larger values allow the algorithm to connect edges and lines with smaller slopes but may introduce artifacts. In some cases, using a smaller maxd value can yield better results than a larger one. The maximum possible value for maxd is 29.

mthresh class-attribute instance-attribute

mthresh: int = 10

Controls the edge magnitude threshold used in edge detection for building the initial edge map. Its range is from 0 to 255, with lower values detecting weaker edges.

num_streams class-attribute instance-attribute

num_streams: int = 1

Specifies the number of CUDA streams.

pp class-attribute instance-attribute

pp: int = 1

Enables two optional post-processing modes designed to reduce artifacts by identifying problem areas and applying simple vertical linear interpolation in those areas. While these modes can improve results, they may slow down processing and slightly reduce edge sharpness.

  • 0 = No post-processing
  • 1 = Check for spatial consistency of final interpolation directions
  • 2 = Check for junctions and corners
  • 3 = Apply both checks from 1 and 2

Only pp=0 and pp=1 are implemented for the CUDA variant.

scaler class-attribute instance-attribute

scaler: ScalerT | None = None

Scaler used for additional scaling operations. If None, defaults to shifter.

shifter class-attribute instance-attribute

shifter: KernelT = Catrom

Kernel used for shifting operations. Defaults to Catrom.

transpose_first class-attribute instance-attribute

transpose_first: bool = False

Transpose the clip before any operation.

vthresh class-attribute instance-attribute

vthresh: int = 20

Controls the variance threshold used in edge detection. Its range is from 0 to a large number, with lower values detecting weaker edges.

copy

copy(**kwargs: Any) -> Self

Returns a new Antialiaser class replacing specified fields with new values.

Source code
def copy(self, **kwargs: Any) -> Self:
    """Returns a new Antialiaser class replacing specified fields with new values"""
    return replace(self, **kwargs)
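`copy` is a thin wrapper over `dataclasses.replace`, so the original instance is left untouched. A minimal stand-in showing the pattern (the `AAConfig` class here is hypothetical, carrying just two of the real fields):

```python
from dataclasses import dataclass, replace
from typing import Any

@dataclass
class AAConfig:  # hypothetical stand-in, not the real Antialiaser class
    mthresh: int = 10
    lthresh: int = 20

    def copy(self, **kwargs: Any) -> "AAConfig":
        """Return a new instance, replacing specified fields with new values."""
        return replace(self, **kwargs)

base = AAConfig()
tweaked = base.copy(lthresh=40)  # base keeps lthresh=20
```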

full_interpolate

full_interpolate(
    clip: VideoNode, double_y: bool, double_x: bool, **kwargs: Any
) -> ConstantFormatVideoNode
Source code
def full_interpolate(
    self, clip: vs.VideoNode, double_y: bool, double_x: bool, **kwargs: Any
) -> ConstantFormatVideoNode:
    if not all([double_y, double_x]):
        raise CustomRuntimeError(
            "`double_y` and `double_x` should be set to True to use full_interpolate!",
            self.full_interpolate,
            (double_y, double_x)
        )

    return core.eedi2cuda.Enlarge2(clip, **self.get_aa_args(clip) | kwargs)

get_aa_args

get_aa_args(clip: VideoNode, **kwargs: Any) -> dict[str, Any]
Source code
def get_aa_args(self, clip: vs.VideoNode, **kwargs: Any) -> dict[str, Any]:
    if (pp := self.pp) > 1 and self.cuda:
        from warnings import warn
        warn(
            f"{self.__class__.__name__}: Only `pp=0` and `pp=1` is implemented for the CUDA variant. "
            "Falling back to `pp=1`...",
            Warning
        )
        pp = 1

    args = dict(
        mthresh=self.mthresh,
        lthresh=self.lthresh,
        vthresh=self.vthresh,
        estr=self.estr,
        dstr=self.dstr,
        maxd=self.maxd,
        pp=pp
    )

    if self.cuda:
        args.update(num_streams=self.num_streams, device_id=self.device_id)

    return args | kwargs
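Note the `args | kwargs` merge at the end: caller-supplied keyword arguments take precedence over the instance defaults. A self-contained sketch of that precedence (the names are illustrative):

```python
def merge_args(defaults: dict, **kwargs) -> dict:
    # With dict union, the right-hand operand wins, so user kwargs
    # override the instance defaults.
    return defaults | kwargs

defaults = dict(mthresh=10, lthresh=20, vthresh=20, estr=2, dstr=4, maxd=24, pp=1)
merged = merge_args(defaults, maxd=12)  # overrides maxd, keeps the rest
```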

interpolate

interpolate(
    clip: VideoNode, double_y: bool, **kwargs: Any
) -> ConstantFormatVideoNode
Source code
def interpolate(self, clip: vs.VideoNode, double_y: bool, **kwargs: Any) -> ConstantFormatVideoNode:
    assert check_variable_format(clip, self.__class__)

    kwargs = self.get_aa_args(clip) | kwargs

    if self.cuda:
        inter = core.eedi2cuda.EEDI2(clip, self.field, **kwargs)
    else:
        inter = core.eedi2.EEDI2(clip, self.field, **kwargs)

    if not double_y:
        if self.drop_fields:
            inter = inter.std.SeparateFields(not self.field)[::2]

            inter = self._shifter.shift(inter, (0.5 - 0.75 * self.field, 0))
        else:
            inter = self._scaler.scale(  # type: ignore[assignment]
                inter, clip.width, clip.height, (self._shift * int(not self.field), 0)
            )

    return self._post_interpolate(clip, inter, double_y)  # pyright: ignore[reportArgumentType]
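In the same-rate branch above, the kept field is vertically offset after `SeparateFields`, and `self._shifter.shift(inter, (0.5 - 0.75 * self.field, 0))` corrects it. The per-parity values of that expression in isolation:

```python
def samerate_shift(field: int) -> float:
    # field 0 (bottom kept) -> +0.5, field 1 (top kept) -> -0.25
    return 0.5 - 0.75 * field
```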

is_full_interpolate_enabled

is_full_interpolate_enabled(x: bool, y: bool) -> bool
Source code
def is_full_interpolate_enabled(self, x: bool, y: bool) -> bool:
    return self.cuda and x and y

shift_interpolate

shift_interpolate(
    clip: VideoNode, inter: VideoNode, double_y: bool
) -> ConstantFormatVideoNode

Applies a post-shifting interpolation operation to the interpolated clip.

Parameters:

  • clip

    (VideoNode) –

    Source clip.

  • inter

    (VideoNode) –

    Interpolated clip.

  • double_y

    (bool) –

    Whether the height has been doubled.

Returns:

  • ConstantFormatVideoNode

    Shifted clip.

Source code
def shift_interpolate(
    self,
    clip: vs.VideoNode,
    inter: vs.VideoNode,
    double_y: bool,
) -> ConstantFormatVideoNode:
    """
    Applies a post-shifting interpolation operation to the interpolated clip.

    :param clip:        Source clip.
    :param inter:       Interpolated clip.
    :param double_y:    Whether the height has been doubled.
    :return:            Shifted clip.
    """
    assert check_variable(clip, self.__class__)
    assert check_variable(inter, self.__class__)

    if not double_y and not self.drop_fields:
        shift = (self._shift * int(not self.field), 0)

        inter = self._scaler.scale(inter, clip.width, clip.height, shift)

        return self._post_interpolate(clip, inter, double_y)  # type: ignore[arg-type]

    return inter

Eedi2 dataclass

Eedi2(
    mthresh: int = 10,
    lthresh: int = 20,
    vthresh: int = 20,
    estr: int = 2,
    dstr: int = 4,
    maxd: int = 24,
    pp: int = 1,
    cuda: bool = False,
    num_streams: int = 1,
    device_id: int = -1,
    *,
    field: int = 0,
    drop_fields: bool = True,
    transpose_first: bool = False,
    shifter: KernelT = Catrom,
    scaler: ScalerT | None = None
)

Bases: Eedi2DR, Eedi2SS, Antialiaser

Full implementation of the EEDI2 anti-aliaser.

Attributes:

  • cuda (bool) –

    Enables the use of the CUDA variant for processing.

  • device_id (int) –

    Specifies the target CUDA device.

  • drop_fields (bool) –

    Whether to discard the unused field based on the field setting.

  • dstr (int) –

    Defines the required number of edge pixels (>=) in a 3x3 area, in which the center pixel has not been detected as an edge pixel, for the center pixel to be added to the edge map.

  • estr (int) –

    Defines the required number of edge pixels (<=) in a 3x3 area, in which the center pixel has been detected as an edge pixel, for the center pixel to be removed from the edge map.

  • field (int) –

    Controls the mode of operation and which field is kept in the resized image.

  • lthresh (int) –

    Controls the Laplacian threshold used in edge detection.

  • maxd (int) –

    Sets the maximum pixel search distance for determining the interpolation direction.

  • merge_func (Callable[[VideoNode, VideoNode], ConstantFormatVideoNode]) –

    Function used to merge the clips after the double-rate operation.

  • mthresh (int) –

    Controls the edge magnitude threshold used in edge detection for building the initial edge map.

  • num_streams (int) –

    Specifies the number of CUDA streams.

  • pp (int) –

    Enables two optional post-processing modes designed to reduce artifacts by identifying problem areas and applying simple vertical linear interpolation in those areas.

  • scaler (ScalerT | None) –

    Scaler used for additional scaling operations. If None, defaults to shifter.

  • shifter (KernelT) –

    Kernel used for shifting operations. Defaults to Catrom.

  • transpose_first (bool) –

    Transpose the clip before any operation.

  • vthresh (int) –

    Controls the variance threshold used in edge detection.

cuda class-attribute instance-attribute

cuda: bool = False

Enables the use of the CUDA variant for processing. Note that full interpolating is only supported by CUDA.

device_id class-attribute instance-attribute

device_id: int = -1

Specifies the target CUDA device. The default value (-1) triggers auto-detection of the available device.

drop_fields class-attribute instance-attribute

drop_fields: bool = True

Whether to discard the unused field based on the field setting.

dstr class-attribute instance-attribute

dstr: int = 4

Defines the required number of edge pixels (>=) in a 3x3 area, in which the center pixel has not been detected as an edge pixel, for the center pixel to be added to the edge map.

estr class-attribute instance-attribute

estr: int = 2

Defines the required number of edge pixels (<=) in a 3x3 area, in which the center pixel has been detected as an edge pixel, for the center pixel to be removed from the edge map.

field class-attribute instance-attribute

field: int = 0

Controls the mode of operation and which field is kept in the resized image.

  • 0: Same rate, keeps the bottom field.
  • 1: Same rate, keeps the top field.
  • 2: Double rate (alternates each frame), starts with the bottom field.
  • 3: Double rate (alternates each frame), starts with the top field.

lthresh class-attribute instance-attribute

lthresh: int = 20

Controls the Laplacian threshold used in edge detection. Its range is from 0 to 510, with lower values detecting weaker lines.

maxd class-attribute instance-attribute

maxd: int = 24

Sets the maximum pixel search distance for determining the interpolation direction. Larger values allow the algorithm to connect edges and lines with smaller slopes but may introduce artifacts. In some cases, using a smaller maxd value can yield better results than a larger one. The maximum possible value for maxd is 29.

merge_func class-attribute instance-attribute

merge_func: Callable[[VideoNode, VideoNode], ConstantFormatVideoNode] = Merge

Function used to merge the clips after the double-rate operation.

mthresh class-attribute instance-attribute

mthresh: int = 10

Controls the edge magnitude threshold used in edge detection for building the initial edge map. Its range is from 0 to 255, with lower values detecting weaker edges.

num_streams class-attribute instance-attribute

num_streams: int = 1

Specifies the number of CUDA streams.

pp class-attribute instance-attribute

pp: int = 1

Enables two optional post-processing modes designed to reduce artifacts by identifying problem areas and applying simple vertical linear interpolation in those areas. While these modes can improve results, they may slow down processing and slightly reduce edge sharpness.

  • 0 = No post-processing
  • 1 = Check for spatial consistency of final interpolation directions
  • 2 = Check for junctions and corners
  • 3 = Apply both checks from 1 and 2

Only pp=0 and pp=1 are implemented for the CUDA variant.

scaler class-attribute instance-attribute

scaler: ScalerT | None = None

Scaler used for additional scaling operations. If None, defaults to shifter.

shifter class-attribute instance-attribute

shifter: KernelT = Catrom

Kernel used for shifting operations. Defaults to Catrom.

transpose_first class-attribute instance-attribute

transpose_first: bool = False

Transpose the clip before any operation.

vthresh class-attribute instance-attribute

vthresh: int = 20

Controls the variance threshold used in edge detection. Its range is from 0 to a large number, with lower values detecting weaker edges.

aa

aa(
    clip: VideoNode, y: bool = True, x: bool = True, /, **kwargs: Any
) -> ConstantFormatVideoNode
aa(
    clip: VideoNode, dir: AADirection = BOTH, /, **kwargs: Any
) -> ConstantFormatVideoNode
aa(
    clip: VideoNode,
    y_or_dir: bool | AADirection = True,
    x: bool = True,
    /,
    **kwargs: Any,
) -> ConstantFormatVideoNode
Source code
def aa(
    self, clip: vs.VideoNode, y_or_dir: bool | AADirection = True, x: bool = True, /, **kwargs: Any
) -> ConstantFormatVideoNode:
    if isinstance(y_or_dir, AADirection):
        y, x = y_or_dir.to_yx()
    else:
        y = y_or_dir

    clip = self._preprocess_clip(clip)

    return self._do_aa(clip, y, x, **kwargs)

copy

copy(**kwargs: Any) -> Self

Returns a new Antialiaser class replacing specified fields with new values.

Source code
def copy(self, **kwargs: Any) -> Self:
    """Returns a new Antialiaser class replacing specified fields with new values"""
    return replace(self, **kwargs)

draa

draa(
    clip: VideoNode, y: bool = True, x: bool = True, /, **kwargs: Any
) -> ConstantFormatVideoNode
draa(
    clip: VideoNode, dir: AADirection = BOTH, /, **kwargs: Any
) -> ConstantFormatVideoNode
draa(
    clip: VideoNode,
    y_or_dir: bool | AADirection = True,
    x: bool = True,
    /,
    **kwargs: Any,
) -> ConstantFormatVideoNode
Source code
def draa(
    self, clip: vs.VideoNode, y_or_dir: bool | AADirection = True, x: bool = True, /, **kwargs: Any
) -> ConstantFormatVideoNode:
    if isinstance(y_or_dir, AADirection):
        y, x = y_or_dir.to_yx()
    else:
        y = y_or_dir

    clip = self._preprocess_clip(clip)

    original_field = int(self.field)

    self.field = 0
    aa0 = super()._do_aa(clip, y, x, **kwargs)

    self.field = 1
    aa1 = super()._do_aa(clip, y, x, **kwargs)

    self.field = original_field

    return self.merge_func(aa0, aa1)
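`draa` runs the same-rate pass twice, once per field parity, then hands both results to `merge_func`. The control flow in isolation (a toy sketch; `process` and `merge` stand in for `_do_aa` and `merge_func`, and the "clips" are plain values):

```python
def double_rate(process, merge, clip):
    aa0 = process(clip, field=0)  # bottom-field pass
    aa1 = process(clip, field=1)  # top-field pass
    return merge(aa0, aa1)       # combine the two rates

out = double_rate(lambda c, field: (c, field), lambda a, b: [a, b], "clip")
```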

full_interpolate

full_interpolate(
    clip: VideoNode, double_y: bool, double_x: bool, **kwargs: Any
) -> ConstantFormatVideoNode
Source code
def full_interpolate(
    self, clip: vs.VideoNode, double_y: bool, double_x: bool, **kwargs: Any
) -> ConstantFormatVideoNode:
    if not all([double_y, double_x]):
        raise CustomRuntimeError(
            "`double_y` and `double_x` should be set to True to use full_interpolate!",
            self.full_interpolate,
            (double_y, double_x)
        )

    return core.eedi2cuda.Enlarge2(clip, **self.get_aa_args(clip) | kwargs)

get_aa_args

get_aa_args(clip: VideoNode, **kwargs: Any) -> dict[str, Any]
Source code
def get_aa_args(self, clip: vs.VideoNode, **kwargs: Any) -> dict[str, Any]:
    if (pp := self.pp) > 1 and self.cuda:
        from warnings import warn
        warn(
            f"{self.__class__.__name__}: Only `pp=0` and `pp=1` is implemented for the CUDA variant. "
            "Falling back to `pp=1`...",
            Warning
        )
        pp = 1

    args = dict(
        mthresh=self.mthresh,
        lthresh=self.lthresh,
        vthresh=self.vthresh,
        estr=self.estr,
        dstr=self.dstr,
        maxd=self.maxd,
        pp=pp
    )

    if self.cuda:
        args.update(num_streams=self.num_streams, device_id=self.device_id)

    return args | kwargs

get_sr_args

get_sr_args(clip: VideoNode, **kwargs: Any) -> dict[str, Any]

Retrieves arguments for single rating processing.

Parameters:

  • clip

    (VideoNode) –

    Source clip.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments.

Returns:

  • dict[str, Any] –

    Passed keyword arguments.

Source code
def get_sr_args(self, clip: vs.VideoNode, **kwargs: Any) -> dict[str, Any]:
    """
    Retrieves arguments for single rating processing.

    :param clip:        Source clip.
    :param **kwargs:    Additional arguments.
    :return:            Passed keyword arguments.
    """
    return kwargs

get_ss_args

get_ss_args(clip: VideoNode, **kwargs: Any) -> dict[str, Any]

Retrieves arguments for super sampling processing.

Parameters:

  • clip

    (VideoNode) –

    Source clip.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments.

Returns:

  • dict[str, Any] –

    Passed keyword arguments.

Source code
def get_ss_args(self, clip: vs.VideoNode, **kwargs: Any) -> dict[str, Any]:
    """
    Retrieves arguments for super sampling processing.

    :param clip:        Source clip.
    :param **kwargs:    Additional arguments.
    :return:            Passed keyword arguments.
    """
    return kwargs

interpolate

interpolate(
    clip: VideoNode, double_y: bool, **kwargs: Any
) -> ConstantFormatVideoNode
Source code
def interpolate(self, clip: vs.VideoNode, double_y: bool, **kwargs: Any) -> ConstantFormatVideoNode:
    assert check_variable_format(clip, self.__class__)

    kwargs = self.get_aa_args(clip) | kwargs

    if self.cuda:
        inter = core.eedi2cuda.EEDI2(clip, self.field, **kwargs)
    else:
        inter = core.eedi2.EEDI2(clip, self.field, **kwargs)

    if not double_y:
        if self.drop_fields:
            inter = inter.std.SeparateFields(not self.field)[::2]

            inter = self._shifter.shift(inter, (0.5 - 0.75 * self.field, 0))
        else:
            inter = self._scaler.scale(  # type: ignore[assignment]
                inter, clip.width, clip.height, (self._shift * int(not self.field), 0)
            )

    return self._post_interpolate(clip, inter, double_y)  # pyright: ignore[reportArgumentType]

is_full_interpolate_enabled

is_full_interpolate_enabled(x: bool, y: bool) -> bool
Source code
def is_full_interpolate_enabled(self, x: bool, y: bool) -> bool:
    return self.cuda and x and y

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNode

Scale the given clip using super sampling method.

Parameters:

  • clip

    (VideoNode) –

    The input clip to be scaled.

  • width

    (int | None, default: None ) –

    The target width for scaling. If None, the width of the input clip will be used.

  • height

    (int | None, default: None ) –

    The target height for scaling. If None, the height of the input clip will be used.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    A tuple representing the shift values for the y and x axes.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to be passed to the interpolate or full_interpolate methods.

Returns:

  • VideoNode

    The scaled clip.

Source code
@inject_self.cached
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> vs.VideoNode:
    """
    Scale the given clip using super sampling method.

    :param clip:        The input clip to be scaled.
    :param width:       The target width for scaling. If None, the width of the input clip will be used.
    :param height:      The target height for scaling. If None, the height of the input clip will be used.
    :param shift:       A tuple representing the shift values for the y and x axes.
    :param **kwargs:    Additional arguments to be passed to the `interpolate` or `full_interpolate` methods.

    :return:            The scaled clip.
    """
    assert check_progressive(clip, self.scale)

    clip = self._preprocess_clip(clip)
    width, height = self._wh_norm(clip, width, height)

    if (clip.width, clip.height) == (width, height):
        return clip

    kwargs = self.get_aa_args(clip, **kwargs) | self.get_ss_args(clip, **kwargs) | kwargs

    divw, divh = (ceil(size) for size in (width / clip.width, height / clip.height))

    mult_x, mult_y = (int(log2(divs)) for divs in (divw, divh))

    cdivw, cdivh = 1 << clip.format.subsampling_w, 1 << clip.format.subsampling_h

    upscaled = clip

    def _transpose(before: bool, is_width: int, y: int, x: int) -> None:
        nonlocal upscaled

        before = self.transpose_first if before else not self.transpose_first

        if ((before or not y) if is_width else (before and x)):
            upscaled = upscaled.std.Transpose()

    for (y, x) in zip_longest([True] * mult_y, [True] * mult_x, fillvalue=False):
        if isinstance(self, _FullInterpolate) and self.is_full_interpolate_enabled(x, y):
            upscaled = self.full_interpolate(upscaled, y, x, **kwargs)
        else:
            for isx, val in enumerate([y, x]):
                if val:
                    _transpose(True, isx, y, x)

                    upscaled = self.interpolate(upscaled, True, **kwargs)

                    _transpose(False, isx, y, x)

        topshift = leftshift = cleftshift = ctopshift = 0.0

        if y and self._shift:
            topshift = ctopshift = self._shift

            if cdivw == 2 and cdivh == 2:
                ctopshift -= 0.125
            elif cdivw == 1 and cdivh == 2:
                ctopshift += 0.125

        cresshift = 0.0

        if x and self._shift:
            leftshift = cleftshift = self._shift

            if cdivw in {4, 2} and cdivh in {4, 2, 1}:
                cleftshift = self._shift + 0.5

                if cdivw == 4 and cdivh == 1:
                    cresshift = 0.125 * 1
                elif cdivw == 2 and cdivh == 2:
                    cresshift = 0.125 * 2
                elif cdivw == 2 and cdivh == 1:
                    cresshift = 0.125 * 3

                cleftshift -= cresshift

        if isinstance(self._shifter, NoShift):
            if upscaled.format.subsampling_h or upscaled.format.subsampling_w:
                upscaled = Catrom.shift(upscaled, 0, [0, cleftshift + cresshift])
        else:
            upscaled = self._shifter.shift(
                upscaled, [topshift, ctopshift], [leftshift, cleftshift]
            )

    return self._scaler.scale(upscaled, width, height, shift)
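`scale` upsamples by repeated 2x interpolation passes, then resamples down to the exact target size. The number of passes per axis follows from the ceil/log2 lines near the top of the method; the arithmetic in isolation:

```python
from math import ceil, log2

def doubling_passes(src: int, dst: int) -> int:
    # Mirrors `divw = ceil(width / clip.width)` followed by
    # `mult_x = int(log2(divw))` in the method above.
    return int(log2(ceil(dst / src)))
```

For example, a 1.5x target still needs one full 2x pass before the final downscale.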

shift_interpolate

shift_interpolate(
    clip: VideoNode, inter: VideoNode, double_y: bool
) -> ConstantFormatVideoNode

Applies a post-shifting interpolation operation to the interpolated clip.

Parameters:

  • clip

    (VideoNode) –

    Source clip.

  • inter

    (VideoNode) –

    Interpolated clip.

  • double_y

    (bool) –

    Whether the height has been doubled.

Returns:

  • ConstantFormatVideoNode

    Shifted clip.

Source code
def shift_interpolate(
    self,
    clip: vs.VideoNode,
    inter: vs.VideoNode,
    double_y: bool,
) -> ConstantFormatVideoNode:
    """
    Applies a post-shifting interpolation operation to the interpolated clip.

    :param clip:        Source clip.
    :param inter:       Interpolated clip.
    :param double_y:    Whether the height has been doubled.
    :return:            Shifted clip.
    """
    assert check_variable(clip, self.__class__)
    assert check_variable(inter, self.__class__)

    if not double_y and not self.drop_fields:
        shift = (self._shift * int(not self.field), 0)

        inter = self._scaler.scale(inter, clip.width, clip.height, shift)

        return self._post_interpolate(clip, inter, double_y)  # type: ignore[arg-type]

    return inter

Eedi2DR dataclass

Eedi2DR(
    mthresh: int = 10,
    lthresh: int = 20,
    vthresh: int = 20,
    estr: int = 2,
    dstr: int = 4,
    maxd: int = 24,
    pp: int = 1,
    cuda: bool = False,
    num_streams: int = 1,
    device_id: int = -1,
    *,
    field: int = 0,
    drop_fields: bool = True,
    transpose_first: bool = False,
    shifter: KernelT = Catrom,
    scaler: ScalerT | None = None
)

Bases: Eedi2SR, DoubleRater

Concrete implementation of EEDI2 used as a double-rater.

Attributes:

  • cuda (bool) –

    Enables the use of the CUDA variant for processing.

  • device_id (int) –

    Specifies the target CUDA device.

  • drop_fields (bool) –

    Whether to discard the unused field based on the field setting.

  • dstr (int) –

    Defines the required number of edge pixels (>=) in a 3x3 area, in which the center pixel has not been detected as an edge pixel, for the center pixel to be added to the edge map.

  • estr (int) –

    Defines the required number of edge pixels (<=) in a 3x3 area, in which the center pixel has been detected as an edge pixel, for the center pixel to be removed from the edge map.

  • field (int) –

    Controls the mode of operation and which field is kept in the resized image.

  • lthresh (int) –

    Controls the Laplacian threshold used in edge detection.

  • maxd (int) –

    Sets the maximum pixel search distance for determining the interpolation direction.

  • merge_func (Callable[[VideoNode, VideoNode], ConstantFormatVideoNode]) –

    Function used to merge the clips after the double-rate operation.

  • mthresh (int) –

    Controls the edge magnitude threshold used in edge detection for building the initial edge map.

  • num_streams (int) –

    Specifies the number of CUDA streams.

  • pp (int) –

    Enables two optional post-processing modes designed to reduce artifacts by identifying problem areas and applying simple vertical linear interpolation in those areas.

  • scaler (ScalerT | None) –

    Scaler used for additional scaling operations. If None, defaults to shifter.

  • shifter (KernelT) –

    Kernel used for shifting operations. Defaults to Catrom.

  • transpose_first (bool) –

    Transpose the clip before any operation.

  • vthresh (int) –

    Controls the variance threshold used in edge detection.

cuda class-attribute instance-attribute

cuda: bool = False

Enables the use of the CUDA variant for processing. Note that full interpolating is only supported by CUDA.

device_id class-attribute instance-attribute

device_id: int = -1

Specifies the target CUDA device. The default value (-1) triggers auto-detection of the available device.

drop_fields class-attribute instance-attribute

drop_fields: bool = True

Whether to discard the unused field based on the field setting.

dstr class-attribute instance-attribute

dstr: int = 4

Defines the required number of edge pixels (>=) in a 3x3 area, in which the center pixel has not been detected as an edge pixel, for the center pixel to be added to the edge map.

estr class-attribute instance-attribute

estr: int = 2

Defines the required number of edge pixels (<=) in a 3x3 area, in which the center pixel has been detected as an edge pixel, for the center pixel to be removed from the edge map.

field class-attribute instance-attribute

field: int = 0

Controls the mode of operation and which field is kept in the resized image.

  • 0: Same rate, keeps the bottom field.
  • 1: Same rate, keeps the top field.
  • 2: Double rate (alternates each frame), starts with the bottom field.
  • 3: Double rate (alternates each frame), starts with the top field.

lthresh class-attribute instance-attribute

lthresh: int = 20

Controls the Laplacian threshold used in edge detection. Its range is from 0 to 510, with lower values detecting weaker lines.

maxd class-attribute instance-attribute

maxd: int = 24

Sets the maximum pixel search distance for determining the interpolation direction. Larger values allow the algorithm to connect edges and lines with smaller slopes but may introduce artifacts. In some cases, using a smaller maxd value can yield better results than a larger one. The maximum possible value for maxd is 29.

merge_func class-attribute instance-attribute

merge_func: Callable[[VideoNode, VideoNode], ConstantFormatVideoNode] = Merge

Function used to merge the clips after the double-rate operation.

mthresh class-attribute instance-attribute

mthresh: int = 10

Controls the edge magnitude threshold used in edge detection for building the initial edge map. Its range is from 0 to 255, with lower values detecting weaker edges.

num_streams class-attribute instance-attribute

num_streams: int = 1

Specifies the number of CUDA streams.

pp class-attribute instance-attribute

pp: int = 1

Enables two optional post-processing modes designed to reduce artifacts by identifying problem areas and applying simple vertical linear interpolation in those areas. While these modes can improve results, they may slow down processing and slightly reduce edge sharpness.

- 0 = No post-processing
- 1 = Check for spatial consistency of final interpolation directions
- 2 = Check for junctions and corners
- 3 = Apply both checks from 1 and 2

Only pp=0 and pp=1 are implemented for the CUDA variant.

scaler class-attribute instance-attribute

scaler: ScalerT | None = None

Scaler used for additional scaling operations. If None, defaults to shifter.

shifter class-attribute instance-attribute

shifter: KernelT = Catrom

Kernel used for shifting operations. Defaults to Catrom.

transpose_first class-attribute instance-attribute

transpose_first: bool = False

Transpose the clip before any operation.

vthresh class-attribute instance-attribute

vthresh: int = 20

Controls the variance threshold used in edge detection. Its range starts at 0 with no fixed upper bound; lower values detect weaker edges.

aa

aa(
    clip: VideoNode, y: bool = True, x: bool = True, /, **kwargs: Any
) -> ConstantFormatVideoNode
aa(
    clip: VideoNode, dir: AADirection = BOTH, /, **kwargs: Any
) -> ConstantFormatVideoNode
aa(
    clip: VideoNode,
    y_or_dir: bool | AADirection = True,
    x: bool = True,
    /,
    **kwargs: Any,
) -> ConstantFormatVideoNode
Source code
def aa(
    self, clip: vs.VideoNode, y_or_dir: bool | AADirection = True, x: bool = True, /, **kwargs: Any
) -> ConstantFormatVideoNode:
    if isinstance(y_or_dir, AADirection):
        y, x = y_or_dir.to_yx()
    else:
        y = y_or_dir

    clip = self._preprocess_clip(clip)

    return self._do_aa(clip, y, x, **kwargs)

copy

copy(**kwargs: Any) -> Self

Returns a new Antialiaser instance with the specified fields replaced by new values.

Source code
def copy(self, **kwargs: Any) -> Self:
    """Returns a new Antialiaser class replacing specified fields with new values"""
    return replace(self, **kwargs)
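Since EEDI2 is a dataclass, `copy` is a thin wrapper around `dataclasses.replace`: it returns a new instance and leaves the original untouched. A minimal sketch of the same pattern on a stand-in dataclass (`Demo` is hypothetical and only illustrates the mechanism; the real class has many more fields):

```python
from dataclasses import dataclass, replace

@dataclass
class Demo:
    # Hypothetical stand-in carrying two of the EEDI2 fields.
    mthresh: int = 10
    cuda: bool = False

    def copy(self, **kwargs):
        # Same pattern as EEDI2.copy: new instance, selected fields replaced.
        return replace(self, **kwargs)

base = Demo()
gpu = base.copy(cuda=True)  # base itself is unchanged
```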

draa

draa(
    clip: VideoNode, y: bool = True, x: bool = True, /, **kwargs: Any
) -> ConstantFormatVideoNode
draa(
    clip: VideoNode, dir: AADirection = BOTH, /, **kwargs: Any
) -> ConstantFormatVideoNode
draa(
    clip: VideoNode,
    y_or_dir: bool | AADirection = True,
    x: bool = True,
    /,
    **kwargs: Any,
) -> ConstantFormatVideoNode
Source code
def draa(
    self, clip: vs.VideoNode, y_or_dir: bool | AADirection = True, x: bool = True, /, **kwargs: Any
) -> ConstantFormatVideoNode:
    if isinstance(y_or_dir, AADirection):
        y, x = y_or_dir.to_yx()
    else:
        y = y_or_dir

    clip = self._preprocess_clip(clip)

    original_field = int(self.field)

    self.field = 0
    aa0 = super()._do_aa(clip, y, x, **kwargs)

    self.field = 1
    aa1 = super()._do_aa(clip, y, x, **kwargs)

    self.field = original_field

    return self.merge_func(aa0, aa1)

full_interpolate

full_interpolate(
    clip: VideoNode, double_y: bool, double_x: bool, **kwargs: Any
) -> ConstantFormatVideoNode
Source code
def full_interpolate(
    self, clip: vs.VideoNode, double_y: bool, double_x: bool, **kwargs: Any
) -> ConstantFormatVideoNode:
    if not all([double_y, double_x]):
        raise CustomRuntimeError(
            "`double_y` and `double_x` should be set to True to use full_interpolate!",
            self.full_interpolate,
            (double_y, double_x)
        )

    return core.eedi2cuda.Enlarge2(clip, **self.get_aa_args(clip) | kwargs)

get_aa_args

get_aa_args(clip: VideoNode, **kwargs: Any) -> dict[str, Any]
Source code
def get_aa_args(self, clip: vs.VideoNode, **kwargs: Any) -> dict[str, Any]:
    if (pp := self.pp) > 1 and self.cuda:
        from warnings import warn
        warn(
            f"{self.__class__.__name__}: Only `pp=0` and `pp=1` is implemented for the CUDA variant. "
            "Falling back to `pp=1`...",
            Warning
        )
        pp = 1

    args = dict(
        mthresh=self.mthresh,
        lthresh=self.lthresh,
        vthresh=self.vthresh,
        estr=self.estr,
        dstr=self.dstr,
        maxd=self.maxd,
        pp=pp
    )

    if self.cuda:
        args.update(num_streams=self.num_streams, device_id=self.device_id)

    return args | kwargs
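The final `args | kwargs` union is what lets caller-supplied keyword arguments override the attribute-derived defaults: on key collisions, the right-hand operand of the dict union wins. A small standalone sketch of that precedence (values are illustrative):

```python
# Defaults built from the dataclass attributes (illustrative subset).
args = dict(mthresh=10, lthresh=20, pp=1)

# Explicit override passed by the caller.
kwargs = dict(pp=0)

# dict union (PEP 584): keys from kwargs replace matching keys in args.
merged = args | kwargs
```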

get_sr_args

get_sr_args(clip: VideoNode, **kwargs: Any) -> dict[str, Any]

Retrieves arguments for single rating processing.

Parameters:

  • clip

    (VideoNode) –

    Source clip.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments.

Returns:

Source code
def get_sr_args(self, clip: vs.VideoNode, **kwargs: Any) -> dict[str, Any]:
    """
    Retrieves arguments for single rating processing.

    :param clip:        Source clip.
    :param **kwargs:    Additional arguments.
    :return:            Passed keyword arguments.
    """
    return kwargs

interpolate

interpolate(
    clip: VideoNode, double_y: bool, **kwargs: Any
) -> ConstantFormatVideoNode
Source code
def interpolate(self, clip: vs.VideoNode, double_y: bool, **kwargs: Any) -> ConstantFormatVideoNode:
    assert check_variable_format(clip, self.__class__)

    kwargs = self.get_aa_args(clip) | kwargs

    if self.cuda:
        inter = core.eedi2cuda.EEDI2(clip, self.field, **kwargs)
    else:
        inter = core.eedi2.EEDI2(clip, self.field, **kwargs)

    if not double_y:
        if self.drop_fields:
            inter = inter.std.SeparateFields(not self.field)[::2]

            inter = self._shifter.shift(inter, (0.5 - 0.75 * self.field, 0))
        else:
            inter = self._scaler.scale(  # type: ignore[assignment]
                inter, clip.width, clip.height, (self._shift * int(not self.field), 0)
            )

    return self._post_interpolate(clip, inter, double_y)  # pyright: ignore[reportArgumentType]
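The constant in the `drop_fields` branch encodes a per-field sub-pixel correction: `0.5 - 0.75 * self.field` evaluates to +0.5 when the bottom field is kept (`field=0`) and -0.25 when the top field is kept (`field=1`). Just that arithmetic, isolated as a sketch:

```python
def field_drop_shift(field: int) -> float:
    # Vertical shift applied after SeparateFields drops the unused field,
    # mirroring the expression used in interpolate() above.
    return 0.5 - 0.75 * field
```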

is_full_interpolate_enabled

is_full_interpolate_enabled(x: bool, y: bool) -> bool
Source code
def is_full_interpolate_enabled(self, x: bool, y: bool) -> bool:
    return self.cuda and x and y

shift_interpolate

shift_interpolate(
    clip: VideoNode, inter: VideoNode, double_y: bool
) -> ConstantFormatVideoNode

Applies a post-shifting interpolation operation to the interpolated clip.

Parameters:

  • clip

    (VideoNode) –

    Source clip.

  • inter

    (VideoNode) –

    Interpolated clip.

  • double_y

    (bool) –

    Whether the height has been doubled

Returns:

  • ConstantFormatVideoNode

    Shifted clip.

Source code
def shift_interpolate(
    self,
    clip: vs.VideoNode,
    inter: vs.VideoNode,
    double_y: bool,
) -> ConstantFormatVideoNode:
    """
    Applies a post-shifting interpolation operation to the interpolated clip.

    :param clip:        Source clip.
    :param inter:       Interpolated clip.
    :param double_y:    Whether the height has been doubled
    :return:            Shifted clip.
    """
    assert check_variable(clip, self.__class__)
    assert check_variable(inter, self.__class__)

    if not double_y and not self.drop_fields:
        shift = (self._shift * int(not self.field), 0)

        inter = self._scaler.scale(inter, clip.width, clip.height, shift)

        return self._post_interpolate(clip, inter, double_y)  # type: ignore[arg-type]

    return inter

Eedi2SR dataclass

Eedi2SR(
    mthresh: int = 10,
    lthresh: int = 20,
    vthresh: int = 20,
    estr: int = 2,
    dstr: int = 4,
    maxd: int = 24,
    pp: int = 1,
    cuda: bool = False,
    num_streams: int = 1,
    device_id: int = -1,
    *,
    field: int = 0,
    drop_fields: bool = True,
    transpose_first: bool = False,
    shifter: KernelT = Catrom,
    scaler: ScalerT | None = None
)

Bases: EEDI2, SingleRater

Concrete implementation of EEDI2 used as a single-rater.

Methods:

Attributes:

  • cuda (bool) –

    Enables the use of the CUDA variant for processing.

  • device_id (int) –

    Specifies the target CUDA device.

  • drop_fields (bool) –

    Whether to discard the unused field based on the field setting.

  • dstr (int) –

    Defines the required number of edge pixels (>=) in a 3x3 area, in which the center pixel

  • estr (int) –

    Defines the required number of edge pixels (<=) in a 3x3 area, in which the center pixel

  • field (int) –

    Controls the mode of operation and which field is kept in the resized image.

  • lthresh (int) –

    Controls the Laplacian threshold used in edge detection.

  • maxd (int) –

    Sets the maximum pixel search distance for determining the interpolation direction.

  • mthresh (int) –

    Controls the edge magnitude threshold used in edge detection for building the initial edge map.

  • num_streams (int) –

    Specifies the number of CUDA streams.

  • pp (int) –

    Enables two optional post-processing modes designed to reduce artifacts by identifying problem areas

  • scaler (ScalerT | None) –

    Scaler used for additional scaling operations. If None, defaults to shifter.

  • shifter (KernelT) –

    Kernel used for shifting operations. Defaults to Catrom.

  • transpose_first (bool) –

    Transpose the clip before any operation.

  • vthresh (int) –

    Controls the variance threshold used in edge detection.

cuda class-attribute instance-attribute

cuda: bool = False

Enables the use of the CUDA variant for processing. Note that full interpolating is only supported by CUDA.

device_id class-attribute instance-attribute

device_id: int = -1

Specifies the target CUDA device. The default value (-1) triggers auto-detection of the available device.

drop_fields class-attribute instance-attribute

drop_fields: bool = True

Whether to discard the unused field based on the field setting.

dstr class-attribute instance-attribute

dstr: int = 4

Defines the required number of edge pixels (>=) in a 3x3 area, in which the center pixel has not been detected as an edge pixel, for the center pixel to be added to the edge map.

estr class-attribute instance-attribute

estr: int = 2

Defines the required number of edge pixels (<=) in a 3x3 area, in which the center pixel has been detected as an edge pixel, for the center pixel to be removed from the edge map.

field class-attribute instance-attribute

field: int = 0

Controls the mode of operation and which field is kept in the resized image.

- 0: Same rate, keeps the bottom field.
- 1: Same rate, keeps the top field.
- 2: Double rate (alternates each frame), starts with the bottom field.
- 3: Double rate (alternates each frame), starts with the top field.

lthresh class-attribute instance-attribute

lthresh: int = 20

Controls the Laplacian threshold used in edge detection. Its range is from 0 to 510, with lower values detecting weaker lines.

maxd class-attribute instance-attribute

maxd: int = 24

Sets the maximum pixel search distance for determining the interpolation direction. Larger values allow the algorithm to connect edges and lines with smaller slopes but may introduce artifacts. In some cases, using a smaller maxd value can yield better results than a larger one. The maximum possible value for maxd is 29.

mthresh class-attribute instance-attribute

mthresh: int = 10

Controls the edge magnitude threshold used in edge detection for building the initial edge map. Its range is from 0 to 255, with lower values detecting weaker edges.

num_streams class-attribute instance-attribute

num_streams: int = 1

Specifies the number of CUDA streams.

pp class-attribute instance-attribute

pp: int = 1

Enables two optional post-processing modes designed to reduce artifacts by identifying problem areas and applying simple vertical linear interpolation in those areas. While these modes can improve results, they may slow down processing and slightly reduce edge sharpness.

- 0 = No post-processing
- 1 = Check for spatial consistency of final interpolation directions
- 2 = Check for junctions and corners
- 3 = Apply both checks from 1 and 2

Only pp=0 and pp=1 are implemented for the CUDA variant.

scaler class-attribute instance-attribute

scaler: ScalerT | None = None

Scaler used for additional scaling operations. If None, defaults to shifter.

shifter class-attribute instance-attribute

shifter: KernelT = Catrom

Kernel used for shifting operations. Defaults to Catrom.

transpose_first class-attribute instance-attribute

transpose_first: bool = False

Transpose the clip before any operation.

vthresh class-attribute instance-attribute

vthresh: int = 20

Controls the variance threshold used in edge detection. Its range starts at 0 with no fixed upper bound; lower values detect weaker edges.

aa

aa(
    clip: VideoNode, y: bool = True, x: bool = True, /, **kwargs: Any
) -> ConstantFormatVideoNode
aa(
    clip: VideoNode, dir: AADirection = BOTH, /, **kwargs: Any
) -> ConstantFormatVideoNode
aa(
    clip: VideoNode,
    y_or_dir: bool | AADirection = True,
    x: bool = True,
    /,
    **kwargs: Any,
) -> ConstantFormatVideoNode
Source code
def aa(
    self, clip: vs.VideoNode, y_or_dir: bool | AADirection = True, x: bool = True, /, **kwargs: Any
) -> ConstantFormatVideoNode:
    if isinstance(y_or_dir, AADirection):
        y, x = y_or_dir.to_yx()
    else:
        y = y_or_dir

    clip = self._preprocess_clip(clip)

    return self._do_aa(clip, y, x, **kwargs)

copy

copy(**kwargs: Any) -> Self

Returns a new Antialiaser instance with the specified fields replaced by new values.

Source code
def copy(self, **kwargs: Any) -> Self:
    """Returns a new Antialiaser class replacing specified fields with new values"""
    return replace(self, **kwargs)

full_interpolate

full_interpolate(
    clip: VideoNode, double_y: bool, double_x: bool, **kwargs: Any
) -> ConstantFormatVideoNode
Source code
def full_interpolate(
    self, clip: vs.VideoNode, double_y: bool, double_x: bool, **kwargs: Any
) -> ConstantFormatVideoNode:
    if not all([double_y, double_x]):
        raise CustomRuntimeError(
            "`double_y` and `double_x` should be set to True to use full_interpolate!",
            self.full_interpolate,
            (double_y, double_x)
        )

    return core.eedi2cuda.Enlarge2(clip, **self.get_aa_args(clip) | kwargs)

get_aa_args

get_aa_args(clip: VideoNode, **kwargs: Any) -> dict[str, Any]
Source code
def get_aa_args(self, clip: vs.VideoNode, **kwargs: Any) -> dict[str, Any]:
    if (pp := self.pp) > 1 and self.cuda:
        from warnings import warn
        warn(
            f"{self.__class__.__name__}: Only `pp=0` and `pp=1` is implemented for the CUDA variant. "
            "Falling back to `pp=1`...",
            Warning
        )
        pp = 1

    args = dict(
        mthresh=self.mthresh,
        lthresh=self.lthresh,
        vthresh=self.vthresh,
        estr=self.estr,
        dstr=self.dstr,
        maxd=self.maxd,
        pp=pp
    )

    if self.cuda:
        args.update(num_streams=self.num_streams, device_id=self.device_id)

    return args | kwargs

get_sr_args

get_sr_args(clip: VideoNode, **kwargs: Any) -> dict[str, Any]

Retrieves arguments for single rating processing.

Parameters:

  • clip

    (VideoNode) –

    Source clip.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments.

Returns:

Source code
def get_sr_args(self, clip: vs.VideoNode, **kwargs: Any) -> dict[str, Any]:
    """
    Retrieves arguments for single rating processing.

    :param clip:        Source clip.
    :param **kwargs:    Additional arguments.
    :return:            Passed keyword arguments.
    """
    return kwargs

interpolate

interpolate(
    clip: VideoNode, double_y: bool, **kwargs: Any
) -> ConstantFormatVideoNode
Source code
def interpolate(self, clip: vs.VideoNode, double_y: bool, **kwargs: Any) -> ConstantFormatVideoNode:
    assert check_variable_format(clip, self.__class__)

    kwargs = self.get_aa_args(clip) | kwargs

    if self.cuda:
        inter = core.eedi2cuda.EEDI2(clip, self.field, **kwargs)
    else:
        inter = core.eedi2.EEDI2(clip, self.field, **kwargs)

    if not double_y:
        if self.drop_fields:
            inter = inter.std.SeparateFields(not self.field)[::2]

            inter = self._shifter.shift(inter, (0.5 - 0.75 * self.field, 0))
        else:
            inter = self._scaler.scale(  # type: ignore[assignment]
                inter, clip.width, clip.height, (self._shift * int(not self.field), 0)
            )

    return self._post_interpolate(clip, inter, double_y)  # pyright: ignore[reportArgumentType]

is_full_interpolate_enabled

is_full_interpolate_enabled(x: bool, y: bool) -> bool
Source code
def is_full_interpolate_enabled(self, x: bool, y: bool) -> bool:
    return self.cuda and x and y

shift_interpolate

shift_interpolate(
    clip: VideoNode, inter: VideoNode, double_y: bool
) -> ConstantFormatVideoNode

Applies a post-shifting interpolation operation to the interpolated clip.

Parameters:

  • clip

    (VideoNode) –

    Source clip.

  • inter

    (VideoNode) –

    Interpolated clip.

  • double_y

    (bool) –

    Whether the height has been doubled

Returns:

  • ConstantFormatVideoNode

    Shifted clip.

Source code
def shift_interpolate(
    self,
    clip: vs.VideoNode,
    inter: vs.VideoNode,
    double_y: bool,
) -> ConstantFormatVideoNode:
    """
    Applies a post-shifting interpolation operation to the interpolated clip.

    :param clip:        Source clip.
    :param inter:       Interpolated clip.
    :param double_y:    Whether the height has been doubled
    :return:            Shifted clip.
    """
    assert check_variable(clip, self.__class__)
    assert check_variable(inter, self.__class__)

    if not double_y and not self.drop_fields:
        shift = (self._shift * int(not self.field), 0)

        inter = self._scaler.scale(inter, clip.width, clip.height, shift)

        return self._post_interpolate(clip, inter, double_y)  # type: ignore[arg-type]

    return inter

Eedi2SS dataclass

Eedi2SS(
    mthresh: int = 10,
    lthresh: int = 20,
    vthresh: int = 20,
    estr: int = 2,
    dstr: int = 4,
    maxd: int = 24,
    pp: int = 1,
    cuda: bool = False,
    num_streams: int = 1,
    device_id: int = -1,
    *,
    field: int = 0,
    drop_fields: bool = True,
    transpose_first: bool = False,
    shifter: KernelT = Catrom,
    scaler: ScalerT | None = None
)

Bases: EEDI2, SuperSampler

Concrete implementation of EEDI2 used as a supersampler.

Methods:

Attributes:

  • cuda (bool) –

    Enables the use of the CUDA variant for processing.

  • device_id (int) –

    Specifies the target CUDA device.

  • drop_fields (bool) –

    Whether to discard the unused field based on the field setting.

  • dstr (int) –

    Defines the required number of edge pixels (>=) in a 3x3 area, in which the center pixel

  • estr (int) –

    Defines the required number of edge pixels (<=) in a 3x3 area, in which the center pixel

  • field (int) –

    Controls the mode of operation and which field is kept in the resized image.

  • lthresh (int) –

    Controls the Laplacian threshold used in edge detection.

  • maxd (int) –

    Sets the maximum pixel search distance for determining the interpolation direction.

  • mthresh (int) –

    Controls the edge magnitude threshold used in edge detection for building the initial edge map.

  • num_streams (int) –

    Specifies the number of CUDA streams.

  • pp (int) –

    Enables two optional post-processing modes designed to reduce artifacts by identifying problem areas

  • scaler (ScalerT | None) –

    Scaler used for additional scaling operations. If None, defaults to shifter.

  • shifter (KernelT) –

    Kernel used for shifting operations. Defaults to Catrom.

  • transpose_first (bool) –

    Transpose the clip before any operation.

  • vthresh (int) –

    Controls the variance threshold used in edge detection.

cuda class-attribute instance-attribute

cuda: bool = False

Enables the use of the CUDA variant for processing. Note that full interpolating is only supported by CUDA.

device_id class-attribute instance-attribute

device_id: int = -1

Specifies the target CUDA device. The default value (-1) triggers auto-detection of the available device.

drop_fields class-attribute instance-attribute

drop_fields: bool = True

Whether to discard the unused field based on the field setting.

dstr class-attribute instance-attribute

dstr: int = 4

Defines the required number of edge pixels (>=) in a 3x3 area, in which the center pixel has not been detected as an edge pixel, for the center pixel to be added to the edge map.

estr class-attribute instance-attribute

estr: int = 2

Defines the required number of edge pixels (<=) in a 3x3 area, in which the center pixel has been detected as an edge pixel, for the center pixel to be removed from the edge map.

field class-attribute instance-attribute

field: int = 0

Controls the mode of operation and which field is kept in the resized image.

- 0: Same rate, keeps the bottom field.
- 1: Same rate, keeps the top field.
- 2: Double rate (alternates each frame), starts with the bottom field.
- 3: Double rate (alternates each frame), starts with the top field.

lthresh class-attribute instance-attribute

lthresh: int = 20

Controls the Laplacian threshold used in edge detection. Its range is from 0 to 510, with lower values detecting weaker lines.

maxd class-attribute instance-attribute

maxd: int = 24

Sets the maximum pixel search distance for determining the interpolation direction. Larger values allow the algorithm to connect edges and lines with smaller slopes but may introduce artifacts. In some cases, using a smaller maxd value can yield better results than a larger one. The maximum possible value for maxd is 29.

mthresh class-attribute instance-attribute

mthresh: int = 10

Controls the edge magnitude threshold used in edge detection for building the initial edge map. Its range is from 0 to 255, with lower values detecting weaker edges.

num_streams class-attribute instance-attribute

num_streams: int = 1

Specifies the number of CUDA streams.

pp class-attribute instance-attribute

pp: int = 1

Enables two optional post-processing modes designed to reduce artifacts by identifying problem areas and applying simple vertical linear interpolation in those areas. While these modes can improve results, they may slow down processing and slightly reduce edge sharpness.

- 0 = No post-processing
- 1 = Check for spatial consistency of final interpolation directions
- 2 = Check for junctions and corners
- 3 = Apply both checks from 1 and 2

Only pp=0 and pp=1 are implemented for the CUDA variant.

scaler class-attribute instance-attribute

scaler: ScalerT | None = None

Scaler used for additional scaling operations. If None, defaults to shifter.

shifter class-attribute instance-attribute

shifter: KernelT = Catrom

Kernel used for shifting operations. Defaults to Catrom.

transpose_first class-attribute instance-attribute

transpose_first: bool = False

Transpose the clip before any operation.

vthresh class-attribute instance-attribute

vthresh: int = 20

Controls the variance threshold used in edge detection. Its range starts at 0 with no fixed upper bound; lower values detect weaker edges.

copy

copy(**kwargs: Any) -> Self

Returns a new Antialiaser instance with the specified fields replaced by new values.

Source code
def copy(self, **kwargs: Any) -> Self:
    """Returns a new Antialiaser class replacing specified fields with new values"""
    return replace(self, **kwargs)

full_interpolate

full_interpolate(
    clip: VideoNode, double_y: bool, double_x: bool, **kwargs: Any
) -> ConstantFormatVideoNode
Source code
def full_interpolate(
    self, clip: vs.VideoNode, double_y: bool, double_x: bool, **kwargs: Any
) -> ConstantFormatVideoNode:
    if not all([double_y, double_x]):
        raise CustomRuntimeError(
            "`double_y` and `double_x` should be set to True to use full_interpolate!",
            self.full_interpolate,
            (double_y, double_x)
        )

    return core.eedi2cuda.Enlarge2(clip, **self.get_aa_args(clip) | kwargs)

get_aa_args

get_aa_args(clip: VideoNode, **kwargs: Any) -> dict[str, Any]
Source code
def get_aa_args(self, clip: vs.VideoNode, **kwargs: Any) -> dict[str, Any]:
    if (pp := self.pp) > 1 and self.cuda:
        from warnings import warn
        warn(
            f"{self.__class__.__name__}: Only `pp=0` and `pp=1` is implemented for the CUDA variant. "
            "Falling back to `pp=1`...",
            Warning
        )
        pp = 1

    args = dict(
        mthresh=self.mthresh,
        lthresh=self.lthresh,
        vthresh=self.vthresh,
        estr=self.estr,
        dstr=self.dstr,
        maxd=self.maxd,
        pp=pp
    )

    if self.cuda:
        args.update(num_streams=self.num_streams, device_id=self.device_id)

    return args | kwargs

get_ss_args

get_ss_args(clip: VideoNode, **kwargs: Any) -> dict[str, Any]

Retrieves arguments for super sampling processing.

Parameters:

  • clip

    (VideoNode) –

    Source clip.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments.

Returns:

Source code
def get_ss_args(self, clip: vs.VideoNode, **kwargs: Any) -> dict[str, Any]:
    """
    Retrieves arguments for super sampling processing.

    :param clip:        Source clip.
    :param **kwargs:    Additional arguments.
    :return:            Passed keyword arguments.
    """
    return kwargs

interpolate

interpolate(
    clip: VideoNode, double_y: bool, **kwargs: Any
) -> ConstantFormatVideoNode
Source code
def interpolate(self, clip: vs.VideoNode, double_y: bool, **kwargs: Any) -> ConstantFormatVideoNode:
    assert check_variable_format(clip, self.__class__)

    kwargs = self.get_aa_args(clip) | kwargs

    if self.cuda:
        inter = core.eedi2cuda.EEDI2(clip, self.field, **kwargs)
    else:
        inter = core.eedi2.EEDI2(clip, self.field, **kwargs)

    if not double_y:
        if self.drop_fields:
            inter = inter.std.SeparateFields(not self.field)[::2]

            inter = self._shifter.shift(inter, (0.5 - 0.75 * self.field, 0))
        else:
            inter = self._scaler.scale(  # type: ignore[assignment]
                inter, clip.width, clip.height, (self._shift * int(not self.field), 0)
            )

    return self._post_interpolate(clip, inter, double_y)  # pyright: ignore[reportArgumentType]

is_full_interpolate_enabled

is_full_interpolate_enabled(x: bool, y: bool) -> bool
Source code
def is_full_interpolate_enabled(self, x: bool, y: bool) -> bool:
    return self.cuda and x and y

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNode

Scale the given clip using super sampling method.

Parameters:

  • clip

    (VideoNode) –

    The input clip to be scaled.

  • width

    (int | None, default: None ) –

    The target width for scaling. If None, the width of the input clip will be used.

  • height

    (int | None, default: None ) –

    The target height for scaling. If None, the height of the input clip will be used.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    A tuple representing the shift values for the y and x axes.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to be passed to the interpolate or full_interpolate methods.

Returns:

  • VideoNode

    The scaled clip.

Source code
@inject_self.cached
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> vs.VideoNode:
    """
    Scale the given clip using super sampling method.

    :param clip:        The input clip to be scaled.
    :param width:       The target width for scaling. If None, the width of the input clip will be used.
    :param height:      The target height for scaling. If None, the height of the input clip will be used.
    :param shift:       A tuple representing the shift values for the y and x axes.
    :param **kwargs:    Additional arguments to be passed to the `interpolate` or `full_interpolate` methods.

    :return:            The scaled clip.
    """
    assert check_progressive(clip, self.scale)

    clip = self._preprocess_clip(clip)
    width, height = self._wh_norm(clip, width, height)

    if (clip.width, clip.height) == (width, height):
        return clip

    kwargs = self.get_aa_args(clip, **kwargs) | self.get_ss_args(clip, **kwargs) | kwargs

    divw, divh = (ceil(size) for size in (width / clip.width, height / clip.height))

    mult_x, mult_y = (int(log2(divs)) for divs in (divw, divh))

    cdivw, cdivh = 1 << clip.format.subsampling_w, 1 << clip.format.subsampling_h

    upscaled = clip

    def _transpose(before: bool, is_width: int, y: int, x: int) -> None:
        nonlocal upscaled

        before = self.transpose_first if before else not self.transpose_first

        if ((before or not y) if is_width else (before and x)):
            upscaled = upscaled.std.Transpose()

    for (y, x) in zip_longest([True] * mult_y, [True] * mult_x, fillvalue=False):
        if isinstance(self, _FullInterpolate) and self.is_full_interpolate_enabled(x, y):
            upscaled = self.full_interpolate(upscaled, y, x, **kwargs)
        else:
            for isx, val in enumerate([y, x]):
                if val:
                    _transpose(True, isx, y, x)

                    upscaled = self.interpolate(upscaled, True, **kwargs)

                    _transpose(False, isx, y, x)

        topshift = leftshift = cleftshift = ctopshift = 0.0

        if y and self._shift:
            topshift = ctopshift = self._shift

            if cdivw == 2 and cdivh == 2:
                ctopshift -= 0.125
            elif cdivw == 1 and cdivh == 2:
                ctopshift += 0.125

        cresshift = 0.0

        if x and self._shift:
            leftshift = cleftshift = self._shift

            if cdivw in {4, 2} and cdivh in {4, 2, 1}:
                cleftshift = self._shift + 0.5

                if cdivw == 4 and cdivh == 1:
                    cresshift = 0.125 * 1
                elif cdivw == 2 and cdivh == 2:
                    cresshift = 0.125 * 2
                elif cdivw == 2 and cdivh == 1:
                    cresshift = 0.125 * 3

                cleftshift -= cresshift

        if isinstance(self._shifter, NoShift):
            if upscaled.format.subsampling_h or upscaled.format.subsampling_w:
                upscaled = Catrom.shift(upscaled, 0, [0, cleftshift + cresshift])
        else:
            upscaled = self._shifter.shift(
                upscaled, [topshift, ctopshift], [leftshift, cleftshift]
            )

    return self._scaler.scale(upscaled, width, height, shift)
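`scale` reaches the target size through repeated 2x interpolation passes rather than a single resize: the width and height ratios are rounded up to `divw`/`divh`, and `log2` of those values gives the number of doublings per axis (any remainder is handled by the final `_scaler.scale` call). That pass-count computation, isolated as a pure-Python sketch (no VapourSynth required):

```python
from math import ceil, log2

def doubling_passes(src: int, target: int) -> int:
    # Round the size ratio up, then take log2, mirroring the
    # divw/divh and mult_x/mult_y lines of scale() above.
    div = ceil(target / src)
    return int(log2(div))
```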

shift_interpolate

shift_interpolate(
    clip: VideoNode, inter: VideoNode, double_y: bool
) -> ConstantFormatVideoNode

Applies a post-shifting interpolation operation to the interpolated clip.

Parameters:

  • clip

    (VideoNode) –

    Source clip.

  • inter

    (VideoNode) –

    Interpolated clip.

  • double_y

    (bool) –

    Whether the height has been doubled

Returns:

  • ConstantFormatVideoNode

    Shifted clip.

Source code
def shift_interpolate(
    self,
    clip: vs.VideoNode,
    inter: vs.VideoNode,
    double_y: bool,
) -> ConstantFormatVideoNode:
    """
    Applies a post-shifting interpolation operation to the interpolated clip.

    :param clip:        Source clip.
    :param inter:       Interpolated clip.
    :param double_y:    Whether the height has been doubled
    :return:            Shifted clip.
    """
    assert check_variable(clip, self.__class__)
    assert check_variable(inter, self.__class__)

    if not double_y and not self.drop_fields:
        shift = (self._shift * int(not self.field), 0)

        inter = self._scaler.scale(inter, clip.width, clip.height, shift)

        return self._post_interpolate(clip, inter, double_y)  # type: ignore[arg-type]

    return inter