types

Type Aliases:

  • BotFieldLeftShift

    Type alias for the bottom field's horizontal shift in pixels.

  • BotFieldTopShift

    Type alias for the bottom field's vertical shift in pixels.

  • Center

    Type alias for the center point of the sigmoid curve, determining the midpoint of the transition.

  • FieldShift

    Type alias for shifts in interlaced content.

  • LeftShift

    Type alias for horizontal shift in pixels (left).

  • Slope

    Type alias for the slope of the sigmoid curve, controlling the steepness of the transition.

  • TopFieldLeftShift

    Type alias for the top field's horizontal shift in pixels.

  • TopFieldTopShift

    Type alias for the top field's vertical shift in pixels.

  • TopShift

    Type alias for vertical shift in pixels (top).

Classes:

  • BorderHandling

    Border padding strategy used when a clip requires alignment padding.

  • SampleGridModel

    Sampling grid alignment model.

BotFieldLeftShift

BotFieldLeftShift = float

Type alias for the bottom field's horizontal shift in pixels.

Used when processing interlaced video to describe the horizontal shift of the bottom field.

BotFieldTopShift

BotFieldTopShift = float

Type alias for the bottom field's vertical shift in pixels.

Used when processing interlaced video to describe the vertical shift of the bottom field.

Center

Center = float

Type alias for the center point of the sigmoid curve, determining the midpoint of the transition.

FieldShift

Type alias for shifts in interlaced content.

Represents separate shifts for top and bottom fields.

LeftShift

LeftShift = float

Type alias for horizontal shift in pixels (left).

Represents the amount of horizontal offset when scaling a video.

Slope

Slope = float

Type alias for the slope of the sigmoid curve, controlling the steepness of the transition.

TopFieldLeftShift

TopFieldLeftShift = float

Type alias for the top field's horizontal shift in pixels.

Used when processing interlaced video to describe the horizontal shift of the top field.

TopFieldTopShift

TopFieldTopShift = float

Type alias for the top field's vertical shift in pixels.

Used when processing interlaced video to describe the vertical shift of the top field.

TopShift

TopShift = float

Type alias for vertical shift in pixels (top).

Represents the amount of vertical offset when scaling a video.
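
These per-axis aliases all resolve to plain float; they exist to make shift-related signatures self-documenting. A minimal, self-contained sketch of how such annotations read (make_shift is a hypothetical helper, not part of this module):

TopShift = float   # mirrors the aliases documented above; plain floats at runtime
LeftShift = float

def make_shift(top: TopShift = 0.0, left: LeftShift = 0.0) -> tuple[TopShift, LeftShift]:
    # Hypothetical helper: bundle a (top, left) subpixel shift.
    return top, left

shift = make_shift(0.25, 0.5)  # at runtime this is just a tuple of two floats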

BorderHandling

Bases: CustomIntEnum

Border padding strategy used when a clip requires alignment padding.

Methods:

  • from_param

    Return the enum value from a parameter.

  • pad_amount

    Return required padding.

  • prepare_clip

    Apply required padding and adjust shift.

  • value

Attributes:

  • MIRROR

    Assume the image was resized with mirror padding.

  • REPEAT

    Assume the image was resized with extend padding, where the outermost row was extended infinitely far.

  • ZERO

    Assume the image was resized with zero padding.

MIRROR class-attribute instance-attribute

MIRROR = 0

Assume the image was resized with mirror padding.

REPEAT class-attribute instance-attribute

REPEAT = 2

Assume the image was resized with extend padding, where the outermost row was extended infinitely far.

ZERO class-attribute instance-attribute

ZERO = 1

Assume the image was resized with zero padding.

from_param classmethod

from_param(value: Any, func_except: FuncExcept | None = None) -> Self

Return the enum value from a parameter.

Parameters:

  • value

    (Any) –

    Value to instantiate the enum class.

  • func_except

    (FuncExcept | None, default: None ) –

    Exception function.

Returns:

  • Self

    Enum value.

Raises:

  • NotFoundEnumValueError

    Variable not found in the given enum.

Source code in jetpytools/enums/base.py
@classmethod
def from_param(cls, value: Any, func_except: FuncExcept | None = None) -> Self:
    """
    Return the enum value from a parameter.

    Args:
        value: Value to instantiate the enum class.
        func_except: Exception function.

    Returns:
        Enum value.

    Raises:
        NotFoundEnumValue: Variable not found in the given enum.
    """
    func_except = func_except or cls.from_param

    try:
        return cls(value)
    except (ValueError, TypeError):
        pass

    if isinstance(func_except, tuple):
        func_name, var_name = func_except
    else:
        func_name, var_name = func_except, repr(cls)

    raise NotFoundEnumValueError(
        'The given value for "{var_name}" argument must be a valid {enum_name}, not "{value}"!\n'
        "Valid values are: [{readable_enum}].",
        func_name,
        var_name=var_name,
        enum_name=cls,
        value=value,
        readable_enum=(f"{name} ({value!r})" for name, value in cls.__members__.items()),
        reason=value,
    ) from None
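
A hedged usage sketch (assuming BorderHandling is importable from the vskernels package root): valid inputs are coerced to the matching member, while anything else raises NotFoundEnumValueError, tagged with the caller and argument name when func_except is given as a tuple.

from vskernels import BorderHandling

assert BorderHandling.from_param(1) is BorderHandling.ZERO
assert BorderHandling.from_param(BorderHandling.MIRROR) is BorderHandling.MIRROR

# An unknown value raises NotFoundEnumValueError; the (func_name, var_name)
# tuple makes the error message point at the offending call site.
try:
    BorderHandling.from_param(42, func_except=("my_resize", "border_handling"))
except Exception as exc:  # NotFoundEnumValueError, from jetpytools
    print(exc)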

pad_amount

pad_amount(
    clip: VideoNode,
    width: int,
    height: int,
    shift: tuple[TopShift, LeftShift],
    kernel_radius: int,
    src_width: float,
    src_height: float
) -> tuple[int, int, int, int]

Return required padding.

Parameters:

  • clip

    (VideoNode) –

    Input clip.

  • width

    (int) –

    Output width.

  • height

    (int) –

    Output height.

  • shift

    (tuple[TopShift, LeftShift]) –

    Current (top, left) shift.

  • kernel_radius

    (int) –

    Kernel radius.

  • src_width

    (float) –

    Width source region.

  • src_height

    (float) –

    Height source region.

Returns:

  • tuple[int, int, int, int]

    Padding amount as (left, right, top, bottom).

Source code in vskernels/types.py
def pad_amount(
    self,
    clip: vs.VideoNode,
    width: int,
    height: int,
    shift: tuple[TopShift, LeftShift],
    kernel_radius: int,
    src_width: float,
    src_height: float,
) -> tuple[int, int, int, int]:
    """
    Return required padding.

    Args:
        clip: Input clip.
        width: Output width.
        height: Output height.
        shift: Current (top, left) shift.
        kernel_radius: Kernel radius.
        src_width: Width source region.
        src_height: Height source region.

    Returns:
        Padding amount.
    """
    top_shift, left_shift = shift

    w_factor = kernel_radius * max(src_width / width, 1)
    left, right = (
        ceil((w_factor - left_shift) / 2) * 2**clip.format.subsampling_w,
        ceil((w_factor + left_shift) / 2) * 2**clip.format.subsampling_w,
    )

    h_factor = kernel_radius * max(src_height / height, 1)
    top, bottom = (
        ceil((h_factor - top_shift) / 2) * 2**clip.format.subsampling_h,
        ceil((h_factor + top_shift) / 2) * 2**clip.format.subsampling_h,
    )

    return (left, right, top, bottom)
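
The padding math above is easy to reproduce by hand. A pure-Python sketch of the same formula (no VapourSynth required; the 4:2:0 subsampling factors and the radius-2 kernel are assumed example values):

from math import ceil

def pad_amount_sketch(
    width: int,
    height: int,
    shift: tuple[float, float],
    kernel_radius: int,
    src_width: float,
    src_height: float,
    subsampling_w: int = 1,  # assumed 4:2:0 chroma subsampling => 1
    subsampling_h: int = 1,
) -> tuple[int, int, int, int]:
    top_shift, left_shift = shift

    # Horizontal padding, rounded up to the chroma subsampling grid.
    w_factor = kernel_radius * max(src_width / width, 1)
    left = ceil((w_factor - left_shift) / 2) * 2**subsampling_w
    right = ceil((w_factor + left_shift) / 2) * 2**subsampling_w

    # Vertical padding, same formula on the other axis.
    h_factor = kernel_radius * max(src_height / height, 1)
    top = ceil((h_factor - top_shift) / 2) * 2**subsampling_h
    bottom = ceil((h_factor + top_shift) / 2) * 2**subsampling_h

    return left, right, top, bottom

# Upscaling 1280x720 -> 1920x1080 with a radius-2 kernel and no shift:
# both factors are 2, so each side gets ceil(2 / 2) * 2 = 2 pixels of padding.
print(pad_amount_sketch(1920, 1080, (0.0, 0.0), 2, 1280, 720))  # (2, 2, 2, 2)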

prepare_clip

prepare_clip(
    clip: VideoNode,
    width: int,
    height: int,
    shift: tuple[TopShift, LeftShift],
    kernel_radius: int,
    **kwargs: Any
) -> tuple[VideoNode, tuple[TopShift, LeftShift]]

Apply required padding and adjust shift.

Parameters:

  • clip

    (VideoNode) –

    Input clip.

  • width

    (int) –

    Output width.

  • height

    (int) –

    Output height.

  • shift

    (tuple[TopShift, LeftShift]) –

    Current (top, left) shift.

  • kernel_radius

    (int) –

    Kernel radius.

  • **kwargs

    (Any, default: {} ) –

    Optional src_width/src_height.

Returns:

  • tuple[VideoNode, tuple[TopShift, LeftShift]]

    The padded clip and the updated (top, left) shift.

Source code in vskernels/types.py
def prepare_clip(
    self,
    clip: vs.VideoNode,
    width: int,
    height: int,
    shift: tuple[TopShift, LeftShift],
    kernel_radius: int,
    **kwargs: Any,
) -> tuple[vs.VideoNode, tuple[TopShift, LeftShift]]:
    """
    Apply required padding and adjust shift.

    Args:
        clip: Input clip.
        width: Output width.
        height: Output height.
        shift: Current (top, left) shift.
        kernel_radius: Kernel radius.
        **kwargs: Optional src_width/src_height.

    Returns:
        (padded clip, updated shift).
    """

    if self is BorderHandling.MIRROR:
        return (clip, shift)

    src_width = fallback(kwargs.get("src_width"), clip.width)
    src_height = fallback(kwargs.get("src_height"), clip.height)

    shift = kwargs.pop("src_top", shift[0]), kwargs.pop("src_left", shift[1])

    left, right, top, bottom = self.pad_amount(
        clip,
        width,
        height,
        shift,
        kernel_radius,
        src_width,
        src_height,
    )

    match self:
        case BorderHandling.ZERO:
            padded = padder.COLOR(clip, left, right, top, bottom)
        case BorderHandling.REPEAT:
            padded = padder.REPEAT(clip, left, right, top, bottom)
        case _:
            assert_never(self)

    shift = tuple(s + c for s, c in zip(shift, (top, left)))  # type: ignore

    return padded, shift
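
A hedged usage sketch of the method above, assuming a working VapourSynth environment and that BorderHandling is importable from the vskernels package root (the BlankClip stands in for any real source):

import vapoursynth as vs
from vskernels import BorderHandling

core = vs.core

# Stand-in source clip; any YUV 4:2:0 clip behaves the same way here.
clip = core.std.BlankClip(format=vs.YUV420P16, width=1280, height=720)

# Zero-pad the borders for a radius-2 kernel; the returned shift already
# accounts for the added rows/columns, so feed both to the actual resizer.
padded, shift = BorderHandling.ZERO.prepare_clip(
    clip, width=1920, height=1080, shift=(0.0, 0.0), kernel_radius=2
)

# MIRROR is a no-op here: the clip and shift are returned unchanged.
untouched, same_shift = BorderHandling.MIRROR.prepare_clip(clip, 1920, 1080, (0.0, 0.0), 2)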

value

value() -> int
Source code in jetpytools/enums/base.py
@enum_property
def value(self) -> int: ...

SampleGridModel

Bases: CustomIntEnum

Sampling grid alignment model.

While match edges will align the edges of the outermost pixels in the target image, match centers will instead align the centers of the outermost pixels.

Here's a visual example for a 3x1 image upsampled to 7x1:

  • Match edges:

    +-------------+-------------+-------------+
    |      ·      |      ·      |      ·      |
    +-------------+-------------+-------------+
    ↓                                         ↓
    +-----+-----+-----+-----+-----+-----+-----+
    |  ·  |  ·  |  ·  |  ·  |  ·  |  ·  |  ·  |
    +-----+-----+-----+-----+-----+-----+-----+
    

  • Match centers:

    +-----------------+-----------------+-----------------+
    |        ·        |        ·        |        ·        |
    +-----------------+-----------------+-----------------+
             ↓                                   ↓
          +-----+-----+-----+-----+-----+-----+-----+
          |  ·  |  ·  |  ·  |  ·  |  ·  |  ·  |  ·  |
          +-----+-----+-----+-----+-----+-----+-----+
    

For a more detailed explanation, refer to this page: https://entropymine.com/imageworsener/matching/.

The formulas for calculating the values used during desampling are simple (a short arithmetic sketch follows the list):

  • width: base_width * (target_width - 1) / (base_width - 1)
  • height: base_height * (target_height - 1) / (base_height - 1)
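
For example, treating 1280x720 as the base and 1920x1080 as the target, the adjusted values land slightly above the target size (a minimal arithmetic sketch of the formula, not library code):

base_width, target_width = 1280, 1920
base_height, target_height = 720, 1080

width = base_width * (target_width - 1) / (base_width - 1)      # ~1920.50
height = base_height * (target_height - 1) / (base_height - 1)  # ~1080.50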

Methods:

  • __call__

    Apply sampling model to sizes and shift.

  • for_dst

    Apply grid model using destination sizes.

  • for_src

    Apply grid model using source sizes.

  • from_param

    Return the enum value from a parameter.

  • value

Attributes:

  • MATCH_CENTERS

    Align pixel centers.

  • MATCH_EDGES

    Align edges.

MATCH_CENTERS class-attribute instance-attribute

MATCH_CENTERS = 1

Align pixel centers.

MATCH_EDGES class-attribute instance-attribute

MATCH_EDGES = 0

Align edges.

__call__

__call__(
    width: float,
    height: float,
    src_width: float,
    src_height: float,
    shift: tuple[float, float],
    kwargs: dict[str, Any]
) -> tuple[dict[str, Any], tuple[float, float]]

Apply sampling model to sizes and shift.

Parameters:

  • width

    (float) –

    Destination width.

  • height

    (float) –

    Destination height.

  • src_width

    (float) –

    Current source width.

  • src_height

    (float) –

    Current source height.

  • shift

    (tuple[float, float]) –

    Top, left sampling shift.

  • kwargs

    (dict[str, Any]) –

    Parameter dict to update.

Returns:

  • tuple[dict[str, Any], tuple[float, float]]

    The updated kwargs and the updated (top, left) shift.

Source code in vskernels/types.py
def __call__(
    self,
    width: float,
    height: float,
    src_width: float,
    src_height: float,
    shift: tuple[float, float],
    kwargs: dict[str, Any],
) -> tuple[dict[str, Any], tuple[float, float]]:
    """
    Apply sampling model to sizes and shift.

    Args:
        width: Destination width.
        height: Destination height.
        src_width: Current source width.
        src_height: Current source height.
        shift: Top, left sampling shift.
        kwargs: Parameter dict to update.

    Returns:
        (updated kwargs, updated shift).
    """

    if self is SampleGridModel.MATCH_CENTERS:
        src_width = src_width * (width - 1) / (src_width - 1)
        src_height = src_height * (height - 1) / (src_height - 1)

        shift = kwargs.pop("src_top", shift[0]), kwargs.pop("src_left", shift[1])

        kwargs.update(src_width=src_width, src_height=src_height)
        shift_x, shift_y, *_ = tuple(
            (x / 2 + sh for x, sh in zip(((height - src_height), (width - src_width)), shift))
        )
        shift = shift_x, shift_y

    return kwargs, shift
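
A hedged sketch of calling the model directly (assuming SampleGridModel is importable from the vskernels package root): under MATCH_CENTERS the kwargs dict gains the adjusted src_width/src_height and the shift is recentered, while MATCH_EDGES returns both untouched.

from vskernels import SampleGridModel

kwargs, shift = SampleGridModel.MATCH_CENTERS(
    1920, 1080,    # destination width/height
    1280, 720,     # current source width/height
    (0.0, 0.0),    # (top, left) shift
    {},            # parameter dict to update
)
# kwargs -> {'src_width': ~1920.50, 'src_height': ~1080.50}
# shift  -> (~-0.25, ~-0.25), i.e. half the size difference on each axis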

for_dst

for_dst(
    clip: VideoNode,
    width: int,
    height: int,
    shift: tuple[float, float],
    **kwargs: Any
) -> tuple[dict[str, Any], tuple[float, float]]

Apply grid model using destination sizes.

Parameters:

  • clip

    (VideoNode) –

    Source clip.

  • width

    (int) –

    Destination width.

  • height

    (int) –

    Destination height.

  • shift

    (tuple[float, float]) –

    Current shift.

  • **kwargs

    (Any, default: {} ) –

    Optional src_width/src_height.

Returns:

  • tuple[dict[str, Any], tuple[float, float]]

    The updated kwargs and the updated shift.

Source code in vskernels/types.py
def for_dst(
    self, clip: vs.VideoNode, width: int, height: int, shift: tuple[float, float], **kwargs: Any
) -> tuple[dict[str, Any], tuple[float, float]]:
    """
    Apply grid model using destination sizes.

    Args:
        clip: Source clip.
        width: Destination width.
        height: Destination height.
        shift: Current shift.
        **kwargs: Optional src_width/src_height.

    Returns:
        (updated kwargs, updated shift).
    """

    src_width = fallback(kwargs.get("src_width"), width)
    src_height = fallback(kwargs.get("src_height"), height)

    return self(src_width, src_height, width, height, shift, kwargs)

for_src

for_src(
    clip: VideoNode,
    width: int,
    height: int,
    shift: tuple[float, float],
    **kwargs: Any
) -> tuple[dict[str, Any], tuple[float, float]]

Apply grid model using source sizes.

Parameters:

  • clip

    (VideoNode) –

    Source clip (fallback for src dimensions).

  • width

    (int) –

    Source width.

  • height

    (int) –

    Source height.

  • shift

    (tuple[float, float]) –

    Current shift.

  • **kwargs

    (Any, default: {} ) –

    Optional overrides.

Returns:

  • tuple[dict[str, Any], tuple[float, float]]

    The updated kwargs and the updated shift.

Source code in vskernels/types.py
def for_src(
    self, clip: vs.VideoNode, width: int, height: int, shift: tuple[float, float], **kwargs: Any
) -> tuple[dict[str, Any], tuple[float, float]]:
    """
    Apply grid model using source sizes.

    Args:
        clip: Source clip (fallback for src dimensions).
        width: Source width.
        height: Source height.
        shift: Current shift.
        **kwargs: Optional overrides.

    Returns:
        (updated kwargs, updated shift).
    """

    src_width = fallback(kwargs.get("src_width"), clip.width)
    src_height = fallback(kwargs.get("src_height"), clip.height)

    return self(width, height, src_width, src_height, shift, kwargs)
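
A hedged sketch of the typical descale-style use, assuming a VapourSynth environment and package-root imports: a clip rendered at 1920x1080 is to be descaled to a 1280x720 base, and for_src derives the src_width/src_height and shift that should accompany that request.

import vapoursynth as vs
from vskernels import SampleGridModel

core = vs.core
clip = core.std.BlankClip(format=vs.YUV420P16, width=1920, height=1080)

# width/height are the source (base) sizes; src_width/src_height fall back
# to the clip's own dimensions when not passed explicitly.
kwargs, shift = SampleGridModel.MATCH_CENTERS.for_src(clip, 1280, 720, (0.0, 0.0))
# kwargs -> {'src_width': ~1279.67, 'src_height': ~719.67}
# shift  -> (~0.167, ~0.167)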

from_param classmethod

from_param(value: Any, func_except: FuncExcept | None = None) -> Self

Return the enum value from a parameter.

Parameters:

  • value

    (Any) –

    Value to instantiate the enum class.

  • func_except

    (FuncExcept | None, default: None ) –

    Exception function.

Returns:

  • Self

    Enum value.

Raises:

  • NotFoundEnumValueError

    Variable not found in the given enum.

Source code in jetpytools/enums/base.py
@classmethod
def from_param(cls, value: Any, func_except: FuncExcept | None = None) -> Self:
    """
    Return the enum value from a parameter.

    Args:
        value: Value to instantiate the enum class.
        func_except: Exception function.

    Returns:
        Enum value.

    Raises:
        NotFoundEnumValue: Variable not found in the given enum.
    """
    func_except = func_except or cls.from_param

    try:
        return cls(value)
    except (ValueError, TypeError):
        pass

    if isinstance(func_except, tuple):
        func_name, var_name = func_except
    else:
        func_name, var_name = func_except, repr(cls)

    raise NotFoundEnumValueError(
        'The given value for "{var_name}" argument must be a valid {enum_name}, not "{value}"!\n'
        "Valid values are: [{readable_enum}].",
        func_name,
        var_name=var_name,
        enum_name=cls,
        value=value,
        readable_enum=(f"{name} ({value!r})" for name, value in cls.__members__.items()),
        reason=value,
    ) from None

value

value() -> int
Source code in jetpytools/enums/base.py
@enum_property
def value(self) -> int: ...