types ¶
Type Aliases:

- BotFieldLeftShift – Type alias for the bottom field's horizontal shift in pixels.
- BotFieldTopShift – Type alias for the bottom field's vertical shift in pixels.
- Center – Type alias for the center point of the sigmoid curve, determining the midpoint of the transition.
- FieldShift – Type alias for shifts in interlaced content.
- LeftShift – Type alias for horizontal shift in pixels (left).
- Slope – Type alias for the slope of the sigmoid curve, controlling the steepness of the transition.
- TopFieldLeftShift – Type alias for the top field's horizontal shift in pixels.
- TopFieldTopShift – Type alias for the top field's vertical shift in pixels.
- TopShift – Type alias for vertical shift in pixels (top).

Classes:

- BorderHandling – Border padding strategy used when a clip requires alignment padding.
- SampleGridModel – Sampling grid alignment model.
BotFieldLeftShift ¶
BotFieldLeftShift = float
Type alias for the bottom field's horizontal shift in pixels.
Used when processing interlaced video to describe the horizontal shift of the bottom field.
BotFieldTopShift ¶
BotFieldTopShift = float
Type alias for the bottom field's vertical shift in pixels.
Used when processing interlaced video to describe the vertical shift of the bottom field.
Center ¶
Center = float
Type alias for the center point of the sigmoid curve, determining the midpoint of the transition.
FieldShift ¶
FieldShift = tuple[
TopShift | tuple[TopFieldTopShift, BotFieldTopShift],
LeftShift | tuple[TopFieldLeftShift, BotFieldLeftShift],
]
Type alias for shifts in interlaced content.
Represents separate shifts for top and bottom fields.
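For illustration, values matching this alias can look like the following; the numbers are hypothetical:

# Same vertical/horizontal shift applied to both fields:
shift_uniform = (0.5, 0.25)             # (TopShift, LeftShift)

# Distinct vertical shift per field, shared horizontal shift:
shift_per_field = ((0.25, -0.25), 0.0)  # ((TopFieldTopShift, BotFieldTopShift), LeftShift)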
LeftShift ¶
LeftShift = float
Type alias for horizontal shift in pixels (left).
Represents the amount of horizontal offset when scaling a video.
Slope ¶
Slope = float
Type alias for the slope of the sigmoid curve, controlling the steepness of the transition.
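For context only, a sigmoid of this kind is commonly written as below; this is a generic illustration of how Center and Slope shape the curve, not the exact normalization used by the library:

import math

def sigmoid_sketch(x: float, center: float = 0.5, slope: float = 6.5) -> float:
    # Generic sigmoid: crosses 0.5 at `center`; a larger `slope` gives a steeper transition.
    return 1.0 / (1.0 + math.exp(slope * (center - x)))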
TopFieldLeftShift ¶
TopFieldLeftShift = float
Type alias for the top field's horizontal shift in pixels.
Used when processing interlaced video to describe the horizontal shift of the top field.
TopFieldTopShift ¶
TopFieldTopShift = float
Type alias for the top field's vertical shift in pixels.
Used when processing interlaced video to describe the vertical shift of the top field.
TopShift ¶
TopShift = float
Type alias for vertical shift in pixels (top).
Represents the amount of vertical offset when scaling a video.
BorderHandling ¶
Bases: CustomIntEnum
Border padding strategy used when a clip requires alignment padding.
Methods:

- from_param – Return the enum value from a parameter.
- pad_amount – Return required padding for one dimension.
- prepare_clip – Apply required padding and adjust shift.
- value

Attributes:

- MIRROR – Assume the image was resized with mirror padding.
- REPEAT – Assume the image was resized with extend padding, where the outermost row was extended infinitely far.
- ZERO – Assume the image was resized with zero padding.
MIRROR class-attribute instance-attribute ¶
MIRROR = 0
Assume the image was resized with mirror padding.
REPEAT class-attribute instance-attribute ¶
REPEAT = 2
Assume the image was resized with extend padding, where the outermost row was extended infinitely far.
ZERO class-attribute instance-attribute ¶
ZERO = 1
Assume the image was resized with zero padding.
from_param classmethod ¶
from_param(value: Any, func_except: FuncExcept | None = None) -> Self
Return the enum value from a parameter.
Parameters:

- value (Any) – Value to instantiate the enum class.
- func_except (FuncExcept | None, default: None) – Exception function.

Returns:

- Self – Enum value.

Raises:

- NotFoundEnumValue – Variable not found in the given enum.
Source code in jetpytools/enums/base.py
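A brief usage sketch; passing an existing member or a raw value is assumed to resolve to the matching enum member (illustrative only):

from vskernels import BorderHandling

bh = BorderHandling.from_param(BorderHandling.MIRROR)  # an existing member resolves to itself
bh = BorderHandling.from_param(0)                      # a raw value resolves to BorderHandling.MIRROR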
pad_amount cached ¶
pad_amount(size: int, min_amount: int = 2) -> int
Return required padding for one dimension.
MIRROR always returns zero. Other modes pad to an 8-pixel boundary.
Parameters:

- size (int) – Size of the dimension to pad.
- min_amount (int, default: 2) – Minimum padding before alignment.

Returns:

- int – Padding amount.
Source code in vskernels/types.py
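The stated rule can be sketched as follows; this restates the documented behaviour and is not the library's exact arithmetic:

def pad_amount_sketch(size: int, min_amount: int = 2, mirror: bool = False) -> int:
    # MIRROR needs no padding at all.
    if mirror:
        return 0
    # Round size + min_amount up to the next multiple of 8 and return the difference.
    return ((size + min_amount + 7) // 8) * 8 - size

# e.g. pad_amount_sketch(1078) == 2 and pad_amount_sketch(1080) == 8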
prepare_clip ¶
prepare_clip(
clip: VideoNode,
min_pad: int = 2,
shift: tuple[TopShift, LeftShift] = (0, 0),
) -> tuple[VideoNode, tuple[TopShift, LeftShift]]
Apply required padding and adjust shift.
Parameters:

- clip (VideoNode) – Input clip.
- min_pad (int, default: 2) – Minimum padding before alignment.
- shift (tuple[TopShift, LeftShift], default: (0, 0)) – Current (top, left) shift.

Returns:

- tuple[VideoNode, tuple[TopShift, LeftShift]] – Padded clip and adjusted shift.
Source code in vskernels/types.py
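A hypothetical usage sketch in a VapourSynth environment; the blank clip and its dimensions are only there to make the snippet self-contained:

import vapoursynth as vs
from vskernels import BorderHandling

core = vs.core
clip = core.std.BlankClip(format=vs.YUV420P16, width=1918, height=1078)

# Pad the clip as required by the border mode and get the shift compensated
# for the added borders, ready to hand to the scaler.
padded, (top_shift, left_shift) = BorderHandling.REPEAT.prepare_clip(
    clip, min_pad=2, shift=(0.0, 0.0)
)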
SampleGridModel ¶
Bases: CustomIntEnum
Sampling grid alignment model.
While match edges will align the edges of the outermost pixels in the target image, match centers will instead align the centers of the outermost pixels.
Here's a visual example for a 3x1 image upsampled to 7x1:
- Match edges:

  +-------------+-------------+-------------+
  |      ·      |      ·      |      ·      |
  +-------------+-------------+-------------+
  ↓                                         ↓
  +-----+-----+-----+-----+-----+-----+-----+
  |  ·  |  ·  |  ·  |  ·  |  ·  |  ·  |  ·  |
  +-----+-----+-----+-----+-----+-----+-----+

- Match centers:

  +-----------------+-----------------+-----------------+
  |        ·        |        ·        |        ·        |
  +-----------------+-----------------+-----------------+
           ↓                                   ↓
        +-----+-----+-----+-----+-----+-----+-----+
        |  ·  |  ·  |  ·  |  ·  |  ·  |  ·  |  ·  |
        +-----+-----+-----+-----+-----+-----+-----+
For a more detailed explanation, refer to this page: https://entropymine.com/imageworsener/matching/.
The formula for calculating values we can use during desampling is simple:

- width: base_width * (target_width - 1) / (base_width - 1)
- height: base_height * (target_height - 1) / (base_height - 1)
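As a quick numeric check against the 3x1 → 7x1 example above, a direct transcription of the documented formula gives (the function name is just for illustration):

def match_centers_src_size(base: int, target: int) -> float:
    # width/height formula as documented above.
    return base * (target - 1) / (base - 1)

src_width = match_centers_src_size(3, 7)  # 3 * 6 / 2 == 9.0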
Methods:

- __call__ – Apply sampling model to sizes and shift.
- for_dst – Apply grid model using destination sizes.
- for_src – Apply grid model using source sizes.
- from_param – Return the enum value from a parameter.
- value

Attributes:

- MATCH_CENTERS – Align pixel centers.
- MATCH_EDGES – Align edges.
__call__ ¶
__call__(
width: int,
height: int,
src_width: float,
src_height: float,
shift: tuple[float, float],
kwargs: dict[str, Any],
) -> tuple[dict[str, Any], tuple[float, float]]
Apply sampling model to sizes and shift.
Parameters:

- width (int) – Destination width.
- height (int) – Destination height.
- src_width (float) – Current source width.
- src_height (float) – Current source height.
- shift (tuple[float, float]) – (x, y) sampling shift.
- kwargs (dict[str, Any]) – Parameter dict to update.

Returns:

- tuple[dict[str, Any], tuple[float, float]] – Updated parameter dict and adjusted shift.
Source code in vskernels/types.py
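A hypothetical sketch of calling the model directly; which keys end up in the returned dict is determined by the library, so the comments only describe the documented intent:

from vskernels import SampleGridModel

# width, height, src_width, src_height, (x, y) shift, kwargs to update (illustrative sizes):
kwargs, shift = SampleGridModel.MATCH_CENTERS(1280, 720, 960.0, 540.0, (0.0, 0.0), {})

# `kwargs` now holds the updated parameters to merge into the resize call,
# and `shift` is the adjusted sampling shift.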
for_dst ¶
for_dst(
clip: VideoNode,
width: int,
height: int,
shift: tuple[float, float],
**kwargs: Any
) -> tuple[dict[str, Any], tuple[float, float]]
Apply grid model using destination sizes.
Parameters:

- clip (VideoNode) – Source clip.
- width (int) – Destination width.
- height (int) – Destination height.
- shift (tuple[float, float]) – Current shift.
- **kwargs (Any, default: {}) – Optional src_width/src_height.

Returns:

- tuple[dict[str, Any], tuple[float, float]] – Updated parameter dict and adjusted shift.
Source code in vskernels/types.py
for_src ¶
for_src(
clip: VideoNode,
width: int,
height: int,
shift: tuple[float, float],
**kwargs: Any
) -> tuple[dict[str, Any], tuple[float, float]]
Apply grid model using source sizes.
Parameters:

- clip (VideoNode) – Source clip (fallback for src dimensions).
- width (int) – Source width.
- height (int) – Source height.
- shift (tuple[float, float]) – Current shift.
- **kwargs (Any, default: {}) – Optional overrides.

Returns:

- tuple[dict[str, Any], tuple[float, float]] – Updated parameter dict and adjusted shift.
Source code in vskernels/types.py
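A hypothetical usage sketch for a descale-style workflow; the blank clip stands in for a real source and the returned values are simply forwarded to the scaler:

import vapoursynth as vs
from vskernels import SampleGridModel

core = vs.core
clip = core.std.BlankClip(format=vs.YUV420P16, width=1920, height=1080)

# Parameters for working against a 1280x720 source grid with match-centers alignment.
kwargs, shift = SampleGridModel.MATCH_CENTERS.for_src(clip, 1280, 720, (0.0, 0.0))
# Merge `kwargs` into the (de)scale call's keyword arguments and pass `shift`
# as the sampling offset.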
from_param classmethod ¶
from_param(value: Any, func_except: FuncExcept | None = None) -> Self
Return the enum value from a parameter.
Parameters:

- value (Any) – Value to instantiate the enum class.
- func_except (FuncExcept | None, default: None) – Exception function.

Returns:

- Self – Enum value.

Raises:

- NotFoundEnumValue – Variable not found in the given enum.
Source code in jetpytools/enums/base.py