clips

ProcessVariableClip

ProcessVariableClip(
    clip: VideoNode,
    out_dim: tuple[int, int] | Literal[False] | None = None,
    out_fmt: int | VideoFormat | Literal[False] | None = None,
    cache_size: int = 10,
)

Bases: DynamicClipsCache[T]

A helper class for processing variable format/resolution clips.

Parameters:

  • clip

    (VideoNode) –

    Clip to process.

  • out_dim

    (tuple[int, int] | Literal[False] | None, default: None ) –

    Output dimension.

  • out_fmt

    (int | VideoFormat | Literal[False] | None, default: None ) –

    Output format.

  • cache_size

    (int, default: 10 ) –

    The maximum number of items allowed in the cache. Defaults to 10.

Methods:

  • eval_clip
  • from_clip

    Process a variable format/resolution clip.

  • from_func

    Process a variable format/resolution clip with a given function.

  • get_clip
  • get_key

    Generate a unique key based on the node or frame.

  • normalize

    Normalize the given node to the format/resolution specified by the unique key cast_to.

  • process

    Process the given clip.

Source code in vstools/functions/clips.py
def __init__(
    self,
    clip: vs.VideoNode,
    out_dim: tuple[int, int] | Literal[False] | None = None,
    out_fmt: int | vs.VideoFormat | Literal[False] | None = None,
    cache_size: int = 10,
) -> None:
    """
    Args:
        clip: Clip to process.
        out_dim: Output dimension.
        out_fmt: Output format.
        cache_size: The maximum number of items allowed in the cache. Defaults to 10.
    """
    bk_args = KwargsT(length=clip.num_frames, keep=True, varformat=None)

    if out_dim is None:
        out_dim = (clip.width, clip.height)

    if out_fmt is None:
        out_fmt = clip.format or False

    if out_dim is not False and 0 in out_dim:
        out_dim = False

    if out_dim is False:
        bk_args.update(width=8, height=8, varsize=True)
    else:
        bk_args.update(width=out_dim[0], height=out_dim[1])

    if out_fmt is False:
        bk_args.update(format=vs.GRAY8, varformat=True)
    else:
        bk_args.update(format=out_fmt if isinstance(out_fmt, int) else out_fmt.id)

    super().__init__(cache_size)

    self.clip = clip
    self.out = vs.core.std.BlankClip(clip, **bk_args)

cache_size instance-attribute

cache_size = cache_size

clip instance-attribute

clip = clip

out instance-attribute

out = BlankClip(clip, **bk_args)

eval_clip

eval_clip() -> VideoNode
Source code in vstools/functions/clips.py
def eval_clip(self) -> vs.VideoNode:
    if self.out.format and (0 not in (self.out.width, self.out.height)):
        try:
            return self.get_clip(self.get_key(self.clip))
        except Exception:
            ...

    return vs.core.std.FrameEval(self.out, lambda n, f: self[self.get_key(f)], self.clip)
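Conceptually, the `FrameEval` callback above reduces each incoming frame to a hashable key and serves a per-key processed clip from a bounded cache. The following is a minimal, VapourSynth-free sketch of that dispatch pattern (the `KeyedClipCache` class is illustrative; the real bookkeeping lives in `DynamicClipsCache`):

```python
from collections.abc import Callable, Hashable


class KeyedClipCache:
    """Sketch of the per-key dispatch behind eval_clip.

    Each frame is reduced to a key (resolution and/or format); the
    processed result for that key is built once and reused until evicted.
    """

    def __init__(self, build: Callable[[Hashable], object], cache_size: int = 10) -> None:
        self.build = build
        self.cache_size = cache_size
        self._cache: dict[Hashable, object] = {}

    def __getitem__(self, key: Hashable) -> object:
        if key not in self._cache:
            if len(self._cache) >= self.cache_size:
                # Evict the oldest entry (dicts preserve insertion order).
                self._cache.pop(next(iter(self._cache)))
            self._cache[key] = self.build(key)
        return self._cache[key]
```

With a key of `(width, height)`, every frame at an already-seen resolution reuses the cached result instead of rebuilding the processing chain.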

from_clip classmethod

from_clip(clip: VideoNode) -> VideoNode

Process a variable format/resolution clip.

Parameters:

  • clip

    (VideoNode) –

    Clip to process.

Returns:

  • VideoNode

    Processed clip.

Source code in vstools/functions/clips.py
@classmethod
def from_clip(cls, clip: vs.VideoNode) -> vs.VideoNode:
    """
    Process a variable format/resolution clip.

    Args:
        clip: Clip to process.

    Returns:
        Processed clip.
    """
    return cls(clip).eval_clip()

from_func classmethod

from_func(
    clip: VideoNode,
    func: Callable[[VideoNode], VideoNode],
    out_dim: tuple[int, int] | Literal[False] | None = None,
    out_fmt: int | VideoFormat | Literal[False] | None = None,
    cache_size: int = 10,
) -> VideoNode

Process a variable format/resolution clip with a given function.

Parameters:

  • clip

    (VideoNode) –

    Clip to process.

  • func

    (Callable[[VideoNode], VideoNode]) –

    Function that takes and returns a single VideoNode.

  • out_dim

    (tuple[int, int] | Literal[False] | None, default: None ) –

    Output dimension.

  • out_fmt

    (int | VideoFormat | Literal[False] | None, default: None ) –

    Output format.

  • cache_size

    (int, default: 10 ) –

    The maximum number of VideoNode objects allowed in the cache. Defaults to 10.

Returns:

  • VideoNode

    Processed variable clip.

Source code in vstools/functions/clips.py
@classmethod
def from_func(
    cls,
    clip: vs.VideoNode,
    func: Callable[[vs.VideoNode], vs.VideoNode],
    out_dim: tuple[int, int] | Literal[False] | None = None,
    out_fmt: int | vs.VideoFormat | Literal[False] | None = None,
    cache_size: int = 10,
) -> vs.VideoNode:
    """
    Process a variable format/resolution clip with a given function.

    Args:
        clip: Clip to process.
        func: Function that takes and returns a single VideoNode.
        out_dim: Output dimension.
        out_fmt: Output format.
        cache_size: The maximum number of VideoNode objects allowed in the cache. Defaults to 10.

    Returns:
        Processed variable clip.
    """

    def process(self: ProcessVariableClip[T], clip: vs.VideoNode) -> vs.VideoNode:
        return func(clip)

    ns = cls.__dict__.copy()
    ns[process.__name__] = process

    return type(cls.__name__, cls.__bases__, ns)(clip, out_dim, out_fmt, cache_size).eval_clip()
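The last line of `from_func` builds a throwaway class with the user function injected as `process`. That `type()` trick can be isolated as a small helper (the `with_method` name is hypothetical, shown only to illustrate the mechanism):

```python
from collections.abc import Callable


def with_method(cls: type, name: str, func: Callable) -> type:
    """Sketch of the from_func trick: clone a class and inject a method.

    cls.__dict__ is copied into a fresh namespace so the original class
    stays untouched; type() then builds a one-off lookalike class with
    the given function bound in under `name`.
    """
    ns = dict(cls.__dict__)
    ns[name] = func
    return type(cls.__name__, cls.__bases__, ns)
```

The clone shares the original's name and bases, so isinstance-style expectations on the bases still hold while `process` is replaced.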

get_clip

get_clip(key: T) -> VideoNode
Source code in vstools/functions/clips.py
def get_clip(self, key: T) -> vs.VideoNode:
    return self.process(self.normalize(self.clip, key))

get_key abstractmethod

get_key(frame: VideoNode | VideoFrame) -> T

Generate a unique key based on the node or frame. This key will be used to temporarily assert a resolution and format for the clip to process.

Parameters:

  • frame

    (VideoNode | VideoFrame) –

    Node or frame from which the unique key is generated.

Returns:

  • T

    Unique identifier.

Source code in vstools/functions/clips.py
@abstractmethod
def get_key(self, frame: vs.VideoNode | vs.VideoFrame) -> T:
    """
    Generate a unique key based on the node or frame.
    This key will be used to temporarily assert a resolution and format for the clip to process.

    Args:
        frame: Node or frame from which the unique key is generated.

    Returns:
        Unique identifier.
    """

normalize abstractmethod

normalize(clip: VideoNode, cast_to: T) -> VideoNode

Normalize the given node to the format/resolution specified by the unique key cast_to.

Parameters:

  • clip

    (VideoNode) –

    Clip to normalize.

  • cast_to

    (T) –

    The target resolution or format to which the clip should be cast or normalized.

Returns:

  • VideoNode

    Normalized clip.

Source code in vstools/functions/clips.py
@abstractmethod
def normalize(self, clip: vs.VideoNode, cast_to: T) -> vs.VideoNode:
    """
    Normalize the given node to the format/resolution specified by the unique key `cast_to`.

    Args:
        clip: Clip to normalize.
        cast_to: The target resolution or format to which the clip should be cast or normalized.

    Returns:
        Normalized clip.
    """

process

process(clip: VideoNode) -> VideoNode

Process the given clip.

Parameters:

  • clip

    (VideoNode) –

    Clip to process.

Returns:

  • VideoNode

    Processed clip.

Source code in vstools/functions/clips.py
def process(self, clip: vs.VideoNode) -> vs.VideoNode:
    """
    Process the given clip.

    Args:
        clip: Clip to process.

    Returns:
        Processed clip.
    """
    return clip

ProcessVariableFormatClip

ProcessVariableFormatClip(
    clip: VideoNode,
    out_dim: tuple[int, int] | Literal[False] | None = None,
    out_fmt: int | VideoFormat | Literal[False] | None = None,
    cache_size: int = 10,
)

Bases: ProcessVariableClip[VideoFormat]

A helper class for processing variable format clips.

Parameters:

  • clip

    (VideoNode) –

    Clip to process.

  • out_dim

    (tuple[int, int] | Literal[False] | None, default: None ) –

    Output dimension.

  • out_fmt

    (int | VideoFormat | Literal[False] | None, default: None ) –

    Output format.

  • cache_size

    (int, default: 10 ) –

    The maximum number of items allowed in the cache. Defaults to 10.

Source code in vstools/functions/clips.py
def __init__(
    self,
    clip: vs.VideoNode,
    out_dim: tuple[int, int] | Literal[False] | None = None,
    out_fmt: int | vs.VideoFormat | Literal[False] | None = None,
    cache_size: int = 10,
) -> None:
    """
    Args:
        clip: Clip to process.
        out_dim: Output dimension.
        out_fmt: Output format.
        cache_size: The maximum number of items allowed in the cache. Defaults to 10.
    """
    bk_args = KwargsT(length=clip.num_frames, keep=True, varformat=None)

    if out_dim is None:
        out_dim = (clip.width, clip.height)

    if out_fmt is None:
        out_fmt = clip.format or False

    if out_dim is not False and 0 in out_dim:
        out_dim = False

    if out_dim is False:
        bk_args.update(width=8, height=8, varsize=True)
    else:
        bk_args.update(width=out_dim[0], height=out_dim[1])

    if out_fmt is False:
        bk_args.update(format=vs.GRAY8, varformat=True)
    else:
        bk_args.update(format=out_fmt if isinstance(out_fmt, int) else out_fmt.id)

    super().__init__(cache_size)

    self.clip = clip
    self.out = vs.core.std.BlankClip(clip, **bk_args)

cache_size instance-attribute

cache_size = cache_size

clip instance-attribute

clip = clip

out instance-attribute

out = BlankClip(clip, **bk_args)

eval_clip

eval_clip() -> VideoNode
Source code in vstools/functions/clips.py
def eval_clip(self) -> vs.VideoNode:
    if self.out.format and (0 not in (self.out.width, self.out.height)):
        try:
            return self.get_clip(self.get_key(self.clip))
        except Exception:
            ...

    return vs.core.std.FrameEval(self.out, lambda n, f: self[self.get_key(f)], self.clip)

from_clip classmethod

from_clip(clip: VideoNode) -> VideoNode

Process a variable format/resolution clip.

Parameters:

  • clip

    (VideoNode) –

    Clip to process.

Returns:

  • VideoNode

    Processed clip.

Source code in vstools/functions/clips.py
@classmethod
def from_clip(cls, clip: vs.VideoNode) -> vs.VideoNode:
    """
    Process a variable format/resolution clip.

    Args:
        clip: Clip to process.

    Returns:
        Processed clip.
    """
    return cls(clip).eval_clip()

from_func classmethod

from_func(
    clip: VideoNode,
    func: Callable[[VideoNode], VideoNode],
    out_dim: tuple[int, int] | Literal[False] | None = None,
    out_fmt: int | VideoFormat | Literal[False] | None = None,
    cache_size: int = 10,
) -> VideoNode

Process a variable format/resolution clip with a given function.

Parameters:

  • clip

    (VideoNode) –

    Clip to process.

  • func

    (Callable[[VideoNode], VideoNode]) –

    Function that takes and returns a single VideoNode.

  • out_dim

    (tuple[int, int] | Literal[False] | None, default: None ) –

    Output dimension.

  • out_fmt

    (int | VideoFormat | Literal[False] | None, default: None ) –

    Output format.

  • cache_size

    (int, default: 10 ) –

    The maximum number of VideoNode objects allowed in the cache. Defaults to 10.

Returns:

  • VideoNode

    Processed variable clip.

Source code in vstools/functions/clips.py
@classmethod
def from_func(
    cls,
    clip: vs.VideoNode,
    func: Callable[[vs.VideoNode], vs.VideoNode],
    out_dim: tuple[int, int] | Literal[False] | None = None,
    out_fmt: int | vs.VideoFormat | Literal[False] | None = None,
    cache_size: int = 10,
) -> vs.VideoNode:
    """
    Process a variable format/resolution clip with a given function.

    Args:
        clip: Clip to process.
        func: Function that takes and returns a single VideoNode.
        out_dim: Output dimension.
        out_fmt: Output format.
        cache_size: The maximum number of VideoNode objects allowed in the cache. Defaults to 10.

    Returns:
        Processed variable clip.
    """

    def process(self: ProcessVariableClip[T], clip: vs.VideoNode) -> vs.VideoNode:
        return func(clip)

    ns = cls.__dict__.copy()
    ns[process.__name__] = process

    return type(cls.__name__, cls.__bases__, ns)(clip, out_dim, out_fmt, cache_size).eval_clip()

get_clip

get_clip(key: T) -> VideoNode
Source code in vstools/functions/clips.py
def get_clip(self, key: T) -> vs.VideoNode:
    return self.process(self.normalize(self.clip, key))

get_key

get_key(frame: VideoNode | VideoFrame) -> VideoFormat
Source code in vstools/functions/clips.py
def get_key(self, frame: vs.VideoNode | vs.VideoFrame) -> vs.VideoFormat:
    assert frame.format
    return frame.format

normalize

normalize(clip: VideoNode, cast_to: VideoFormat) -> VideoNode
Source code in vstools/functions/clips.py
def normalize(self, clip: vs.VideoNode, cast_to: vs.VideoFormat) -> vs.VideoNode:
    normalized = vs.core.resize.Point(vs.core.std.RemoveFrameProps(clip), format=cast_to.id)
    return vs.core.std.CopyFrameProps(normalized, clip)

process

process(clip: VideoNode) -> VideoNode

Process the given clip.

Parameters:

  • clip

    (VideoNode) –

    Clip to process.

Returns:

  • VideoNode

    Processed clip.

Source code in vstools/functions/clips.py
def process(self, clip: vs.VideoNode) -> vs.VideoNode:
    """
    Process the given clip.

    Args:
        clip: Clip to process.

    Returns:
        Processed clip.
    """
    return clip

ProcessVariableResClip

ProcessVariableResClip(
    clip: VideoNode,
    out_dim: tuple[int, int] | Literal[False] | None = None,
    out_fmt: int | VideoFormat | Literal[False] | None = None,
    cache_size: int = 10,
)

Bases: ProcessVariableClip[tuple[int, int]]

A helper class for processing variable resolution clips.

Parameters:

  • clip

    (VideoNode) –

    Clip to process.

  • out_dim

    (tuple[int, int] | Literal[False] | None, default: None ) –

    Output dimension.

  • out_fmt

    (int | VideoFormat | Literal[False] | None, default: None ) –

    Output format.

  • cache_size

    (int, default: 10 ) –

    The maximum number of items allowed in the cache. Defaults to 10.

Source code in vstools/functions/clips.py
def __init__(
    self,
    clip: vs.VideoNode,
    out_dim: tuple[int, int] | Literal[False] | None = None,
    out_fmt: int | vs.VideoFormat | Literal[False] | None = None,
    cache_size: int = 10,
) -> None:
    """
    Args:
        clip: Clip to process.
        out_dim: Output dimension.
        out_fmt: Output format.
        cache_size: The maximum number of items allowed in the cache. Defaults to 10.
    """
    bk_args = KwargsT(length=clip.num_frames, keep=True, varformat=None)

    if out_dim is None:
        out_dim = (clip.width, clip.height)

    if out_fmt is None:
        out_fmt = clip.format or False

    if out_dim is not False and 0 in out_dim:
        out_dim = False

    if out_dim is False:
        bk_args.update(width=8, height=8, varsize=True)
    else:
        bk_args.update(width=out_dim[0], height=out_dim[1])

    if out_fmt is False:
        bk_args.update(format=vs.GRAY8, varformat=True)
    else:
        bk_args.update(format=out_fmt if isinstance(out_fmt, int) else out_fmt.id)

    super().__init__(cache_size)

    self.clip = clip
    self.out = vs.core.std.BlankClip(clip, **bk_args)

cache_size instance-attribute

cache_size = cache_size

clip instance-attribute

clip = clip

out instance-attribute

out = BlankClip(clip, **bk_args)

eval_clip

eval_clip() -> VideoNode
Source code in vstools/functions/clips.py
def eval_clip(self) -> vs.VideoNode:
    if self.out.format and (0 not in (self.out.width, self.out.height)):
        try:
            return self.get_clip(self.get_key(self.clip))
        except Exception:
            ...

    return vs.core.std.FrameEval(self.out, lambda n, f: self[self.get_key(f)], self.clip)

from_clip classmethod

from_clip(clip: VideoNode) -> VideoNode

Process a variable format/resolution clip.

Parameters:

  • clip

    (VideoNode) –

    Clip to process.

Returns:

  • VideoNode

    Processed clip.

Source code in vstools/functions/clips.py
@classmethod
def from_clip(cls, clip: vs.VideoNode) -> vs.VideoNode:
    """
    Process a variable format/resolution clip.

    Args:
        clip: Clip to process.

    Returns:
        Processed clip.
    """
    return cls(clip).eval_clip()

from_func classmethod

from_func(
    clip: VideoNode,
    func: Callable[[VideoNode], VideoNode],
    out_dim: tuple[int, int] | Literal[False] | None = None,
    out_fmt: int | VideoFormat | Literal[False] | None = None,
    cache_size: int = 10,
) -> VideoNode

Process a variable format/resolution clip with a given function.

Parameters:

  • clip

    (VideoNode) –

    Clip to process.

  • func

    (Callable[[VideoNode], VideoNode]) –

    Function that takes and returns a single VideoNode.

  • out_dim

    (tuple[int, int] | Literal[False] | None, default: None ) –

    Output dimension.

  • out_fmt

    (int | VideoFormat | Literal[False] | None, default: None ) –

    Output format.

  • cache_size

    (int, default: 10 ) –

    The maximum number of VideoNode objects allowed in the cache. Defaults to 10.

Returns:

  • VideoNode

    Processed variable clip.

Source code in vstools/functions/clips.py
@classmethod
def from_func(
    cls,
    clip: vs.VideoNode,
    func: Callable[[vs.VideoNode], vs.VideoNode],
    out_dim: tuple[int, int] | Literal[False] | None = None,
    out_fmt: int | vs.VideoFormat | Literal[False] | None = None,
    cache_size: int = 10,
) -> vs.VideoNode:
    """
    Process a variable format/resolution clip with a given function.

    Args:
        clip: Clip to process.
        func: Function that takes and returns a single VideoNode.
        out_dim: Output dimension.
        out_fmt: Output format.
        cache_size: The maximum number of VideoNode objects allowed in the cache. Defaults to 10.

    Returns:
        Processed variable clip.
    """

    def process(self: ProcessVariableClip[T], clip: vs.VideoNode) -> vs.VideoNode:
        return func(clip)

    ns = cls.__dict__.copy()
    ns[process.__name__] = process

    return type(cls.__name__, cls.__bases__, ns)(clip, out_dim, out_fmt, cache_size).eval_clip()

get_clip

get_clip(key: T) -> VideoNode
Source code in vstools/functions/clips.py
def get_clip(self, key: T) -> vs.VideoNode:
    return self.process(self.normalize(self.clip, key))

get_key

get_key(frame: VideoNode | VideoFrame) -> tuple[int, int]
Source code in vstools/functions/clips.py
def get_key(self, frame: vs.VideoNode | vs.VideoFrame) -> tuple[int, int]:
    return (frame.width, frame.height)

normalize

normalize(clip: VideoNode, cast_to: tuple[int, int]) -> VideoNode
Source code in vstools/functions/clips.py
def normalize(self, clip: vs.VideoNode, cast_to: tuple[int, int]) -> vs.VideoNode:
    normalized = vs.core.resize.Point(vs.core.std.RemoveFrameProps(clip), *cast_to)
    return vs.core.std.CopyFrameProps(normalized, clip)

process

process(clip: VideoNode) -> VideoNode

Process the given clip.

Parameters:

  • clip

    (VideoNode) –

    Clip to process.

Returns:

  • VideoNode

    Processed clip.

Source code in vstools/functions/clips.py
def process(self, clip: vs.VideoNode) -> vs.VideoNode:
    """
    Process the given clip.

    Args:
        clip: Clip to process.

    Returns:
        Processed clip.
    """
    return clip

ProcessVariableResFormatClip

ProcessVariableResFormatClip(
    clip: VideoNode,
    out_dim: tuple[int, int] | Literal[False] | None = None,
    out_fmt: int | VideoFormat | Literal[False] | None = None,
    cache_size: int = 10,
)

Bases: ProcessVariableClip[tuple[int, int, VideoFormat]]

A helper class for processing variable format and resolution clips.

Parameters:

  • clip

    (VideoNode) –

    Clip to process.

  • out_dim

    (tuple[int, int] | Literal[False] | None, default: None ) –

    Output dimension.

  • out_fmt

    (int | VideoFormat | Literal[False] | None, default: None ) –

    Output format.

  • cache_size

    (int, default: 10 ) –

    The maximum number of items allowed in the cache. Defaults to 10.

Source code in vstools/functions/clips.py
def __init__(
    self,
    clip: vs.VideoNode,
    out_dim: tuple[int, int] | Literal[False] | None = None,
    out_fmt: int | vs.VideoFormat | Literal[False] | None = None,
    cache_size: int = 10,
) -> None:
    """
    Args:
        clip: Clip to process.
        out_dim: Output dimension.
        out_fmt: Output format.
        cache_size: The maximum number of items allowed in the cache. Defaults to 10.
    """
    bk_args = KwargsT(length=clip.num_frames, keep=True, varformat=None)

    if out_dim is None:
        out_dim = (clip.width, clip.height)

    if out_fmt is None:
        out_fmt = clip.format or False

    if out_dim is not False and 0 in out_dim:
        out_dim = False

    if out_dim is False:
        bk_args.update(width=8, height=8, varsize=True)
    else:
        bk_args.update(width=out_dim[0], height=out_dim[1])

    if out_fmt is False:
        bk_args.update(format=vs.GRAY8, varformat=True)
    else:
        bk_args.update(format=out_fmt if isinstance(out_fmt, int) else out_fmt.id)

    super().__init__(cache_size)

    self.clip = clip
    self.out = vs.core.std.BlankClip(clip, **bk_args)

cache_size instance-attribute

cache_size = cache_size

clip instance-attribute

clip = clip

out instance-attribute

out = BlankClip(clip, **bk_args)

eval_clip

eval_clip() -> VideoNode
Source code in vstools/functions/clips.py
def eval_clip(self) -> vs.VideoNode:
    if self.out.format and (0 not in (self.out.width, self.out.height)):
        try:
            return self.get_clip(self.get_key(self.clip))
        except Exception:
            ...

    return vs.core.std.FrameEval(self.out, lambda n, f: self[self.get_key(f)], self.clip)

from_clip classmethod

from_clip(clip: VideoNode) -> VideoNode

Process a variable format/resolution clip.

Parameters:

  • clip

    (VideoNode) –

    Clip to process.

Returns:

  • VideoNode

    Processed clip.

Source code in vstools/functions/clips.py
@classmethod
def from_clip(cls, clip: vs.VideoNode) -> vs.VideoNode:
    """
    Process a variable format/resolution clip.

    Args:
        clip: Clip to process.

    Returns:
        Processed clip.
    """
    return cls(clip).eval_clip()

from_func classmethod

from_func(
    clip: VideoNode,
    func: Callable[[VideoNode], VideoNode],
    out_dim: tuple[int, int] | Literal[False] | None = None,
    out_fmt: int | VideoFormat | Literal[False] | None = None,
    cache_size: int = 10,
) -> VideoNode

Process a variable format/resolution clip with a given function.

Parameters:

  • clip

    (VideoNode) –

    Clip to process.

  • func

    (Callable[[VideoNode], VideoNode]) –

    Function that takes and returns a single VideoNode.

  • out_dim

    (tuple[int, int] | Literal[False] | None, default: None ) –

    Output dimension.

  • out_fmt

    (int | VideoFormat | Literal[False] | None, default: None ) –

    Output format.

  • cache_size

    (int, default: 10 ) –

    The maximum number of VideoNode objects allowed in the cache. Defaults to 10.

Returns:

  • VideoNode

    Processed variable clip.

Source code in vstools/functions/clips.py
@classmethod
def from_func(
    cls,
    clip: vs.VideoNode,
    func: Callable[[vs.VideoNode], vs.VideoNode],
    out_dim: tuple[int, int] | Literal[False] | None = None,
    out_fmt: int | vs.VideoFormat | Literal[False] | None = None,
    cache_size: int = 10,
) -> vs.VideoNode:
    """
    Process a variable format/resolution clip with a given function.

    Args:
        clip: Clip to process.
        func: Function that takes and returns a single VideoNode.
        out_dim: Output dimension.
        out_fmt: Output format.
        cache_size: The maximum number of VideoNode objects allowed in the cache. Defaults to 10.

    Returns:
        Processed variable clip.
    """

    def process(self: ProcessVariableClip[T], clip: vs.VideoNode) -> vs.VideoNode:
        return func(clip)

    ns = cls.__dict__.copy()
    ns[process.__name__] = process

    return type(cls.__name__, cls.__bases__, ns)(clip, out_dim, out_fmt, cache_size).eval_clip()

get_clip

get_clip(key: T) -> VideoNode
Source code in vstools/functions/clips.py
def get_clip(self, key: T) -> vs.VideoNode:
    return self.process(self.normalize(self.clip, key))

get_key

get_key(frame: VideoNode | VideoFrame) -> tuple[int, int, VideoFormat]
Source code in vstools/functions/clips.py
def get_key(self, frame: vs.VideoNode | vs.VideoFrame) -> tuple[int, int, vs.VideoFormat]:
    assert frame.format
    return (frame.width, frame.height, frame.format)

normalize

normalize(clip: VideoNode, cast_to: tuple[int, int, VideoFormat]) -> VideoNode
Source code in vstools/functions/clips.py
def normalize(self, clip: vs.VideoNode, cast_to: tuple[int, int, vs.VideoFormat]) -> vs.VideoNode:
    w, h, fmt = cast_to

    normalized = vs.core.resize.Point(vs.core.std.RemoveFrameProps(clip), w, h, fmt.id)

    return vs.core.std.CopyFrameProps(normalized, clip)

process

process(clip: VideoNode) -> VideoNode

Process the given clip.

Parameters:

  • clip

    (VideoNode) –

    Clip to process.

Returns:

  • VideoNode

    Processed clip.

Source code in vstools/functions/clips.py
def process(self, clip: vs.VideoNode) -> vs.VideoNode:
    """
    Process the given clip.

    Args:
        clip: Clip to process.

    Returns:
        Processed clip.
    """
    return clip

finalize_clip

finalize_clip(
    clip: VideoNode,
    bits: VideoFormatLike | HoldsVideoFormat | int | None = 10,
    clamp_tv_range: bool = False,
    dither_type: DitherType = AUTO,
    *,
    func: FuncExcept | None = None
) -> VideoNode

Finalize a clip for output to the encoder.

Parameters:

  • clip

    (VideoNode) –

    Clip to output.

  • bits

    (VideoFormatLike | HoldsVideoFormat | int | None, default: 10 ) –

    Bitdepth to output to.

  • clamp_tv_range

    (bool, default: False ) –

    Whether to clamp to TV range.

  • dither_type

    (DitherType, default: AUTO ) –

    Dithering used for the bitdepth conversion.

  • func

    (FuncExcept | None, default: None ) –

    Function returned for custom error handling. This should only be set by VS package developers.

Returns:

  • VideoNode

    Dithered down and optionally clamped clip.

Source code in vstools/functions/clips.py
def finalize_clip(
    clip: vs.VideoNode,
    bits: VideoFormatLike | HoldsVideoFormat | int | None = 10,
    clamp_tv_range: bool = False,
    dither_type: DitherType = DitherType.AUTO,
    *,
    func: FuncExcept | None = None,
) -> vs.VideoNode:
    """
    Finalize a clip for output to the encoder.

    Args:
        clip: Clip to output.
        bits: Bitdepth to output to.
        clamp_tv_range: Whether to clamp to TV range.
        dither_type: Dithering used for the bitdepth conversion.
        func: Function returned for custom error handling. This should only be set by VS package developers.

    Returns:
        Dithered down and optionally clamped clip.
    """
    from ..functions import limiter

    if bits:
        clip = depth(clip, bits, dither_type=dither_type)

    if clamp_tv_range:
        clip = limiter(clip, tv_range=clamp_tv_range)

    return clip
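Numerically, clamping to TV range restricts samples to the studio-swing limits at the output bit depth. A small sketch of those limits, assuming standard studio-swing scaling (the actual clamping is delegated to `limiter`; the helper name is illustrative):

```python
def tv_range_limits(bits: int, chroma: bool = False) -> tuple[int, int]:
    """Legal (studio-swing) integer sample range at a given bit depth.

    At 8 bits, luma spans 16-235 and chroma 16-240; deeper formats scale
    these limits by 2 ** (bits - 8). Clamping to TV range amounts to
    restricting samples to this interval.
    """
    scale = 1 << (bits - 8)
    low = 16 * scale
    high = (240 if chroma else 235) * scale
    return low, high
```

At the default 10-bit output this gives 64-940 for luma and 64-960 for chroma.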

finalize_output

finalize_output(
    function: Callable[P, VideoNode],
    /,
    *,
    bits: int | None = 10,
    clamp_tv_range: bool = False,
    dither_type: DitherType = AUTO,
    func: FuncExcept | None = None,
) -> Callable[P, VideoNode]
finalize_output(
    *,
    bits: int | None = 10,
    clamp_tv_range: bool = False,
    dither_type: DitherType = AUTO,
    func: FuncExcept | None = None
) -> Callable[[Callable[P, VideoNode]], Callable[P, VideoNode]]
finalize_output(
    function: Callable[P, VideoNode] | None = None,
    /,
    *,
    bits: int | None = 10,
    clamp_tv_range: bool = False,
    dither_type: DitherType = AUTO,
    func: FuncExcept | None = None,
) -> (
    Callable[P, VideoNode]
    | Callable[[Callable[P, VideoNode]], Callable[P, VideoNode]]
)

Decorator implementation of finalize_clip.

Source code in vstools/functions/clips.py
def finalize_output[**P](
    function: Callable[P, vs.VideoNode] | None = None,
    /,
    *,
    bits: int | None = 10,
    clamp_tv_range: bool = False,
    dither_type: DitherType = DitherType.AUTO,
    func: FuncExcept | None = None,
) -> Callable[P, vs.VideoNode] | Callable[[Callable[P, vs.VideoNode]], Callable[P, vs.VideoNode]]:
    """
    Decorator implementation of [finalize_clip][vstools.finalize_clip].
    """

    if function is None:
        return partial(finalize_output, bits=bits, clamp_tv_range=clamp_tv_range, dither_type=dither_type, func=func)

    @wraps(function)
    def _wrapper(*args: P.args, **kwargs: P.kwargs) -> vs.VideoNode:
        return finalize_clip(function(*args, **kwargs), bits, clamp_tv_range, dither_type, func=func)

    return _wrapper

initialize_clip

initialize_clip(
    clip: VideoNode,
    bits: int | None = None,
    matrix: MatrixLike | None = None,
    transfer: TransferLike | None = None,
    primaries: PrimariesLike | None = None,
    chroma_location: ChromaLocationLike | None = None,
    color_range: ColorRangeLike | None = None,
    field_based: FieldBasedLike | None = None,
    strict: bool = False,
    dither_type: DitherType = AUTO,
    *,
    func: FuncExcept | None = None
) -> VideoNode

Initialize a clip with default props.

It is HIGHLY recommended to always use this function at the beginning of your scripts!

Parameters:

  • clip

    (VideoNode) –

    Clip to initialize.

  • bits

    (int | None, default: None ) –

    Bits to dither to.

    • If 0, no dithering is applied.
    • If None, dither to 16 bits if the current bit depth is lower; otherwise leave untouched.
    • If positive integer, dither to that bitdepth.
  • matrix

    (MatrixLike | None, default: None ) –

    Matrix property to set. If None, tries to get the Matrix from existing props. If no props are set or Matrix=2, guess from the video resolution.

  • transfer

    (TransferLike | None, default: None ) –

    Transfer property to set. If None, tries to get the Transfer from existing props. If no props are set or Transfer=2, guess from the video resolution.

  • primaries

    (PrimariesLike | None, default: None ) –

    Primaries property to set. If None, tries to get the Primaries from existing props. If no props are set or Primaries=2, guess from the video resolution.

  • chroma_location

    (ChromaLocationLike | None, default: None ) –

    ChromaLocation prop to set. If None, tries to get the ChromaLocation from existing props. If no props are set, guess from the video resolution.

  • color_range

    (ColorRangeLike | None, default: None ) –

    ColorRange prop to set. If None, tries to get the ColorRange from existing props. If no props are set, assume Limited Range.

  • field_based

    (FieldBasedLike | None, default: None ) –

    FieldBased prop to set. If None, tries to get the FieldBased from existing props. If no props are set, assume PROGRESSIVE.

  • strict

    (bool, default: False ) –

    Whether to be strict about existing properties. If True, throws an exception if certain frame properties are not found.

  • dither_type

    (DitherType, default: AUTO ) –

    Dithering used for the bitdepth conversion.

  • func

    (FuncExcept | None, default: None ) –

    Function returned for custom error handling. This should only be set by VS package developers.

Returns:

  • VideoNode

    Clip with relevant frame properties set, and optionally dithered up to 16 bits by default.

Source code in vstools/functions/clips.py
def initialize_clip(
    clip: vs.VideoNode,
    bits: int | None = None,
    matrix: MatrixLike | None = None,
    transfer: TransferLike | None = None,
    primaries: PrimariesLike | None = None,
    chroma_location: ChromaLocationLike | None = None,
    color_range: ColorRangeLike | None = None,
    field_based: FieldBasedLike | None = None,
    strict: bool = False,
    dither_type: DitherType = DitherType.AUTO,
    *,
    func: FuncExcept | None = None,
) -> vs.VideoNode:
    """
    Initialize a clip with default props.

    It is HIGHLY recommended to always use this function at the beginning of your scripts!

    Args:
        clip: Clip to initialize.
        bits: Bits to dither to.

               - If 0, no dithering is applied.
               - If None, dither to 16 bits if the current bit depth is lower, else leave untouched.
               - If positive integer, dither to that bitdepth.

        matrix: Matrix property to set. If None, tries to get the Matrix from existing props. If no props are set or
            Matrix=2, guess from the video resolution.
        transfer: Transfer property to set. If None, tries to get the Transfer from existing props. If no props are set
            or Transfer=2, guess from the video resolution.
        primaries: Primaries property to set. If None, tries to get the Primaries from existing props. If no props are
            set or Primaries=2, guess from the video resolution.
        chroma_location: ChromaLocation prop to set. If None, tries to get the ChromaLocation from existing props. If no
            props are set, guess from the video resolution.
        color_range: ColorRange prop to set. If None, tries to get the ColorRange from existing props. If no props are
            set, assume Limited Range.
        field_based: FieldBased prop to set. If None, tries to get the FieldBased from existing props. If no props are
            set, assume PROGRESSIVE.
        strict: Whether to be strict about existing properties. If True, throws an exception if certain frame properties
            are not found.
        dither_type: Dithering used for the bitdepth conversion.
        func: Function returned for custom error handling. This should only be set by VS package developers.

    Returns:
        Clip with relevant frame properties set, and optionally dithered up to 16 bits by default.
    """
    func = func or initialize_clip

    values: list[tuple[type[PropEnum], Any]] = [
        (Matrix, matrix),
        (Transfer, transfer),
        (Primaries, primaries),
        (ChromaLocation, chroma_location),
        (ColorRange, color_range),
        (FieldBased, field_based),
    ]

    to_ensure_presence = list[type[PropEnum] | PropEnum]()

    for prop_t, prop_v in values:
        if strict:
            to_ensure_presence.append(prop_t)
        else:
            p = prop_t.from_param(prop_v, func)

            if p is None:
                to_ensure_presence.append(prop_t.from_video(clip, False, func))
            else:
                to_ensure_presence.append(p)

    clip = PropEnum.ensure_presences(clip, to_ensure_presence, func)

    if bits is None:
        bits = max(get_depth(clip), 16)
    elif bits <= 0:
        return clip

    return depth(clip, bits, dither_type=dither_type)
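The bit-depth handling at the end of the function can be isolated: `None` means "at least 16 bits", a non-positive value means "leave the clip untouched", and a positive value is taken literally. A plain-Python sketch of that decision (a hypothetical helper, no VapourSynth involved):

```python
def resolve_target_depth(current_bits, bits=None):
    # Mirrors initialize_clip's depth logic:
    #   bits=None -> dither up to 16 if lower, otherwise keep as-is
    #   bits<=0   -> no conversion at all
    #   bits>0    -> dither to exactly that depth
    if bits is None:
        return max(current_bits, 16)
    if bits <= 0:
        return current_bits
    return bits
```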

initialize_input

initialize_input(
    function: Callable[P, VideoNode],
    /,
    *,
    bits: int | None = 16,
    matrix: MatrixLike | None = None,
    transfer: TransferLike | None = None,
    primaries: PrimariesLike | None = None,
    chroma_location: ChromaLocationLike | None = None,
    color_range: ColorRangeLike | None = None,
    field_based: FieldBasedLike | None = None,
    strict: bool = False,
    dither_type: DitherType = AUTO,
    func: FuncExcept | None = None,
) -> Callable[P, VideoNode]
initialize_input(
    *,
    bits: int | None = 16,
    matrix: MatrixLike | None = None,
    transfer: TransferLike | None = None,
    primaries: PrimariesLike | None = None,
    chroma_location: ChromaLocationLike | None = None,
    color_range: ColorRangeLike | None = None,
    field_based: FieldBasedLike | None = None,
    dither_type: DitherType = AUTO,
    func: FuncExcept | None = None
) -> Callable[[Callable[P, VideoNode]], Callable[P, VideoNode]]
initialize_input(
    function: Callable[P, VideoNode] | None = None,
    /,
    *,
    bits: int | None = 16,
    matrix: MatrixLike | None = None,
    transfer: TransferLike | None = None,
    primaries: PrimariesLike | None = None,
    chroma_location: ChromaLocationLike | None = None,
    color_range: ColorRangeLike | None = None,
    field_based: FieldBasedLike | None = None,
    strict: bool = False,
    dither_type: DitherType = AUTO,
    func: FuncExcept | None = None,
) -> (
    Callable[P, VideoNode]
    | Callable[[Callable[P, VideoNode]], Callable[P, VideoNode]]
)

Decorator implementation of initialize_clip.

Source code in vstools/functions/clips.py
def initialize_input[**P](
    function: Callable[P, vs.VideoNode] | None = None,
    /,
    *,
    bits: int | None = 16,
    matrix: MatrixLike | None = None,
    transfer: TransferLike | None = None,
    primaries: PrimariesLike | None = None,
    chroma_location: ChromaLocationLike | None = None,
    color_range: ColorRangeLike | None = None,
    field_based: FieldBasedLike | None = None,
    strict: bool = False,
    dither_type: DitherType = DitherType.AUTO,
    func: FuncExcept | None = None,
) -> Callable[P, vs.VideoNode] | Callable[[Callable[P, vs.VideoNode]], Callable[P, vs.VideoNode]]:
    """
    Decorator implementation of [initialize_clip][vstools.initialize_clip]
    """

    if function is None:
        return partial(
            initialize_input,
            bits=bits,
            matrix=matrix,
            transfer=transfer,
            primaries=primaries,
            chroma_location=chroma_location,
            color_range=color_range,
            field_based=field_based,
            strict=strict,
            dither_type=dither_type,
            func=func,
        )

    init_args = dict[str, Any](
        bits=bits,
        matrix=matrix,
        transfer=transfer,
        primaries=primaries,
        chroma_location=chroma_location,
        color_range=color_range,
        field_based=field_based,
        strict=strict,
        dither_type=dither_type,
        func=func,
    )

    @wraps(function)
    def _wrapper(*args: P.args, **kwargs: P.kwargs) -> vs.VideoNode:
        args_l = list(args)

        for i, obj in enumerate(args_l):
            if isinstance(obj, vs.VideoNode):
                args_l[i] = initialize_clip(obj, **init_args)
                return function(*args_l, **kwargs)  # type: ignore

        kwargs2 = kwargs.copy()

        for name, obj in kwargs2.items():
            if isinstance(obj, vs.VideoNode):
                kwargs2[name] = initialize_clip(obj, **init_args)
                return function(*args, **kwargs2)  # type: ignore

        for name, param in inspect.signature(function).parameters.items():
            if param.default is not inspect.Parameter.empty and isinstance(param.default, vs.VideoNode):
                return function(*args, **kwargs2 | {name: initialize_clip(param.default, **init_args)})  # type: ignore

        raise CustomValueError(
            "No VideoNode found in positional, keyword, nor default arguments!", func or initialize_input
        )

    return _wrapper
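The wrapper locates the clip to initialize by checking positional arguments first, then keyword arguments, then parameter defaults, stopping at the first match. That search order can be sketched without VapourSynth by substituting a marker type for `VideoNode` (`FakeNode` and `find_node` below are hypothetical names for illustration):

```python
import inspect

class FakeNode:
    # Stand-in for vs.VideoNode, just to exercise the search order.
    pass

def find_node(function, args, kwargs):
    # Same order as initialize_input's wrapper: positionals, then
    # keywords, then signature defaults; the first hit wins.
    for i, obj in enumerate(args):
        if isinstance(obj, FakeNode):
            return "positional", i
    for name, obj in kwargs.items():
        if isinstance(obj, FakeNode):
            return "keyword", name
    for name, param in inspect.signature(function).parameters.items():
        if param.default is not inspect.Parameter.empty and isinstance(param.default, FakeNode):
            return "default", name
    raise ValueError("No node found in positional, keyword, nor default arguments!")

default_node = FakeNode()

def filter_func(strength, clip=default_node):
    return clip
```

With `filter_func`, passing a node positionally or by keyword is found first; calling it with only `strength` falls through to the `clip` default.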

sc_detect

sc_detect(clip: VideoNode, threshold: float = 0.1) -> VideoNode

Mark scene changes by setting the _SceneChangePrev and _SceneChangeNext frame props, comparing the PlaneStats difference between neighbouring frames against threshold.

Source code in vstools/functions/clips.py
def sc_detect(clip: vs.VideoNode, threshold: float = 0.1) -> vs.VideoNode:
    """
    Mark scene changes by comparing the PlaneStats difference between neighbouring frames against a threshold.

    Sets the ``_SceneChangePrev`` and ``_SceneChangeNext`` frame props to 1 when the difference
    with the previous or next frame exceeds ``threshold``, and to 0 otherwise.
    """
    stats = vs.core.std.PlaneStats(shift_clip(clip, -1), clip)

    return vs.core.akarin.PropExpr(
        [clip, stats, stats[1:]],
        lambda: {
            "_SceneChangePrev": f"y.PlaneStatsDiff {threshold} > 1 0 ?",
            "_SceneChangeNext": f"z.PlaneStatsDiff {threshold} > 1 0 ?",
        },
    )
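The expression logic reduces to a per-frame threshold test: `_SceneChangePrev` looks at the difference with the previous frame, `_SceneChangeNext` at the difference with the next one. A list-based model of that tagging, assuming a precomputed sequence where `diffs[i]` is the `PlaneStatsDiff` between frames `i-1` and `i` (the last-frame clamping here is an assumption of this sketch, matching how the shortened `stats[1:]` clip repeats its final frame):

```python
def scene_change_flags(diffs, threshold=0.1):
    # diffs[i] = PlaneStatsDiff between frame i-1 and frame i
    # (diffs[0] is 0.0: the first frame is compared with itself).
    n = len(diffs)
    prev_flags = [1 if d > threshold else 0 for d in diffs]
    # _SceneChangeNext for frame i looks one diff ahead; the last
    # frame reuses the final diff.
    next_flags = [prev_flags[min(i + 1, n - 1)] for i in range(n)]
    return prev_flags, next_flags
```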

shift_clip

shift_clip(clip: VideoNode, offset: int) -> VideoNode

Shift a clip forwards or backwards by N frames.

This is useful for cases where you must compare every frame of a clip with the frame that comes before or after it, such as when performing temporal operations.

Both positive and negative integers are allowed. Positive values will shift a clip forward, negative will shift a clip backward.

Parameters:

  • clip

    (VideoNode) –

    Input clip.

  • offset

    (int) –

    Number of frames to offset the clip with. Negative values are allowed. Positive values will shift a clip forward, negative will shift a clip backward.

Returns:

  • VideoNode

    Clip that has been shifted forwards or backwards by N frames.

Source code in vstools/functions/clips.py
def shift_clip(clip: vs.VideoNode, offset: int) -> vs.VideoNode:
    """
    Shift a clip forwards or backwards by *N* frames.

    This is useful for cases where you must compare every frame of a clip
    with the frame that comes before or after the current frame,
    such as when performing temporal operations.

    Both positive and negative integers are allowed.
    Positive values will shift a clip forward, negative will shift a clip backward.

    Args:
        clip: Input clip.
        offset: Number of frames to offset the clip with. Negative values are allowed. Positive values will shift a clip
            forward, negative will shift a clip backward.

    Returns:
        Clip that has been shifted forwards or backwards by *N* frames.
    """

    if offset > clip.num_frames - 1:
        raise FramesLengthError(shift_clip, "offset")

    if offset < 0:
        return clip[0] * abs(offset) + clip[:offset]

    if offset > 0:
        return clip[offset:] + clip[-1] * offset

    return clip
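The slicing arithmetic is easiest to see on a plain list of frame indices: shifting backward pads with copies of the first frame, shifting forward pads with copies of the last, and the length is always preserved. A VapourSynth-free sketch of the same logic (`shift_frames` is a hypothetical name):

```python
def shift_frames(frames, offset):
    # Mirror of shift_clip's slicing on a plain list.
    if offset > len(frames) - 1:
        raise ValueError("offset out of range")
    if offset < 0:
        # Pad with copies of the first frame, drop the tail.
        return [frames[0]] * abs(offset) + frames[:offset]
    if offset > 0:
        # Drop the head, pad with copies of the last frame.
        return frames[offset:] + [frames[-1]] * offset
    return frames
```

For example, shifting `[0, 1, 2, 3]` forward by 1 yields `[1, 2, 3, 3]`, and backward by 2 yields `[0, 0, 0, 1]`.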

shift_clip_multi

shift_clip_multi(
    clip: VideoNode, offsets: StrictRange = (-1, 1)
) -> list[VideoNode]

Shift a clip forwards or backwards multiple times by a varying amount of frames.

This will return a clip for every shifting operation performed. This is a convenience function that makes handling multiple shifts easier.

Example:

>>> shift_clip_multi(clip, (-3, 3))
    [VideoNode, VideoNode, VideoNode, VideoNode, VideoNode, VideoNode, VideoNode]
        -3         -2         -1          0         +1         +2         +3

Parameters:

  • clip

    (VideoNode) –

    Input clip.

  • offsets

    (StrictRange, default: (-1, 1) ) –

    Tuple of offsets representing an inclusive range. A clip will be returned for every offset. Default: (-1, 1).

Returns:

  • list[VideoNode]

    A list of clips, one per offset in the inclusive range.

Source code in vstools/functions/clips.py
def shift_clip_multi(clip: vs.VideoNode, offsets: StrictRange = (-1, 1)) -> list[vs.VideoNode]:
    """
    Shift a clip forwards or backwards multiple times by a varying amount of frames.

    This will return a clip for every shifting operation performed.
    This is a convenience function that makes handling multiple shifts easier.

    Example:

        >>> shift_clip_multi(clip, (-3, 3))
            [VideoNode, VideoNode, VideoNode, VideoNode, VideoNode, VideoNode, VideoNode]
                -3         -2         -1          0         +1         +2         +3

    Args:
        clip: Input clip.
        offsets: Tuple of offsets representing an inclusive range.
            A clip will be returned for every offset. Default: (-1, 1).

    Returns:
        A list of clips, one per offset in the inclusive range.
    """
    return [shift_clip(clip, x) for x in range(offsets[0], offsets[1] + 1)]