API Reference
- class tems.ConvPass(dims, in_channels=1, out_channels=1, kernel_sizes=(3, 3), activation=<class 'torch.nn.modules.activation.ReLU'>, padding='valid', residual=False)
The ConvPass class wraps a series of convolutional layers with a limited set of arguments. It is designed to be used with the UNet class and satisfies the ContextAwareModule interface.
- Parameters:
  - dims (int) – the number of dimensions (1, 2, or 3)
  - in_channels (int) – the number of input channels
  - out_channels (int) – the number of output channels
  - kernel_sizes (Sequence[Union[Sequence[int], int]]) – a sequence of kernel sizes for each convolutional layer
  - activation (type[Module]) – the activation function to use after each convolutional layer
  - padding (str) – the padding mode to use for the convolutional layers
  - residual (bool) – whether or not to add residual connections around the conv pass
- property context: Tensor
The context of the ConvPass module. Zero if the padding is “same”; the sum of kernel_size - 1 over all kernel sizes if the padding is “valid”.
- property dims: int
The number of dimensions of the ConvPass module.
- property equivariant_step: Tensor
Always 1 for ConvPass since we don’t yet support strided convolutions.
- forward(x)
Apply a series of convolutions to the input tensor.
- Parameters:
  - x (Tensor) – the input tensor
- Return type:
  Tensor
- property min_input_shape: Tensor
Simply 1 + self.context.
- property min_output_shape: Tensor
Always 1.
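The context and minimum-shape arithmetic above can be sketched in plain Python. This is an illustration of the formulas stated in this entry, not the tems implementation; `conv_pass_context` is a hypothetical helper:

```python
def conv_pass_context(kernel_sizes, padding="valid"):
    # "same" padding consumes no context; "valid" padding loses
    # kernel_size - 1 pixels per convolution.
    if padding == "same":
        return 0
    return sum(k - 1 for k in kernel_sizes)

ctx = conv_pass_context([3, 3])  # two valid 3x3 convolutions
min_input = 1 + ctx              # min_input_shape = 1 + context
print(ctx, min_input)            # 4 5
```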
- class tems.Downsample(dims, downsample_factor)
The Downsample class wraps a MaxPool layer with a limited set of arguments. It is designed to be used with the UNet class and satisfies the ContextAwareModule interface.
- Parameters:
  - dims (int) – the number of dimensions (1, 2, or 3)
  - downsample_factor (Union[Sequence[int], int]) – the downsample factor for each dimension
- property context: Tensor
The context is always 0 for downsampling.
- property dims: int
The number of dimensions (1, 2, or 3).
- property equivariant_step: Tensor
The equivariant step is the downsample factor for each dimension.
- forward(x)
Apply the downsample operation to the input tensor.
- Parameters:
  - x (Tensor) – the input tensor
- Return type:
  Tensor
- property min_input_shape: Tensor
min_input_shape is equal to the downsample factor for each dimension.
- property min_output_shape: Tensor
min_output_shape is just 1 for each dimension.
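The shape bookkeeping for Downsample can be checked with simple integer arithmetic. A sketch (not tems code; `downsample_shapes` is a hypothetical helper mirroring a MaxPool with kernel = stride = factor):

```python
def downsample_shapes(input_shape, factor):
    # MaxPool with kernel = stride = factor divides each spatial
    # dimension by the downsample factor.
    return [s // f for s, f in zip(input_shape, factor)]

factor = [2, 2]
print(downsample_shapes([8, 8], factor))  # [4, 4]
# min_input_shape equals the factor; min_output_shape is 1 per dimension:
print(downsample_shapes(factor, factor))  # [1, 1]
```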
- class tems.UModule(in_conv_pass, downsample, lower_block, upsample, out_conv_pass, _equivariance_context=None)
The UModule class is an abstraction of a single layer of a UNet. It involves an input conv pass, a downsample, a lower block, an upsample, and an output conv pass. It is designed to be used with the UNet class and satisfies the ContextAwareModule interface.
- Parameters:
  - in_conv_pass (ConvPass) – the input conv pass
  - downsample (Downsample) – the downsample layer
  - lower_block (ContextAwareModule) – the lower block
  - upsample (Upsample) – the upsample layer
  - out_conv_pass (ConvPass) – the output conv pass
  - residuals – whether or not to add residual layers around conv passes
- property context: Tensor
in_conv_pass.context + downsample.equivariant_step * lower_block.context + out_conv_pass.context, plus the optional equivariance_context.
The equivariance context is only added during evaluation and is used to make sure the network is translation equivariant for easy blockwise processing without tiling artifacts.
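The formula above can be checked numerically. A sketch with made-up context values (`umodule_context` is a hypothetical helper, not part of tems):

```python
def umodule_context(in_ctx, step, lower_ctx, out_ctx, equivariance_ctx=0):
    # context = in_conv_pass.context
    #         + downsample.equivariant_step * lower_block.context
    #         + out_conv_pass.context
    #         + optional equivariance context (evaluation only)
    return in_ctx + step * lower_ctx + out_ctx + equivariance_ctx

# e.g. two valid 3x3 convs per conv pass (context 4), downsample factor 2,
# and a lower block that is itself a conv pass with context 4:
print(umodule_context(4, 2, 4, 4))  # 16
```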
- crop(x, shape)
Center-crop x to match spatial dimensions given by shape.
- Parameters:
  - x (Tensor) – the input tensor
  - shape (Tensor) – the target shape
- Return type:
  Tensor
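A center crop per spatial dimension can be sketched in one dimension (a hedged illustration of the cropping idea, not the tems implementation; `center_crop_1d` is a hypothetical helper):

```python
def center_crop_1d(x, target_len):
    # Drop an equal amount from both ends (the extra element goes to
    # the right when the difference is odd), keeping the center.
    offset = (len(x) - target_len) // 2
    return x[offset:offset + target_len]

print(center_crop_1d([0, 1, 2, 3, 4, 5], 4))  # [1, 2, 3, 4]
```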
- property dims: int
The number of dimensions (1, 2, or 3).
- property equivariant_step: Tensor
The equivariant step is the product of the downsample factors.
- forward(x)
Apply the UModule to the input tensor.
- Parameters:
  - x (Tensor) – the input tensor
- Return type:
  Tensor
- property min_input_shape: Tensor
The minimum input shape that this module can accept.
- property min_output_shape: Tensor
The minimum output shape that this module can produce: min_input_shape - context.
- set_equivariance_context(equivariance_context)
Set the equivariance context to be used during evaluation.
- Parameters:
  - equivariance_context (Tensor) – the equivariance context per dimension
- class tems.UNet(dims, bottleneck, levels, residuals=False)
A UNet implementation with helper functions to calculate the minimum input and output shapes of the network, along with the context and appropriate step size for translation equivariance.
This UNet is also fully scriptable with torch.jit.script making it easy to share and deploy.
- Parameters:
  - dims (int) – the number of dimensions
  - bottleneck (ContextAwareModule) – the bottleneck module
  - levels (Sequence[tuple[ConvPass, Downsample, Upsample, ConvPass]]) – a sequence of tuples containing the left conv pass, downsample, upsample, and right conv pass for each level
  - residuals (bool) – whether or not to add residual connections around each conv block
- property context: Tensor
The context of the UNet.
- property dims: int
The number of dimensions (1, 2, or 3).
- property equivariant_step: Tensor
The equivariant step is the product of all downsampling factors.
- forward(x)
Apply the UNet to the input tensor.
- Parameters:
  - x (Tensor) – the input tensor
- Return type:
  Tensor
- classmethod funlib_api(dims, in_channels, num_fmaps, fmap_inc_factor, downsample_factors, kernel_size_down=None, kernel_size_up=None, activation='ReLU', num_fmaps_out=None, num_heads=1, constant_upsample=True, padding='valid', residuals=False)
A helper method that matches the API of the funlib UNet as closely as possible. It is useful for creating a UNet with a more compact API, whereas the default constructor requires you to define every layer explicitly.
- Parameters:
  - dims (int) – the number of dimensions
  - in_channels (int) – the number of input channels
  - num_fmaps (int) – the number of feature maps
  - fmap_inc_factor (int) – the factor by which to increase the number of feature maps
  - downsample_factors (Sequence[Union[Sequence[int], int]]) – the downsample factors for each level
  - kernel_size_down (Optional[Sequence[Sequence[Union[Sequence[int], int]]]]) – the kernel size for the downsample convolutions
  - kernel_size_up (Optional[Sequence[Sequence[Union[Sequence[int], int]]]]) – the kernel size for the upsample convolutions
  - activation (str) – the activation function to use
  - num_fmaps_out (int | None) – the number of output feature maps
  - num_heads (int) – the number of heads to use
  - constant_upsample (bool) – whether to use constant upsampling
  - padding (str) – the padding mode to use. Supported values are “valid” and “same”.
  - residuals (bool) – whether to use residual connections
- property min_input_shape: Tensor
The minimum input shape that this module can accept.
- property min_output_shape: Tensor
The minimum output shape that this module can produce.
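The context of a whole valid-padding UNet follows by applying the UModule rule recursively level by level. A sketch with a hypothetical helper (`unet_context` is not part of tems, and it assumes every conv pass has the same context):

```python
from functools import reduce

def unet_context(conv_ctx, factors):
    # Recursively apply the UModule rule:
    #   context(level) = in_ctx + factor * context(level below) + out_ctx
    # where the bottom level is just a conv pass.
    if not factors:
        return conv_ctx
    return conv_ctx + factors[0] * unet_context(conv_ctx, factors[1:]) + conv_ctx

# two valid 3x3 convs per conv pass (context 4), downsample factors 2 and 2:
print(unet_context(4, [2, 2]))  # 4 + 2 * (4 + 2 * 4 + 4) + 4 = 40
# the equivariant step is the product of all downsample factors:
print(reduce(lambda a, b: a * b, [2, 2]))  # 4
```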
- class tems.Upsample(dims, scale_factor, mode='nearest')
The Upsample class wraps an Upsample layer with a limited set of arguments. It is designed to be used with the UNet class and satisfies the ContextAwareModule interface.
- Parameters:
  - dims (int) – the number of dimensions
  - scale_factor (Union[Sequence[int], int]) – the upsample factor
  - mode (str) – the upsample mode (nearest, bilinear, etc.)
- property context: Tensor
The context is always 0 for upsampling.
- property dims: int
The number of dimensions.
- property equivariant_step: Tensor
The equivariant step is the inverse of the upsample factor for each dimension: an upsample factor of 4 means each one-pixel shift in the input shifts the output by 4 pixels.
- forward(x)
Apply the upsample operation to the input tensor.
- Parameters:
  - x (Tensor) – the input tensor
- Return type:
  Tensor
- property min_input_shape: Tensor
min_input_shape is equal to 1 for each dimension.
- property min_output_shape: Tensor
min_output_shape is equal to the upsample factor for each dimension.
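The shape bookkeeping for Upsample is the mirror image of Downsample. A sketch (not tems code; `upsample_shapes` is a hypothetical helper):

```python
def upsample_shapes(input_shape, scale_factor):
    # Nearest-neighbour (or constant) upsampling multiplies each
    # spatial dimension by the scale factor.
    return [s * f for s, f in zip(input_shape, scale_factor)]

factor = [4, 4]
print(upsample_shapes([2, 3], factor))  # [8, 12]
# min_input_shape is 1 per dimension; min_output_shape equals the factor:
print(upsample_shapes([1, 1], factor))  # [4, 4]
```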