orcanet.builder_util.layer_blocks

Module Contents

Classes

ConvBlock

1D/2D/3D Convolutional block followed by BatchNorm, Activation, MaxPooling and/or Dropout.

DenseBlock

Dense layer followed by BatchNorm, Activation and/or Dropout.

ResnetBlock

A residual building block for resnets: two conv layers with a shortcut.

ResnetBnetBlock

A residual bottleneck building block for resnets.

InceptionBlockV2

A GoogLeNet Inception block (v2).

OutputReg

Dense layer(s) for regression.

OutputRegNormal

Output block for regression using a normal distribution as output.

OutputRegNormalSplit

Output block for regression using a normal distribution as output.

OutputCateg

Dense layer(s) for categorization.

OutputRegErr

Double network for regression + error estimation.

class orcanet.builder_util.layer_blocks.ConvBlock(conv_dim, filters, kernel_size=3, strides=1, padding='same', pool_type='max_pooling', pool_size=None, pool_padding='valid', dropout=None, sdropout=None, activation='relu', kernel_l2_reg=None, batchnorm=False, kernel_initializer='he_normal', time_distributed=False, dilation_rate=1)[source]

1D/2D/3D Convolutional block followed by BatchNorm, Activation, MaxPooling and/or Dropout.

Parameters
conv_dim : int

Specifies the dimension of the convolutional block, 1D/2D/3D.

filters : int

Number of filters used for the convolutional layer.

strides : int or tuple

The stride length of the convolution.

padding : str or int or list

If str: padding mode of the conv layer. If int or list: padding argument of a ZeroPaddingND layer that gets added before the convolution.

kernel_size : int or tuple

Kernel size which is used for all dimensions.

pool_size : None or int or tuple

Specifies the pool size for the pooling layer, e.g. (1, 1, 2) -> sizes for a 3D conv block. If it is None, no pooling layer is added, except when global average pooling is used.

pool_type : str, optional

The type of pooling layer to add. Ignored if pool_size is None. Can be max_pooling (default), average_pooling, or global_average_pooling.

pool_padding : str

Padding option of the pooling layer.

dropout : float or None

Adds a dropout layer if the value is not None. Cannot be used together with sdropout. Hint: 0 will add a dropout layer, but with a rate of 0 (= no dropout).

sdropout : float or None

Adds a spatial dropout layer if the value is not None. Cannot be used together with dropout.

activation : str or None

Type of activation function that should be used, e.g. ‘linear’, ‘relu’, ‘elu’, ‘selu’.

kernel_l2_reg : float, optional

Regularization factor of the l2 regularizer for the weights.

batchnorm : bool

Adds a batch normalization layer.

kernel_initializer : str

Initializer for the kernel weights.

time_distributed : bool

If True, apply the TimeDistributed wrapper around all layers.

dilation_rate : int

An integer or tuple/list of a single integer, specifying the dilation rate to use for dilated convolution. Currently, specifying any dilation_rate value != 1 is incompatible with specifying any strides value != 1.
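The options above essentially toggle which layers get chained behind the convolution. A minimal schematic sketch (stage names and helper are illustrative, not orcanet's actual implementation) of the documented ordering Conv -> BatchNorm -> Activation -> Pooling -> Dropout:

```python
def conv_block_stages(pool_size=None, pool_type="max_pooling",
                      dropout=None, sdropout=None,
                      activation="relu", batchnorm=False):
    """Illustrative only: list the layer stages a ConvBlock-style
    block would chain together, based on the documented options."""
    if dropout is not None and sdropout is not None:
        raise ValueError("dropout and sdropout cannot be combined")
    stages = ["Conv"]
    if batchnorm:
        stages.append("BatchNorm")
    if activation is not None:
        stages.append("Activation")
    # pooling is skipped for pool_size=None, except for global average pooling
    if pool_size is not None or pool_type == "global_average_pooling":
        stages.append(pool_type)
    if dropout is not None:
        stages.append("Dropout")
    elif sdropout is not None:
        stages.append("SpatialDropout")
    return stages

print(conv_block_stages(pool_size=2, batchnorm=True, dropout=0.1))
# ['Conv', 'BatchNorm', 'Activation', 'max_pooling', 'Dropout']
```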

class orcanet.builder_util.layer_blocks.DenseBlock(units, dropout=None, activation='relu', kernel_l2_reg=None, batchnorm=False, kernel_initializer='he_normal')[source]

Dense layer followed by BatchNorm, Activation and/or Dropout.

Parameters
units : int

Number of neurons of the dense layer.

dropout : float or None

Adds a dropout layer if the value is not None.

activation : str or None

Type of activation function that should be used, e.g. ‘linear’, ‘relu’, ‘elu’, ‘selu’.

kernel_l2_reg : float, optional

Regularization factor of the l2 regularizer for the weights.

batchnorm : bool

Adds a batch normalization layer.

kernel_initializer : str

Initializer for the kernel weights.

class orcanet.builder_util.layer_blocks.ResnetBlock(conv_dim, filters, strides=1, kernel_size=3, activation='relu', batchnorm=False, kernel_initializer='he_normal', time_distributed=False)[source]

A residual building block for resnets: two conv layers with a shortcut. https://arxiv.org/pdf/1605.07146.pdf

Parameters
conv_dim : int

Specifies the dimension of the convolutional block, 2D/3D.

filters : int

Number of filters used for the convolutional layers.

strides : int or tuple

The stride length of the convolution. If strides is 1, this is the identity block; otherwise, a conv block is used on the shortcut.

kernel_size : int or tuple

Kernel size which is used for all dimensions.

activation : str or None

Type of activation function that should be used, e.g. ‘linear’, ‘relu’, ‘elu’, ‘selu’.

batchnorm : bool

Adds a batch normalization layer.

kernel_initializer : str

Initializer for the kernel weights.

time_distributed : bool

If True, apply the TimeDistributed wrapper around all layers.
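The shortcut logic can be illustrated with a toy stand-in for the conv layers (numpy arrays instead of tensors; the 0.5 scaling is an arbitrary placeholder for a learned transformation, not orcanet code):

```python
import numpy as np

def resnet_block_sketch(x, strides=1):
    """Toy residual block: two stand-in 'conv' stages plus a shortcut.
    With strides == 1 the shortcut is the identity; otherwise a strided
    projection 'conv' makes the shapes match before the addition."""
    def conv(t, stride=1):        # placeholder for a real conv layer
        return t[::stride] * 0.5  # downsample + fixed linear transform
    h = conv(conv(x, stride=strides))      # first conv carries the stride
    shortcut = x if strides == 1 else conv(x, stride=strides)
    return np.maximum(h + shortcut, 0.0)   # relu(residual + shortcut)

x = np.arange(4.0)
print(resnet_block_sketch(x, strides=1))  # identity shortcut: 0.25*x + x
print(resnet_block_sketch(x, strides=2))  # projection shortcut, half the length
```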

class orcanet.builder_util.layer_blocks.ResnetBnetBlock(conv_dim, filters, strides=1, kernel_size=3, activation='relu', batchnorm=False, kernel_initializer='he_normal')[source]

A residual bottleneck building block for resnets. https://arxiv.org/pdf/1605.07146.pdf

Parameters
conv_dim : int

Specifies the dimension of the convolutional block, 2D/3D.

filters : list

Number of filters used for the convolutional layers. Must have length 3; the first and third entries are the filters for the 1x1 convolutions.

strides : int or tuple

The stride length of the convolution. If strides is 1, this is the identity block; otherwise, a conv block is used on the shortcut.

kernel_size : int or tuple

Kernel size which is used for all dimensions.

activation : str or None

Type of activation function that should be used, e.g. ‘linear’, ‘relu’, ‘elu’, ‘selu’.

batchnorm : bool

Adds a batch normalization layer.

kernel_initializer : str

Initializer for the kernel weights.
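How the three entries of filters map onto the bottleneck's convolutions can be sketched like this (a schematic helper, not part of orcanet):

```python
def bottleneck_plan(filters, kernel_size=3):
    """Schematic: map the length-3 filters list onto the bottleneck's
    three convolutions (1x1 reduce -> kxk -> 1x1 expand)."""
    if len(filters) != 3:
        raise ValueError("filters must have length 3")
    reduce_f, mid_f, expand_f = filters
    return [
        ("1x1", reduce_f),                        # cheap channel reduction
        (f"{kernel_size}x{kernel_size}", mid_f),  # the expensive spatial conv
        ("1x1", expand_f),                        # expand channels back out
    ]

print(bottleneck_plan([16, 16, 64]))
# [('1x1', 16), ('3x3', 16), ('1x1', 64)]
```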

class orcanet.builder_util.layer_blocks.InceptionBlockV2(conv_dim, filters_1x1, filters_pool, filters_3x3, filters_3x3dbl, strides=1, activation='relu', batchnorm=False, dropout=None)[source]

A GoogLeNet Inception block (v2). https://arxiv.org/pdf/1512.00567v3.pdf, see fig. 5. For a Keras implementation see, e.g.: https://github.com/keras-team/keras-applications/blob/master/keras_applications/inception_resnet_v2.py

Parameters
conv_dim : int

Specifies the dimension of the convolutional block, 1D/2D/3D.

filters_1x1 : int or None

No. of filters for the 1x1 convolutional branch. If None, this branch is not built.

filters_pool : int or None

No. of filters for the pooling branch. If None, this branch is not built.

filters_3x3 : tuple or None

No. of filters for the 3x3 convolutional branch. The first int is the number of filters in the 1x1 conv, the second in the 3x3 conv. The first should be chosen smaller for computational efficiency. If None, this branch is not built.

filters_3x3dbl : tuple or None

No. of filters for the double 3x3 convolutional branch. The first int is the number of filters in the 1x1 conv, the second in the two 3x3 convs. The first should be chosen smaller for computational efficiency. If None, this branch is not built.

strides : int or tuple

Stride length of this block. As in the keras implementation, 1x1 convs with stride > 1 are skipped rather than used.
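Since the enabled branches are concatenated along the channel axis (the standard Inception pattern), the output channel count is the sum over the last conv of each branch. A sketch of that arithmetic (illustrative helper, not orcanet API; it assumes the pooling branch contributes filters_pool channels):

```python
def inception_output_channels(filters_1x1, filters_pool,
                              filters_3x3, filters_3x3dbl):
    """Illustrative: output channels of an Inception-style block that
    concatenates its branches; None disables a branch."""
    total = 0
    if filters_1x1 is not None:
        total += filters_1x1
    if filters_pool is not None:
        total += filters_pool
    if filters_3x3 is not None:
        total += filters_3x3[1]     # (bottleneck, 3x3) -> 3x3 conv filters
    if filters_3x3dbl is not None:
        total += filters_3x3dbl[1]  # (bottleneck, 3x3 x2) -> 3x3 conv filters
    return total

print(inception_output_channels(64, 32, (48, 96), (16, 32)))
# 64 + 32 + 96 + 32 = 224
```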

class orcanet.builder_util.layer_blocks.OutputReg(output_neurons, output_name, unit_list=None, transition='keras:Flatten', **kwargs)[source]

Dense layer(s) for regression.

Parameters
output_neurons : int

Number of neurons in the last layer.

output_name : str or None

Name that will be given to the output layer of the network.

unit_list : list, optional

A list of ints. Adds additional Dense layers after the transition layer with this many units in them. E.g., [64, 32] would add two Dense layers, the first with 64 neurons, the second with 32 neurons.

transition : str or None

Name of a layer that will be used as the first layer of this block. Examples: ‘keras:GlobalAveragePooling2D’, ‘keras:Flatten’.

kwargs

Keywords for the dense blocks that get added if unit_list is not None.
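The resulting layer sequence can be sketched as follows (layer names are schematic placeholders, not orcanet internals):

```python
def output_reg_layers(output_neurons, unit_list=None,
                      transition="keras:Flatten"):
    """Illustrative: layer sequence an OutputReg-style block builds:
    optional transition, then the unit_list Dense layers, then the
    final Dense output layer."""
    layers = []
    if transition is not None:
        layers.append(transition)
    for units in (unit_list or []):
        layers.append(f"Dense({units})")
    layers.append(f"Dense({output_neurons})")  # the named output layer
    return layers

print(output_reg_layers(2, unit_list=[64, 32]))
# ['keras:Flatten', 'Dense(64)', 'Dense(32)', 'Dense(2)']
```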

class orcanet.builder_util.layer_blocks.OutputRegNormal(output_neurons, output_name, unit_list=None, mu_activation=None, sigma_activation='softplus', transition=None, **kwargs)[source]

Output block for regression using a normal distribution as output.

The output tensor will have shape (?, 2, output_neurons), with [:, 0] being the mu and [:, 1] being the sigma.

Parameters
mu_activation : str, optional

Activation function for the mu neurons.

sigma_activation : str, optional

Activation function for the sigma neurons.

See OutputReg for other parameters.
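The (?, 2, output_neurons) layout can be illustrated with plain numpy arrays (a sketch of the output shape only, not the orcanet block itself):

```python
import numpy as np

def stack_mu_sigma(mu, sigma):
    """Illustrative: mu and sigma each have shape (batch, output_neurons);
    stacking them on a new axis 1 gives the (?, 2, output_neurons)
    layout, with [:, 0] = mu and [:, 1] = sigma."""
    return np.stack([mu, sigma], axis=1)

mu = np.zeros((8, 3))      # batch of 8, 3 regression targets
sigma = np.ones((8, 3))
out = stack_mu_sigma(mu, sigma)
print(out.shape)           # (8, 2, 3)
```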
class orcanet.builder_util.layer_blocks.OutputRegNormalSplit(*args, sigma_unit_list=None, **kwargs)[source]

Output block for regression using a normal distribution as output.

The sigma is produced by its own tower of dense layers, separated from the rest of the network via a gradient stop.

The output is a list of two tensors:

- The first is the mu, with shape (?, output_neurons) and name output_name.
- The second is mu + sigma, with shape (?, 2, output_neurons), where [:, 0] is the mu and [:, 1] is the sigma. Its name is output_name + ‘_err’.

Parameters
sigma_unit_list : list, optional

A list of ints. Neurons in the Dense layers for the tower that outputs the sigma. E.g., [64, 32] would add two Dense layers, the first with 64 neurons, the second with 32 neurons. Default: same as unit_list.

See OutputRegNormal for other parameters.
class orcanet.builder_util.layer_blocks.OutputCateg(categories, output_name, unit_list=None, transition='keras:Flatten', **kwargs)[source]

Dense layer(s) for categorization.

Parameters
categories : int

Number of categories (= neurons in the last layer).

output_name : str

Name that will be given to the output layer of the network.

unit_list : list, optional

A list of ints. Adds additional Dense layers after the transition layer with this many units in them. E.g., [64, 32] would add two Dense layers, the first with 64 neurons, the second with 32 neurons.

transition : str or None

Name of a layer that will be used as the first layer of this block. Examples: ‘keras:GlobalAveragePooling2D’, ‘keras:Flatten’.

kwargs

Keywords for the dense blocks that get added if unit_list is not None.

class orcanet.builder_util.layer_blocks.OutputRegErr(output_names, flatten=True, **kwargs)[source]

Double network for regression + error estimation.

It has 3 dense layer blocks followed by one dense layer for each output_name, as well as separate dense layer blocks followed by one dense layer for the respective error of each output_name.

Parameters
output_names : list

List of strs, the output names; each gets one value neuron plus one error neuron.

flatten : bool

If True, start with a flatten layer.

kwargs

Keywords for the dense blocks.