:py:mod:`orcanet.builder_util.layer_blocks`
===========================================

.. py:module:: orcanet.builder_util.layer_blocks


Module Contents
---------------

Classes
~~~~~~~

.. autoapisummary::

   orcanet.builder_util.layer_blocks.ConvBlock
   orcanet.builder_util.layer_blocks.DenseBlock
   orcanet.builder_util.layer_blocks.ResnetBlock
   orcanet.builder_util.layer_blocks.ResnetBnetBlock
   orcanet.builder_util.layer_blocks.InceptionBlockV2
   orcanet.builder_util.layer_blocks.OutputReg
   orcanet.builder_util.layer_blocks.OutputRegNormal
   orcanet.builder_util.layer_blocks.OutputRegNormalSplit
   orcanet.builder_util.layer_blocks.OutputCateg
   orcanet.builder_util.layer_blocks.OutputRegErr


.. py:class:: ConvBlock(conv_dim, filters, kernel_size=3, strides=1, padding='same', pool_type='max_pooling', pool_size=None, pool_padding='valid', dropout=None, sdropout=None, activation='relu', kernel_l2_reg=None, batchnorm=False, kernel_initializer='he_normal', time_distributed=False, dilation_rate=1)

   1D/2D/3D convolutional block, followed by BatchNorm, Activation, Pooling and/or Dropout.

   :Parameters:

       **conv_dim** : int
           Dimension of the convolutional block: 1D, 2D or 3D.

       **filters** : int
           Number of filters used for the convolutional layer.

       **kernel_size** : int or tuple
           Kernel size, used for all dimensions.

       **strides** : int or tuple
           The stride length of the convolution.

       **padding** : str or int or list
           If str: padding mode of the conv layer.
           If int or list: padding argument of a ZeroPaddingND layer that is added before the convolution.

       **pool_type** : str, optional
           The type of pooling layer to add. Ignored if pool_size is None.
           One of 'max_pooling' (default), 'average_pooling', or 'global_average_pooling'.

       **pool_size** : None or int or tuple
           Pool size for the pooling layer, e.g. (1, 1, 2) for a 3D conv block.
           If None, no pooling is added, except when global average pooling is used.

       **pool_padding** : str
           Padding mode of the pooling layer.

       **dropout** : float or None
           Adds a dropout layer if the value is not None. Cannot be used together with sdropout.
           Hint: 0 will add a dropout layer, but with a rate of 0 (= no dropout).

       **sdropout** : float or None
           Adds a spatial dropout layer if the value is not None. Cannot be used together with dropout.

       **activation** : str or None
           Activation function to use, e.g. 'linear', 'relu', 'elu', 'selu'.

       **kernel_l2_reg** : float, optional
           Factor of the l2 regularizer applied to the kernel weights.

       **batchnorm** : bool
           Adds a batch normalization layer.

       **kernel_initializer** : str
           Initializer for the kernel weights.

       **time_distributed** : bool
           If True, apply the TimeDistributed wrapper around all layers.

       **dilation_rate** : int
           An integer or tuple/list of a single integer, specifying the dilation rate for dilated convolution. Currently, specifying any dilation_rate value != 1 is incompatible with specifying any strides value != 1.


.. py:class:: DenseBlock(units, dropout=None, activation='relu', kernel_l2_reg=None, batchnorm=False, kernel_initializer='he_normal')

   Dense layer, followed by BatchNorm, Activation and/or Dropout.

   :Parameters:

       **units** : int
           Number of neurons of the dense layer.

       **dropout** : float or None
           Adds a dropout layer if the value is not None.

       **activation** : str or None
           Activation function to use, e.g. 'linear', 'relu', 'elu', 'selu'.

       **kernel_l2_reg** : float, optional
           Factor of the l2 regularizer applied to the kernel weights.

       **batchnorm** : bool
           Adds a batch normalization layer.

       **kernel_initializer** : str
           Initializer for the kernel weights.
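The blocks in this module are applied to Keras tensors like ordinary layers. The following is a minimal usage sketch (not part of the generated API reference above); the input shape and all hyperparameters are illustrative assumptions, not values prescribed by the library.

.. code-block:: python

    import tensorflow as tf
    from orcanet.builder_util.layer_blocks import ConvBlock, DenseBlock

    # Illustrative 3D input, e.g. a binned detector image with one channel.
    inp = tf.keras.Input(shape=(11, 13, 18, 1))

    # 3D conv block: Convolution -> BatchNorm -> ReLU -> MaxPooling -> Dropout.
    x = ConvBlock(
        conv_dim=3, filters=32, kernel_size=3,
        pool_size=(2, 2, 2), batchnorm=True, dropout=0.1,
    )(inp)

    x = tf.keras.layers.Flatten()(x)

    # Dense block: Dense -> BatchNorm -> ReLU -> Dropout.
    x = DenseBlock(units=64, dropout=0.2, batchnorm=True)(x)

    model = tf.keras.Model(inp, x)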
.. py:class:: ResnetBlock(conv_dim, filters, strides=1, kernel_size=3, activation='relu', batchnorm=False, kernel_initializer='he_normal', time_distributed=False)

   A residual building block for resnets: two convolutional layers with a shortcut.

   https://arxiv.org/pdf/1605.07146.pdf

   :Parameters:

       **conv_dim** : int
           Dimension of the convolutional block: 2D or 3D.

       **filters** : int
           Number of filters used for the convolutional layers.

       **strides** : int or tuple
           The stride length of the convolution. If strides is 1, this is the identity block; otherwise, it has a conv block at the shortcut.

       **kernel_size** : int or tuple
           Kernel size, used for all dimensions.

       **activation** : str or None
           Activation function to use, e.g. 'linear', 'relu', 'elu', 'selu'.

       **batchnorm** : bool
           Adds a batch normalization layer.

       **kernel_initializer** : str
           Initializer for the kernel weights.

       **time_distributed** : bool
           If True, apply the TimeDistributed wrapper around all layers.


.. py:class:: ResnetBnetBlock(conv_dim, filters, strides=1, kernel_size=3, activation='relu', batchnorm=False, kernel_initializer='he_normal')

   A residual bottleneck building block for resnets.

   https://arxiv.org/pdf/1605.07146.pdf

   :Parameters:

       **conv_dim** : int
           Dimension of the convolutional block: 2D or 3D.

       **filters** : list
           Number of filters used for the convolutional layers. Has to be of length 3; the first and third entries are for the 1x1 convolutions.

       **strides** : int or tuple
           The stride length of the convolution. If strides is 1, this is the identity block; otherwise, it has a conv block at the shortcut.

       **kernel_size** : int or tuple
           Kernel size, used for all dimensions.

       **activation** : str or None
           Activation function to use, e.g. 'linear', 'relu', 'elu', 'selu'.

       **batchnorm** : bool
           Adds a batch normalization layer.

       **kernel_initializer** : str
           Initializer for the kernel weights.


.. py:class:: InceptionBlockV2(conv_dim, filters_1x1, filters_pool, filters_3x3, filters_3x3dbl, strides=1, activation='relu', batchnorm=False, dropout=None)

   A GoogLeNet Inception block (v2).

   https://arxiv.org/pdf/1512.00567v3.pdf, see fig. 5.
   Keras implementation, e.g.:
   https://github.com/keras-team/keras-applications/blob/master/keras_applications/inception_resnet_v2.py

   :Parameters:

       **conv_dim** : int
           Dimension of the convolutional block: 1D, 2D or 3D.

       **filters_1x1** : int or None
           Number of filters for the 1x1 convolutional branch. If None, this branch is not built.

       **filters_pool** : int or None
           Number of filters for the pooling branch. If None, this branch is not built.

       **filters_3x3** : tuple or None
           Number of filters for the 3x3 convolutional branch. The first int is the number of filters of the 1x1 conv, the second that of the 3x3 conv. The first should be chosen smaller for computational efficiency. If None, this branch is not built.

       **filters_3x3dbl** : tuple or None
           Number of filters for the double 3x3 convolutional branch. The first int is the number of filters of the 1x1 conv, the second that of the two 3x3 convs. The first should be chosen smaller for computational efficiency. If None, this branch is not built.

       **strides** : int or tuple
           Stride length of this block. As in the Keras implementation, no 1x1 convs with stride > 1 are used; instead, they are skipped.
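A short sketch of how the residual and inception blocks above might be stacked; all shapes and filter counts are illustrative assumptions. Note that with strides=1 the ResnetBlock uses an identity shortcut, so its filters should match the number of input channels for the residual addition to work.

.. code-block:: python

    import tensorflow as tf
    from orcanet.builder_util.layer_blocks import ResnetBlock, InceptionBlockV2

    inp = tf.keras.Input(shape=(64, 64, 3))

    # strides != 1: a conv block sits on the shortcut, so the number of
    # channels may change here (3 -> 32).
    x = ResnetBlock(conv_dim=2, filters=32, strides=2)(inp)

    # strides == 1: identity shortcut; filters matches the input channels.
    x = ResnetBlock(conv_dim=2, filters=32)(x)

    # Four-branch inception block; the concatenated output has
    # 16 + 16 + 16 + 16 = 64 channels.
    x = InceptionBlockV2(
        conv_dim=2, filters_1x1=16, filters_pool=16,
        filters_3x3=(8, 16), filters_3x3dbl=(8, 16),
    )(x)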
.. py:class:: OutputReg(output_neurons, output_name, unit_list=None, transition='keras:Flatten', **kwargs)

   Dense layer(s) for regression.

   :Parameters:

       **output_neurons** : int
           Number of neurons in the last layer.

       **output_name** : str or None
           Name that will be given to the output layer of the network.

       **unit_list** : list, optional
           A list of ints. Adds additional Dense layers after the transition layer, with this many units in them. E.g., [64, 32] would add two Dense layers, the first with 64 neurons, the second with 32 neurons.

       **transition** : str or None
           Name of a layer that will be used as the first layer of this block. Examples: 'keras:GlobalAveragePooling2D', 'keras:Flatten'.

       **kwargs**
           Keywords for the dense blocks that get added if unit_list is not None.


.. py:class:: OutputRegNormal(output_neurons, output_name, unit_list=None, mu_activation=None, sigma_activation='softplus', transition=None, **kwargs)

   Output block for regression which uses a normal distribution as output.

   The output tensor will have shape (?, 2, output_neurons), with [:, 0] being the mu and [:, 1] being the sigma.

   :Parameters:

       **mu_activation** : str, optional
           Activation function for the mu neurons.

       **sigma_activation** : str, optional
           Activation function for the sigma neurons.

       See OutputReg for the other parameters.


.. py:class:: OutputRegNormalSplit(*args, sigma_unit_list=None, **kwargs)

   Output block for regression which uses a normal distribution as output.

   The sigma is produced by its own tower of dense layers that is separated from the rest of the network via a gradient stop.

   The output is a list of two tensors:

   - The first is the mu, with shape (?, output_neurons) and name output_name.
   - The second is mu + sigma, with shape (?, 2, output_neurons), with [:, 0] being the mu and [:, 1] being the sigma. Its name is output_name + '_err'.

   :Parameters:

       **sigma_unit_list** : list, optional
           A list of ints. Neurons in the Dense layers of the tower that outputs the sigma. E.g., [64, 32] would add two Dense layers, the first with 64 neurons, the second with 32 neurons. Default: same as unit_list.

       See OutputRegNormal for the other parameters.


.. py:class:: OutputCateg(categories, output_name, unit_list=None, transition='keras:Flatten', **kwargs)

   Dense layer(s) for categorization.

   :Parameters:

       **categories** : int
           Number of categories (= neurons in the last layer).

       **output_name** : str
           Name that will be given to the output layer of the network.

       **unit_list** : list, optional
           A list of ints. Adds additional Dense layers after the transition layer, with this many units in them. E.g., [64, 32] would add two Dense layers, the first with 64 neurons, the second with 32 neurons.

       **transition** : str or None
           Name of a layer that will be used as the first layer of this block. Examples: 'keras:GlobalAveragePooling2D', 'keras:Flatten'.

       **kwargs**
           Keywords for the dense blocks that get added if unit_list is not None.


.. py:class:: OutputRegErr(output_names, flatten=True, **kwargs)

   Double network for regression + error estimation.

   It has three dense layer blocks, followed by one dense layer for each output_name, as well as dense layer blocks followed by one dense layer for the respective error of each output_name.

   :Parameters:

       **output_names** : list
           List of strs, the output names, each with one neuron + one error neuron.

       **flatten** : bool
           If True, start with a flatten layer.

       **kwargs**
           Keywords for the dense blocks.
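The output blocks are typically attached to the end of a shared network body. Below is a minimal sketch combining OutputCateg and OutputReg into a two-headed model; the output names, unit lists, and loss choices are illustrative assumptions, not part of the API above.

.. code-block:: python

    import tensorflow as tf
    from orcanet.builder_util.layer_blocks import (
        ConvBlock, OutputCateg, OutputReg)

    inp = tf.keras.Input(shape=(32, 32, 1))
    x = ConvBlock(conv_dim=2, filters=16, pool_size=2)(inp)

    # Classification head: Flatten -> Dense(64) -> Dense(32) -> Dense(2).
    categ = OutputCateg(
        categories=2, output_name="is_signal", unit_list=[64, 32])(x)

    # Regression head: global average pooling -> Dense(64) -> Dense(1).
    reg = OutputReg(
        output_neurons=1, output_name="energy", unit_list=[64],
        transition="keras:GlobalAveragePooling2D")(x)

    model = tf.keras.Model(inp, [categ, reg])
    # Losses are matched to the heads via the output layer names.
    model.compile(
        optimizer="adam",
        loss={"is_signal": "categorical_crossentropy", "energy": "mae"},
    )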