NetworkCNNLayer

class NetworkCNNLayer.CNN(n_features=1, filter=1, d_row=-1, border_mode='valid', conv_stride=(1, 1), pool_size=(1, 1), filter_dilation=(1, 1), ignore_border=1, pool_stride=0, pool_padding=(0, 0), mode='max', activation='tanh', dropout=0.0, factor=1.0, base=None, transpose=False, force_sample=False, **kwargs)[source]
Parameters:
  • n_features (int) – integer; the number of feature maps, e.g. 32, 64, and so on. The input will be interpreted as (width|time, batch, height * n_in_features) and the output will be (width|time, batch, height * n_features).
  • filter (int|(int,int)) – integer or tuple of length 2; the filter size/shape, i.e. the number of rows and/or columns of the filter, e.g. 3, 5, (1,3), and so on. When given as an integer, the number of rows equals the number of columns.
  • d_row (int) – integer; the number of rows of the input. The default value is -1, in which case the dimension is taken from the n_out of the input. Otherwise, it only has to be set for the first convolutional layer; subsequent layers use the number of rows from the previous layer.
  • border_mode (str) –

    string; the border mode of the convolution (the resulting shapes are illustrated in the sketch after this parameter list):

    “valid” – only apply the filter to complete patches of the image. Generates output of shape (image_shape - filter_shape + 1).

    “full” – zero-pad the image to a multiple of the filter shape. Generates output of shape (image_shape + filter_shape - 1).

    “same” – keep the dimension of the convolutional layer output the same as the input dimension.

  • conv_stride ((int,int)) – tuple of length 2; factor by which to subsample the convolutional layer output, written as (rows, columns).
  • pool_size ((int,int)) – tuple of length 2; factor by which to downscale in the pooling layer, written as (rows, columns). For example, (2,2) halves the input in each dimension.
  • filter_dilation ((int,int)) – tuple of length 2; dilation factor of the filter, i.e. the spacing between filter taps when sampling the convolutional layer input.
  • ignore_border (int|bool) – integer or boolean. With 1 or True, a (5, 5) input with pool_size = (2, 2) will generate a (2, 2) pooling layer output; with 0 or False, it will generate a (3, 3) pooling layer output.
  • pool_stride ((int,int)) – tuple of length 2; stride size, i.e. the number of shifts over rows/columns to get the next pooling region. The default value is 0, which sets the stride equal to pool_size, so the pooling regions do not overlap.
  • pool_padding ((int,int)) – tuple of length 2; pad with zeros to extend beyond the four borders of the image, written as (pad_h, pad_w), where pad_h is the size of the top and bottom margins and pad_w is the size of the left and right margins.
  • mode (str) – string; pooling mode (padding is excluded from the computation): “max” – max pooling, “sum” – sum pooling, “avg” – average pooling, “fmp” – fractional max pooling.
  • activation (str) – string; activation function, e.g. “tanh”, “sigmoid”, “relu”, “elu”, “maxout”, and so on.
  • factor (float) – float; factor by which to scale the initial weights.
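
The shape rules above can be checked with plain Python. The helpers below are only a sketch of the arithmetic implied by border_mode, conv_stride, pool_size, pool_stride and ignore_border; the function names are hypothetical and this is not the layer's actual get_dim() implementation (the non-ignore_border branch follows the usual Theano pooling convention).

    # Illustration only: output-shape arithmetic for one spatial dimension.
    def conv_out_dim(image, filt, border_mode="valid", stride=1):
        if border_mode == "valid":
            out = image - filt + 1        # (image_shape - filter_shape + 1)
        elif border_mode == "full":
            out = image + filt - 1        # (image_shape + filter_shape - 1)
        elif border_mode == "same":
            out = image                   # output keeps the input dimension
        else:
            raise ValueError(border_mode)
        return (out + stride - 1) // stride  # subsampling by conv_stride

    def pool_out_dim(image, pool, ignore_border=True, stride=None):
        stride = stride or pool               # pool_stride = 0 -> stride = pool_size
        if ignore_border:
            return (image - pool) // stride + 1       # drop incomplete border regions
        if stride >= pool:
            return (image + stride - 1) // stride     # keep the partial border region
        return max(0, (image - pool + stride - 1) // stride) + 1

    print(conv_out_dim(5, 3, "valid"))              # 3
    print(conv_out_dim(5, 3, "full"))               # 7
    print(pool_out_dim(5, 2, ignore_border=True))   # 2, as in the ignore_border example
    print(pool_out_dim(5, 2, ignore_border=False))  # 3
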
recurrent = True[source]
get_status(sources)[source]
get_dim(input, filters, pools, border_mode, stride, pool_stride, ignore_border, pad)[source]
calculate_index(inputs)[source]
calculate_dropout(dropout, inputs)[source]
convolution(inputs, filter_shape, stride, border_mode, factor, pool_size, filter_dilation)[source]
pooling(inputs, pool_size, ignore_border, stride, pad, mode)[source]
bias_term(inputs, n_features, activation)[source]
run_cnn(inputs, filter_shape, filter_dilation, params, modes, others)[source]
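
For orientation, the following is a minimal Theano sketch of the conv → pool → bias → activation pipeline that convolution(), pooling(), bias_term() and run_cnn() roughly correspond to. The (batch, channels, height, width) layout, the variable names and the fixed parameter values are assumptions for illustration, not the layer's actual code (which works on the (width|time, batch, height * n_features) layout described above).

    import numpy
    import theano
    import theano.tensor as T
    from theano.tensor.nnet import conv2d
    from theano.tensor.signal.pool import pool_2d

    # Assumed illustrative layout: (batch, channels, height, width).
    x = T.tensor4("x", dtype="float32")
    filter_shape = (32, 1, 3, 3)  # n_features=32, 1 input channel, 3x3 filter
    W = theano.shared(numpy.random.randn(*filter_shape).astype("float32"), name="W")
    b = theano.shared(numpy.zeros((32,), dtype="float32"), name="b")

    conv_out = conv2d(x, W, filter_shape=filter_shape,
                      border_mode="valid", subsample=(1, 1),     # conv_stride
                      filter_dilation=(1, 1))                    # filter_dilation
    pool_out = pool_2d(conv_out, ws=(2, 2), ignore_border=True,  # pool_size, ignore_border
                       stride=None, pad=(0, 0), mode="max")      # pool_stride, pool_padding, mode
    y = T.tanh(pool_out + b.dimshuffle("x", 0, "x", "x"))        # bias_term + activation

    f = theano.function([x], y)
    print(f(numpy.zeros((4, 1, 28, 28), dtype="float32")).shape)  # (4, 32, 13, 13)
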
class NetworkCNNLayer.NewConv(**kwargs)[source]
layer_class = 'conv'[source]

This class is for a standard CNN and for Inception-style layers.
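
layer_class = 'conv' suggests that this layer is selected via the "class" key of a network definition. Below is a minimal, hypothetical config fragment using only parameters from the CNN signature above; the surrounding dict format and the "from" key are assumptions, not taken from this module.

    # Hypothetical network-config fragment for a NewConv ("conv") layer.
    network = {
        "conv1": {
            "class": "conv",       # -> NetworkCNNLayer.NewConv
            "n_features": 32,      # number of feature maps
            "filter": 3,           # 3x3 filter
            "d_row": 40,           # input height, only needed for the first conv layer
            "pool_size": (2, 2),
            "border_mode": "valid",
            "activation": "relu",
            "dropout": 0.0,
            "from": ["data"],      # assumed input key
        },
    }
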

class NetworkCNNLayer.ConcatConv(padding=False, **kwargs)[source]
layer_class = 'conv_1d'[source]

This class is for a CNN that processes an entire line image as input by concatenating several frames along the time axis.
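
The frame-concatenation idea can be pictured with a small numpy sketch; the exact axis layout and the handling of the padding option in ConcatConv are assumptions here.

    import numpy

    # Assumed layout: a sequence of T frames, each (batch, height), e.g. column
    # slices of a text-line image. Stacking them along the time axis rebuilds the
    # line image, which a 2D convolution can then process as a whole.
    T_, batch, height = 100, 4, 32
    frames = numpy.random.rand(T_, batch, height).astype("float32")  # (time, batch, height)
    line_image = frames.transpose(1, 2, 0)                           # (batch, height, time=width)
    line_image = line_image[:, numpy.newaxis, :, :]                  # (batch, 1, height, width)
    print(line_image.shape)                                          # (4, 1, 32, 100)
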

class NetworkCNNLayer.ResNet(**kwargs)[source]
layer_class = 'resnet'[source]

This class is for a ResNet (residual) connection.
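
In general, a residual (ResNet) connection adds the block input back onto the block output. Below is a minimal numpy sketch of that idea with a stand-in transform; the actual wiring inside this layer class is not shown here.

    import numpy

    def residual_block(x, transform):
        """Generic residual connection: output = transform(x) + x (shapes must match)."""
        return transform(x) + x

    x = numpy.random.rand(4, 32).astype("float32")
    y = residual_block(x, numpy.tanh)  # stand-in for the conv/activation sub-block
    print(y.shape)                     # (4, 32)
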