emloop_tensorflow.models

Functions

  • cnn_encoder(): Build a convolutional neural network from the given encoder_config (sequence of block codes).
  • cnn_autoencoder(): Build a convolutional auto-encoder from the given encoder_config (sequence of layer/block codes).
  • rnn_stack(): Build a recurrent neural network stack from the given stack_config (sequence of block codes).
emloop_tensorflow.models.cnn_encoder(x, encoder_config, is_training=None, activation=<function relu>, conv_kwargs=None, bn_kwargs=None, ln_kwargs=None, skip_connections=None, use_bn=False, use_ln=False)[source]

Build a convolutional neural network from the given encoder_config (sequence of block codes).

At the moment, the following blocks are recognized:

  Block                      Code                                                         Example
  Convolutional layer        (num_filters)c[(time_kernel_size)-](kernel_size)[s(stride)]  64c3, 64c3s2, 64c3-3s2
  Average/Max pooling layer  (ap|mp)(kernel_size)[s(stride)]                               mp2, ap3s2
  Inception block            (num_filters)inc                                              128inc
  Residual block             (num_filters)res[s(stride)]                                   512ress2
Usage
import tensorflow as tf
from emloop_tensorflow.models import cnn_encoder

images = tf.placeholder(dtype=tf.float32, shape=(None, height, width, channels), name='images')
is_training = tf.placeholder(dtype=tf.bool, shape=[], name='is_training')
net = cnn_encoder(images, ['64c7s2', '128inc', '128inc', 'ap3s2', '256inc', '256inc'],
                  is_training, tf.nn.elu)
# use the encoded features here

Tip

The CNN encoder can be applied to both 4D and 5D tensors.
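
For a 5D input, a minimal sketch might look as follows; the [batch, time, height, width, channels] layout, the shapes and the block codes are only illustrative assumptions:

import tensorflow as tf
from emloop_tensorflow.models import cnn_encoder

# assumed 5D layout: [batch, time, height, width, channels]
videos = tf.placeholder(dtype=tf.float32, shape=(None, 16, 64, 64, 3), name='videos')
is_training = tf.placeholder(dtype=tf.bool, shape=[], name='is_training')

# 64c3-3 uses the optional (time_kernel_size)- prefix from the table above
net = cnn_encoder(videos, ['64c3-3', '128c3-3'], is_training)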

Skip connections:

When a skip_connections sequence is provided, this function appends the pre-pooling tensors to it. Conversely, if the sequence already contains tensors, they are popped and added to the post-unpooling tensors.
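
A minimal sketch of how the skip_connections list is filled (the config and shapes here are illustrative):

import tensorflow as tf
from emloop_tensorflow.models import cnn_encoder

images = tf.placeholder(dtype=tf.float32, shape=(None, 64, 64, 3), name='images')
is_training = tf.placeholder(dtype=tf.bool, shape=[], name='is_training')

skips = []  # mutable list shared with the decoding part of the network
encoded = cnn_encoder(images, ['32c3', 'mp2', '64c3', 'mp2'], is_training,
                      skip_connections=skips)
# skips now holds one pre-pooling tensor per pooling layer; when the same list
# already contains tensors, they are popped and added after each un-pooling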

Parameters:
  • x (Tensor) – 4 or 5 dim input tensor
  • encoder_config (Sequence[str]) – sequence of layer/block codes defining the CNN architecture
  • is_training (Optional[Tensor]) – training/eval phase indicator
  • activation (Callable[[Tensor], Tensor]) – activation function
  • conv_kwargs (Optional[dict]) – convolutional layers kwargs
  • bn_kwargs (Optional[dict]) – batch normalization layers kwargs
  • ln_kwargs (Optional[dict]) – layer normalization layers kwargs
  • skip_connections (Optional[List[Tensor]]) – mutable sequence of skip connection tensors around pooling operations
  • use_bn (bool) – add batch normalization layers after each convolution (including res and inception modules)
  • use_ln (bool) – add layer normalization layers after each convolution/module
Return type:

Tensor

Returns:

output tensor of the specified CNN when applied to the given input tensor

emloop_tensorflow.models.cnn_autoencoder(x, encoder_config, is_training=None, activation=<function relu>, conv_kwargs=None, bn_kwargs=None, ln_kwargs=None, skip_connections=True, use_bn=False, use_ln=False)[source]

Build a convolutional auto-encoder from the given encoder_config (sequence of layer/block codes).

For the list of supported layers, modules, etc., see the cnn_encoder() function.

The auto-encoder is constructed as follows:

  1. Create a CNN encoder based on the encoder config.
  2. Create a CNN decoder based on the reversed encoder config (reversed order, with un-pooling instead of pooling).

Warning

Does not support strided layers/modules and 5 dim tensors.
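
A minimal usage sketch respecting these constraints (non-strided config, 4D input); the config and shapes are illustrative only:

import tensorflow as tf
from emloop_tensorflow.models import cnn_autoencoder

images = tf.placeholder(dtype=tf.float32, shape=(None, 64, 64, 3), name='images')
is_training = tf.placeholder(dtype=tf.bool, shape=[], name='is_training')

# the decoder is derived from the reversed config, with un-pooling
# in place of the mp2 layers; encoded is the bottleneck tensor
encoded, decoded = cnn_autoencoder(images, ['32c3', 'mp2', '64c3', 'mp2'], is_training)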

Parameters:
  • x (Tensor) – 4 dim input tensor
  • encoder_config (Sequence[str]) – sequence of layer/block codes defining the CNN architecture
  • is_training (Optional[Tensor]) – training/eval phase indicator
  • activation (Callable[[Tensor], Tensor]) – activation function
  • conv_kwargs (Optional[dict]) – convolutional layers kwargs
  • bn_kwargs (Optional[dict]) – batch normalization layers kwargs
  • ln_kwargs (Optional[dict]) – layer normalization layers kwargs
  • skip_connections (bool) – include encoder-decoder skip connections around pooling operations
  • use_bn (bool) – add batch normalization layers after each convolution (including res and inception modules)
  • use_ln (bool) – add layer normalization layers after each convolution/module
Return type:

Tuple[Tensor, Tensor]

Returns:

a tuple of encoded tensor and output (decoded) tensor

Raises:
  • ValueError – if some of the layer configs cannot be correctly parsed
  • AssertionError – if the configuration does not meet the requirements
emloop_tensorflow.models.rnn_stack(x, stack_config, sequence_length=None)[source]

Build a recurrent neural network stack from the given stack_config (sequence of block codes).

At the moment, the following blocks are recognized:

  Block        Code                 Example
  Vanilla RNN  [bi]RNN(num_units)   biRNN64, RNN128
  GRU          [bi]GRU(num_units)   biGRU32, GRU64
  LSTM         [bi]LSTM(num_units)  biLSTM32, LSTM64
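
For example, a stack of the codes above could be built as follows (the layer codes and shapes are illustrative):

import tensorflow as tf
from emloop_tensorflow.models import rnn_stack

# batch-major inputs: [batch, max_time, features]
inputs = tf.placeholder(dtype=tf.float32, shape=(None, None, 40), name='inputs')
lengths = tf.placeholder(dtype=tf.int32, shape=(None,), name='sequence_lengths')

# two bi-directional LSTM layers followed by a uni-directional GRU layer
outputs = rnn_stack(inputs, ['biLSTM64', 'biLSTM64', 'GRU128'], sequence_length=lengths)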

Parameters:
  • x (Tensor) – 3-dim batch-major input tensor [batch, max_time, features]
  • stack_config (Sequence[str]) – a sequence of RNN layer codes defining the stack architecture
  • sequence_length (Optional[Tensor]) – optional tensor with sequence lengths for better performance
Return type:

Tensor

Returns:

3-dim batch-major output of the rnn stack
