# opennmt.decoders.rnn_decoder module

Define RNN-based decoders.

class opennmt.decoders.rnn_decoder.RNNDecoder(num_layers, num_units, bridge=None, cell_class=None, dropout=0.3, residual_connections=False)[source]

A basic RNN decoder.

__init__(num_layers, num_units, bridge=None, cell_class=None, dropout=0.3, residual_connections=False)[source]

Initializes the decoder parameters.

Parameters:

- **num_layers** – The number of layers.
- **num_units** – The number of units in each layer.
- **bridge** – An `opennmt.layers.bridge.Bridge` to pass the encoder state to the decoder.
- **cell_class** – The inner cell class or a callable taking `num_units` as argument and returning a cell. Defaults to an LSTM cell.
- **dropout** – The probability to drop units in each layer output.
- **residual_connections** – If `True`, each layer input will be added to its output.
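The interplay of `dropout` and `residual_connections` when stacking layers can be illustrated with a toy sketch. This is plain Python over lists, not the actual TensorFlow implementation; the function name and the per-element dropout are illustrative only:

```python
import random

def stacked_rnn_step(inputs, layers, dropout=0.3, residual_connections=False,
                     training=True):
    """Toy sketch: apply a stack of layers, dropping units in each layer
    output and optionally adding the layer input back (residual)."""
    x = inputs
    for layer in layers:
        output = layer(x)
        if training and dropout > 0.0:
            # Inverted dropout on the layer output.
            keep = 1.0 - dropout
            output = [v / keep if random.random() < keep else 0.0 for v in output]
        if residual_connections and len(output) == len(x):
            # Residual connection: add the layer input to its output.
            output = [o + i for o, i in zip(output, x)]
        x = output
    return x
```

With two identity layers, no dropout, and residual connections, each layer doubles its input, so the stack returns four times the original values.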
output_size

Returns the decoder output size.

decode(inputs, sequence_length, vocab_size=None, initial_state=None, sampling_probability=None, embedding=None, output_layer=None, mode='train', memory=None, memory_sequence_length=None, return_alignment_history=False)[source]

Decodes a full input sequence.

Usually used for training and evaluation where target sequences are known.

Parameters:

- **inputs** – The input to decode of shape `[B, T, ...]`.
- **sequence_length** – The length of each input with shape `[B]`.
- **vocab_size** – The output vocabulary size. Must be set if `output_layer` is not set.
- **initial_state** – The initial state as a (possibly nested tuple of…) tensors.
- **sampling_probability** – The probability of sampling categorically from the output ids instead of reading directly from the inputs.
- **embedding** – The embedding tensor or a callable that takes word ids. Must be set when `sampling_probability` is set.
- **output_layer** – Optional layer to apply to the output prior to sampling. Must be set if `vocab_size` is not set.
- **mode** – A `tf.estimator.ModeKeys` mode.
- **memory** – (optional) Memory values to query.
- **memory_sequence_length** – (optional) Memory values length.
- **return_alignment_history** – If `True`, also returns the alignment history from the attention layer (`None` will be returned if unsupported by the decoder).

Returns: A tuple `(outputs, state, sequence_length)`, or `(outputs, state, sequence_length, alignment_history)` if `return_alignment_history` is `True`.
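The semantics of `sampling_probability` (scheduled sampling) can be sketched in plain Python: at each step the decoder either reads the gold target input (teacher forcing) or feeds back its own sampled prediction. This is a toy illustration of the behavior, not the `tf.contrib.seq2seq` helper used internally, and the function name is hypothetical:

```python
import random

def choose_next_input(gold_inputs, sampled_ids, step, sampling_probability):
    """Toy sketch of scheduled sampling: with probability
    `sampling_probability`, read the model's own prediction instead of
    the ground-truth input at this step."""
    if sampling_probability is not None and random.random() < sampling_probability:
        return sampled_ids[step]   # feed back the model's sampled output id
    return gold_inputs[step]       # teacher forcing: read the target sequence
```

With `sampling_probability=0.0` (or `None`) this always reads the target sequence, which is why `embedding` is only required when sampling is enabled: sampled ids must be embedded before being fed back.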
step_fn(mode, batch_size, initial_state=None, memory=None, memory_sequence_length=None, dtype=tf.float32)[source]

Callable to run decoding steps.

Parameters:

- **mode** – A `tf.estimator.ModeKeys` mode.
- **batch_size** – The batch size.
- **initial_state** – The initial state to start from as a (possibly nested tuple of…) tensors.
- **memory** – (optional) Memory values to query.
- **memory_sequence_length** – (optional) Memory values length.
- **dtype** – The data type.

Returns: A callable with the signature `(step, inputs, state, mode) -> (outputs, state)`, or `(outputs, state, attention)` if `self.support_alignment_history`.
class opennmt.decoders.rnn_decoder.AttentionalRNNDecoder(num_layers, num_units, bridge=None, attention_mechanism_class=None, output_is_attention=True, cell_class=None, dropout=0.3, residual_connections=False)[source]

An RNN decoder with attention.

It simply overrides the cell construction to add an attention wrapper.

__init__(num_layers, num_units, bridge=None, attention_mechanism_class=None, output_is_attention=True, cell_class=None, dropout=0.3, residual_connections=False)[source]

Initializes the decoder parameters.

Parameters:

- **num_layers** – The number of layers.
- **num_units** – The number of units in each layer.
- **bridge** – An `opennmt.layers.bridge.Bridge` to pass the encoder state to the decoder.
- **attention_mechanism_class** – A class inheriting from `tf.contrib.seq2seq.AttentionMechanism`, or a callable that takes `(num_units, memory, memory_sequence_length)` as arguments and returns a `tf.contrib.seq2seq.AttentionMechanism`. Defaults to `tf.contrib.seq2seq.LuongAttention`.
- **output_is_attention** – If `True`, the final decoder output (before logits) is the output of the attention layer. In all cases, the output of the attention layer is passed to the next step.
- **cell_class** – The inner cell class or a callable taking `num_units` as argument and returning a cell.
- **dropout** – The probability to drop units in each layer output.
- **residual_connections** – If `True`, each layer input will be added to its output.
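What the attention wrapper computes at each step can be sketched with the default Luong-style (multiplicative) mechanism: score the query against each memory position, normalize with a softmax, and take the weighted sum of the memory as the context. The sketch below is pure Python over lists for illustration (the real mechanism operates on batched tensors and includes trainable projections); the function name and masking style are assumptions:

```python
import math

def luong_attention(query, memory, memory_length):
    """Toy dot-product attention: `query` is a [units] vector, `memory` is
    [T][units] encoder outputs; positions >= memory_length are masked out.
    Returns the context vector and the alignment (attention weights)."""
    scores = []
    for t, mem_t in enumerate(memory):
        if t < memory_length:
            scores.append(sum(q * m for q, m in zip(query, mem_t)))
        else:
            scores.append(float("-inf"))  # mask padded memory positions
    # Numerically stable softmax over the scores.
    max_s = max(scores)
    exps = [math.exp(s - max_s) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    # Context: attention-weighted sum of the memory values.
    context = [sum(w * mem_t[i] for w, mem_t in zip(weights, memory))
               for i in range(len(query))]
    return context, weights
```

The `weights` vector is what `decode` returns as the alignment history when `return_alignment_history` is `True` and the decoder supports it.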
support_alignment_history

Returns True if this decoder can return the attention as alignment history.

class opennmt.decoders.rnn_decoder.MultiAttentionalRNNDecoder(num_layers, num_units, attention_layers=None, attention_mechanism_class=None, cell_class=None, dropout=0.3, residual_connections=False)[source]

An RNN decoder with multi-attention.

This decoder can attend to the encoder outputs after multiple RNN layers using one or multiple attention mechanisms. Additionally, the cell state of this decoder is not initialized from the encoder state (i.e. an `opennmt.layers.bridge.ZeroBridge` is imposed).

__init__(num_layers, num_units, attention_layers=None, attention_mechanism_class=None, cell_class=None, dropout=0.3, residual_connections=False)[source]

Initializes the decoder parameters.

Parameters:

- **num_layers** – The number of layers.
- **num_units** – The number of units in each layer.
- **attention_layers** – A list of integers, the layers after which to add attention. If `None`, attention will only be added after the last layer.
- **attention_mechanism_class** – A class or list of classes inheriting from `tf.contrib.seq2seq.AttentionMechanism`. Alternatively, the class can be replaced by a callable that takes `(num_units, memory, memory_sequence_length)` as arguments and returns a `tf.contrib.seq2seq.AttentionMechanism`. Defaults to `tf.contrib.seq2seq.LuongAttention`.
- **cell_class** – The inner cell class or a callable taking `num_units` as argument and returning a cell.
- **dropout** – The probability to drop units in each layer output.
- **residual_connections** – If `True`, each layer input will be added to its output.
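The defaulting behavior of `attention_layers` can be sketched as follows. This is a hypothetical helper illustrating the parameter description above (assuming 0-indexed layers), not code from the library:

```python
def resolve_attention_layers(num_layers, attention_layers=None):
    """Toy sketch: which layers are followed by an attention mechanism.
    If `attention_layers` is None, only the last layer attends to the
    encoder outputs (assumption: layers are 0-indexed)."""
    if attention_layers is None:
        return [num_layers - 1]
    return sorted(set(attention_layers))
```

For example, with four layers the default attends only after layer 3, while `attention_layers=[0, 2]` adds attention after the first and third layers.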
class opennmt.decoders.rnn_decoder.RNMTPlusDecoder(num_layers, num_units, num_heads, cell_class=None, dropout=0.3)[source]

The RNMT+ decoder described in https://arxiv.org/abs/1804.09849.

__init__(num_layers, num_units, num_heads, cell_class=None, dropout=0.3)[source]

Initializes the decoder parameters.

Parameters:

- **num_layers** – The number of layers.
- **num_units** – The number of units in each layer.
- **num_heads** – The number of attention heads.
- **cell_class** – The inner cell class or a callable taking `num_units` as argument and returning a cell. Defaults to a layer normalized LSTM cell.
- **dropout** – The probability to drop units from the decoder input and in each layer output.
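The `num_heads` parameter follows the usual multi-head attention convention: the `num_units` activations are split into `num_heads` independent slices that each attend separately. A minimal sketch of that split on a plain vector (the real implementation reshapes batched tensors, and this helper name is illustrative only):

```python
def split_heads(vector, num_heads):
    """Toy sketch: split a [num_units] vector into num_heads slices of
    depth num_units // num_heads, one per attention head."""
    units = len(vector)
    assert units % num_heads == 0, "num_units must be divisible by num_heads"
    depth = units // num_heads
    return [vector[h * depth:(h + 1) * depth] for h in range(num_heads)]
```

This is also why `num_units` is typically chosen as a multiple of `num_heads`.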
output_size

Returns the decoder output size.