TFNativeOp

class TFNativeOp.OpDescription(in_info, out_info, c_fw_code, c_bw_code=None, c_extra_support_code=None, code_version=None, cpu_support=True, grad_input_map=None, name=None)[source]
Parameters:
  • in_info (list[dict(str)]) –

    each dict describes one input var. attribs in the dict:

    int ndim: the number of dimensions.
    tuple shape: the shape; entries can be None for dimensions which are not fixed.

    optional attribs:

    str dtype: "float32" by default.
    bool need_contiguous: False by default.
    int want_inplace: -1 by default. If >= 0, try to optimize by destroying this input and writing the result into the output with that index. "dummy_out" is a special value which will add another output.
    bool is_inplace: False by default. Whether the in-place optimization was applied.
    str gradient: can be "disconnected". See grad().
    bool bw_input: True by default. Add this param to the bw input.

    other attribs are just ignored.

  • out_info (list[dict(str)]) –

    like in_info. slightly different behavior for:

    shape: entries can also be references to in_info shapes, in the form (in-idx, dim). See infer_shape().
    need_contiguous/want_inplace: used for the bw op, in case bw_input == True.
  • c_fw_code (str) – C code for the forward pass
  • c_bw_code (str|None) – C code for the backward pass (for the gradient)
  • c_extra_support_code (str|dict[str]) – C support code (for c_support_code)
  • code_version (tuple[int]) – will be returned by c_code_cache_version.
  • cpu_support (bool) –
  • grad_input_map (tuple[int]|callable) – selection of grad inputs. by default, we get all inputs + all outputs + all grad outputs.
  • name (str) – name
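As an illustration of the in_info/out_info format described above, here is a hedged sketch for a hypothetical element-wise op with one input and one output of the same shape. The op itself and the "name" attrib are made up for illustration (extra attribs are ignored per the description above); the attribute keys follow the documented format.

```python
# Hypothetical in_info/out_info for a made-up element-wise op.
# Attribute keys follow the OpDescription format documented above.
in_info = [
    {"name": "x",                 # extra attribs like "name" are ignored
     "ndim": 2,
     "shape": (None, None),       # e.g. (time, batch), not statically fixed
     "dtype": "float32",          # the default
     "need_contiguous": True,     # C code indexes the raw buffer directly
     "want_inplace": 0}           # try to destroy x and reuse it as output 0
]
out_info = [
    {"name": "y",
     "ndim": 2,
     # shape entries as refs into in_info: (in-idx, dim),
     # i.e. output y has the same shape as input x
     "shape": ((0, 0), (0, 1))}
]
```

These lists would then be passed as the in_info/out_info arguments of OpDescription, together with the C forward (and optionally backward) code.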
classmethod from_gen_base(gen_base)[source]
Parameters:gen_base (NativeOp.NativeOpGenBase) –
Return type:OpDescription
is_grad_defined[source]
grad()[source]
Return type:OpDescription|None
class TFNativeOp.OpMaker(description, compiler_opts=None)[source]

https://www.tensorflow.org/versions/master/how_tos/adding_an_op/

Parameters:
  • description (OpDescription) –
  • compiler_opts (dict[str]|None) – passed on to OpCodeCompiler as kwargs
with_cuda = None[source]
mod_cache = {}[source]
op_cache = {}[source]
op_name[source]
cache_key[source]
support_native_op_cpp_filename[source]
make_op()[source]
TFNativeOp.make_lstm_op(**kwargs)[source]

See NativeLstmCell for usage.

Returns:op
Return type:(tf.Tensor) -> tuple[tf.Tensor]
class TFNativeOp.RecSeqCellOp(n_hidden)[source]
class TFNativeOp.NativeLstmCell(n_hidden)[source]
classmethod map_layer_inputs_to_op(Z, V_h, i, initial_state=None)[source]

Just like NativeOp.LstmGenericBase.map_layer_inputs_to_op().

Parameters:
  • Z (tf.Tensor) – inputs: shape (time,batch,n_hidden*4)
  • V_h (tf.Tensor) – W_re: shape (n_hidden,n_hidden*4)
  • i (tf.Tensor) – index: shape (time,batch)
  • initial_state (tf.Tensor|None) – shape (batch,n_hidden)
Return type:(tf.Tensor,tf.Tensor,tf.Tensor,tf.Tensor)
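A shape-only sketch of the expected input layouts, using plain NumPy arrays instead of tf.Tensors (the real method takes tf.Tensors and requires the native op build). The factor 4 in the last axis of Z and V_h corresponds to the four LSTM gate blocks; the concrete sizes below are arbitrary.

```python
import numpy as np

# Dummy arrays with the layouts map_layer_inputs_to_op() documents.
time, batch, n_hidden = 5, 2, 3
Z = np.zeros((time, batch, n_hidden * 4), dtype="float32")     # inputs
V_h = np.zeros((n_hidden, n_hidden * 4), dtype="float32")      # W_re
i = np.ones((time, batch), dtype="float32")                    # index mask
initial_state = np.zeros((batch, n_hidden), dtype="float32")   # optional

assert Z.shape == (5, 2, 12)
assert V_h.shape == (3, 12)
```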

TFNativeOp.make_fast_baum_welch_op(**kwargs)[source]
Returns:op
Return type:(tf.Tensor) -> tuple[tf.Tensor]
TFNativeOp.fast_baum_welch(am_scores, edges, weights, start_end_states, float_idx, state_buffer=None)[source]
Parameters:
  • am_scores (tf.Tensor) – (time, batch, dim), in -log space
  • edges (tf.Tensor) – (4,num_edges), edges of the graph (from,to,emission_idx,sequence_idx)
  • weights (tf.Tensor) – (num_edges,), weights of the edges
  • start_end_states (tf.Tensor) – (2, batch), (start,end) state idx in the automaton. there is only a single automaton.
  • float_idx (tf.Tensor) – (time, batch) -> 0 or 1 (index mask, via seq lens)
  • state_buffer (tf.Tensor) – (2, num_states)
Returns:(fwdbwd, obs_scores), fwdbwd is (time, batch, dim), obs_scores is (time, batch), in -log space
Return type:(tf.Tensor, tf.Tensor)
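To illustrate the documented input layout, here is a hedged NumPy sketch building the edge and state arrays for a trivial linear automaton with 3 states and 2 edges for a single sequence (batch size 1). The real op takes tf.Tensors, and whether such a minimal automaton is actually useful is an assumption; the point is only the array shapes and row meanings.

```python
import numpy as np

# Linear automaton: state 0 --(emission 0)--> state 1 --(emission 1)--> state 2
num_edges = 2

# (4, num_edges): rows are (from, to, emission_idx, sequence_idx)
edges = np.array([
    [0, 1],   # from
    [1, 2],   # to
    [0, 1],   # emission_idx
    [0, 0],   # sequence_idx (all edges belong to batch entry 0)
], dtype="int32")

# (num_edges,): edge weights, in -log space
weights = np.zeros((num_edges,), dtype="float32")

# (2, batch): (start, end) state idx
start_end_states = np.array([[0], [2]], dtype="int32")

assert edges.shape == (4, num_edges)
assert start_end_states.shape == (2, 1)
```

Together with am_scores of shape (time, batch, dim) and float_idx of shape (time, batch), these would form the inputs to fast_baum_welch().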

TFNativeOp.fast_baum_welch_by_sprint_automata(am_scores, float_idx, tags, sprint_opts)[source]
Parameters:
  • am_scores (tf.Tensor) – (time, batch, dim), in -log space
  • float_idx (tf.Tensor) – (time, batch) -> 0 or 1 (index mask, via seq lens)
  • tags (tf.Tensor) – (batch,) -> seq name (str)
  • sprint_opts (dict[str]) –
Returns:(fwdbwd, obs_scores), fwdbwd is (time, batch, dim), obs_scores is (time, batch), in -log space
Return type:(tf.Tensor, tf.Tensor)

TFNativeOp.demo()[source]