Interface to 'TensorFlow SIG Addons'
Keras-based multi-head attention layer
Neural Architecture Search (NAS) recurrent network cell
Npairs multilabel loss
Pinball loss
Weighted cross-entropy loss for a sequence of logits
Sigmoid focal crossentropy loss
Sparsemax loss
Triplet hard loss
Triplet semihard loss
Gelu
Hardshrink
Lisht
Mish
Rrelu
Softshrink
Sparsemax
Tanhshrink
Bahdanau Attention
Bahdanau Monotonic Attention
Luong-style (multiplicative) attention scoring
Monotonic attention mechanism with Luong-style energy function
Monotonic attention
Attention Wrapper
Attention Wrapper State
Average Model Checkpoint
Time Stopping
TQDM Progress Bar
CRF binary score
CRF decode
CRF decode backward
CRF decode forward
CRF forward
CRF log likelihood
CRF log norm
CRF multitag sequence score
CRF sequence score
CRF unary score
Dynamic decode
An RNN Decoder abstract interface object
Base Decoder
Basic Decoder
Basic decoder output
BeamSearch sampling decoder
Beam Search Decoder Output
Beam Search Decoder State
Final Beam Search Decoder Output
Factory function returning an optimizer class with decoupled weight decay
Gather tree
Gather tree from array
Hardmax
Adjust hsv in yiq
Angles to projective transforms
Blend
Compose transforms
Connected components
Cutout
Dense image warp
Equalize
Euclidean dist transform
Flat transforms to matrices
From 4D image
Get ndims
Interpolate bilinear
Interpolate spline
Matrices to flat transforms
Mean filter2d
Median filter2d
Random cutout
Random hsv in yiq
Resampler
Rotate
Sharpness
Shear x-axis
Shear y-axis
Sparse image warp
To 4D image
Transform
Translate
Translate in x and y dimensions
Translations to projective transforms
Unwrap
Wrap
Install TensorFlow SIG Addons
Gaussian Error Linear Unit
Correlation cost layer
FilterResponseNormalization
Group normalization layer
Instance normalization layer
Maxout layer
LSTM cell with layer normalization and recurrent dropout
Project into the Poincaré ball with norm <= 1.0 - epsilon
Sparsemax activation function
Weight Normalization layer
Lookahead mechanism
Contrastive loss
GIoU loss
Hamming loss
Lifted structured loss
Npairs loss
Computes Kappa score between two raters
FBetaScore
Hamming distance
MatthewsCorrelationCoefficient
MultiLabelConfusionMatrix
RSquare (coefficient of determination)
F1Score
Conditional Gradient
Optimizer that implements the Adam algorithm with weight decay
Optimizer that implements the Momentum algorithm with weight decay
Layer-wise Adaptive Moments
Lazy Adam
Moving Average
NovoGrad
Rectified Adam (a.k.a. RAdam)
Stochastic Weight Averaging
Yogi
Parse time
Objects exported from other packages
Register all
Register custom kernels
Register keras objects
Safe cumprod
Bernoulli sample
Categorical sample
Sampler
Base abstract class that allows the user to customize sampling
Greedy Embedding Sampler
Inference Sampler
Sample Embedding Sampler
A training sampler that adds scheduled sampling
Scheduled Output Training Sampler
A Sampler for use during training
Skip gram sample
Skip gram sample with text vocab
Version of TensorFlow SIG Addons
Tile batch
Viterbi decode
'TensorFlow SIG Addons' <https://www.tensorflow.org/addons> is a repository of community contributions that conform to well-established API patterns, but implement new functionality not available in core 'TensorFlow'. 'TensorFlow' natively supports a large number of operators, layers, metrics, losses, optimizers, and more. However, in a fast-moving field like machine learning, there are many interesting new developments that cannot yet be integrated into core 'TensorFlow', either because their broad applicability is not yet clear or because they are mostly used by a smaller subset of the community.
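As a minimal sketch of how these addons slot into a standard Keras workflow in R, the snippet below installs the backend and compiles a toy embedding model with an addon optimizer and loss. The wrapper names (install_tfaddons(), optimizer_lazy_adam(), loss_triplet_semihard()) are assumptions inferred from the prefixed naming conventions suggested by the index above; check them against the reference before use.

    library(keras)
    library(tfaddons)

    # One-time setup: installs the underlying Python 'tensorflow-addons'
    # package into the active TensorFlow environment (assumed wrapper name).
    # install_tfaddons()

    # A small embedding network. The addon loss and optimizer are passed to
    # compile() exactly like their core Keras counterparts.
    model <- keras_model_sequential() %>%
      layer_flatten(input_shape = c(28, 28)) %>%
      layer_dense(units = 64, activation = "relu") %>%
      layer_dense(units = 32)   # raw embeddings scored by the triplet loss

    model %>% compile(
      optimizer = optimizer_lazy_adam(learning_rate = 1e-3),  # Lazy Adam (assumed wrapper)
      loss = loss_triplet_semihard()                          # Triplet semihard loss (assumed wrapper)
    )

Because each addon keeps the core Keras signature, swapping in another entry from the index (for example, Rectified Adam in place of Lazy Adam) should require no other change to the model definition.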