Tensors and Neural Networks with 'GPU' Acceleration
Applies a 3D adaptive max pooling over an input signal composed of sev...
Xavier uniform initialization
Zeros initialization
LogSoftmax module
Holds submodules in a list.
ReLU module
ReLU6 module
RNN module
RReLU module
SELU module
A sequential container
Sigmoid module
Applies the Sigmoid Linear Unit (SiLU) function, element-wise. The SiL...
Threshold module
Packs a list of variable length Tensors
Affine_grid
Alpha_dropout
Avg_pool1d
Gaussian distribution
Creates a normal (also called Gaussian) distribution parameterized by ...
Creates a Poisson distribution parameterized by rate, the rate param...
Generic R6 class representing distributions
Enumerate an iterator
Enumerate an iterator
Enable idiomatic access to JIT operators from R.
Saves a script_function or script_module in bytecode form, to be l...
Saves a script_function to a path
Adds the 'jit_scalar' class to the input
Serialize a Script Module
Trace a module
Trace a function and return an executable script_function.
Adds the 'jit_tuple' class to the input
Unserialize a Script Module
Computes the Cholesky decomposition of a complex Hermitian or real sym...
Computes the Cholesky decomposition of a complex Hermitian or real sym...
Applies a 3D adaptive average pooling over an input signal composed of...
AdaptiveLogSoftmaxWithLoss module
Applies a 1D adaptive max pooling over an input signal composed of sev...
Applies a 2D adaptive max pooling over an input signal composed of sev...
Avg_pool2d
Avg_pool3d
Batch_norm
Bilinear
Binary_cross_entropy_with_logits
Binary_cross_entropy
Celu
Sparsemax
Conv_tbc
Conv_transpose1d
Conv_transpose2d
Conv_transpose3d
Conv1d
Conv2d
Conv3d
Cosine_embedding_loss
Cosine_similarity
Cross_entropy
Ctc_loss
Dropout
Dropout2d
Dropout3d
Elu
Lp_pool2d
Margin_ranking_loss
Max_pool1d
Max_pool2d
Max_pool3d
Max_unpool1d
Max_unpool2d
Max_unpool3d
Mse_loss
One_hot
Smooth_l1_loss
Soft_margin_loss
Softmax
Softmin
Softplus
Softshrink
Softsign
Tanhshrink
Threshold
Creates a new Sampler
Creates a slice
Dataset wrapping tensors.
Number of threads
Abs
Absolute
Acos
Acosh
Adaptive_avg_pool1d
Add
Addbmm
Cholesky_solve
Amax
Amin
Angle
Arange
Arccos
Arccosh
Arcsin
Arcsinh
Arctan
Arctanh
Argmax
Argmin
Argsort
As_strided
Asin
Asinh
Atan
Atan2
Atanh
Atleast_1d
Atleast_2d
Atleast_3d
Avg_pool1d
Baddbmm
Bartlett_window
Bernoulli
Bincount
Bitwise_and
Bitwise_not
Bitwise_or
Bitwise_xor
Blackman_window
Block_diag
Bmm
Broadcast_tensors
Cholesky_inverse
Logical_or
Logical_xor
Logit
Sets the seed for generating random numbers.
Masked_select
Matmul
Matrix_exp
Matrix_power
Matrix_rank
Pinverse
Pixel_shuffle
Poisson
Polar
Polygamma
Pow
Prod
Promote_types
Qr
Creates the corresponding Scheme object
Roll
Rot90
Round
Rrelu_
Rsqrt
Saves an object to a disk file.
Scalar tensor
Searchsorted
Selu_
Selu
Serialize a torch object returning a raw object
Sgn
Sigmoid
Sign
Signbit
Sin
Sinh
Slogdet
Sort
Sparse_coo_tensor
Split
Sqrt
Square
Squeeze
Stack
Topk
Trace
Transpose
Trapz
Unique_consecutive
Unsafe_chunk
Unsafe_split
Unsqueeze
Vander
Conv3d
Cos
Cosh
Cosine_similarity
Count_nonzero
Cross
Cummax
Cummin
Cumprod
Cumsum
Deg2rad
Dequantize
Det
Create a Device object
Div
Divide
Dot
Dstack
Torch data types
Eig
RNG state management
Greater_equal
Greater
Gt
Hamming_window
Hann_window
Heaviside
Histc
Hstack
Hypot
I0
Integer type info
Imag
In-place version of torch_index_put.
Modify values selected by indices.
Index_select
Index torch tensors
A simple exported version of install_path. Returns the torch installati...
Inverse
Is_complex
Is_floating_point
Verifies if torch is installed
Is_nonzero
Isclose
Isfinite
Isinf
Isnan
Isneginf
Isposinf
Isreal
Var_mean
Var
Vdot
View_as_complex
View_as_real
Vstack
Where
Exp2
Expm1
Eye
Fft
fftfreq
Ifft
Waits for all kernels in all streams on a CUDA device to complete.
Creates an iterator from a DataLoader
Get the next element of a dataloader iterator
Data loader. Combines a dataset and a sampler, and provides single- or...
Dataset Subset
Helper function to create a function that generates R6 instances of c...
Gets and sets the default floating point dtype.
Creates a Bernoulli distribution parameterized by probs or logits (...
Creates a categorical distribution parameterized by either probs or ...
Creates a Chi2 distribution parameterized by shape parameter df. Thi...
Creates a Gamma distribution parameterized by shape concentration an...
Mixture of components in the same family
Frac
Less
Lgamma
Uniform initialization
Outer
Softsign module
Tanh module
Tanhshrink module
Adaptive_max_pool3d
Glu
Grid_sample
Group_norm
Gumbel_softmax
Hardshrink
Hardsigmoid
Hardswish
Hardtanh
Hinge_embedding_loss
Instance_norm
Sigmoid
Applies the Sigmoid Linear Unit (SiLU) function, element-wise. See `nn...
Logical_not
Negative
Nextafter
Compile TorchScript code into a graph
Loads a script_function or script_module previously saved with `ji...
Xavier normal initialization
LogSigmoid module
Container that allows named values
Orthogonal initialization
Load a state dict file
Given a list of values (possibly containing numbers), returns a list w...
Creates learning rate schedulers
Converts to array
Computes the sum of gradients of given tensors w.r.t. graph leaves.
Records operation history and defines formulas for differentiating ops...
Computes and returns the sum of gradients of outputs w.r.t. the inputs...
Set grad mode
Class representing the context.
CuDNN is available
CuDNN version
MKL is available
MKLDNN is available
MPS is available
OpenMP is available
Returns a dictionary of CUDA memory allocator statistics for a given d...
Call a (Potentially Unexported) Torch Function
Clone a torch module.
Abstract base class for constraints.
Contrib sort vertices
Creates a gradient scaler
Returns the index of a currently selected device.
Returns the number of GPUs available.
Empty cache
Returns the major and minor CUDA capability of device
Returns a bool indicating if CUDA is currently available.
Returns the CUDA runtime version
Install Torch from files
Install Torch
Checks if the object is a dataloader
Checks if the object is a nn_buffer
Checks if the object is an nn_module
Checks if an object is a nn_parameter
Checks if the object is a torch optimizer
Checks if object is a device
Check if object is a torch data type
Check if an object is a torch layout.
Check if an object is a memory format
Checks if an object is a QScheme
Checks if a tensor is undefined
Creates an iterable dataset
Computes the condition number of a matrix with respect to a matrix nor...
Computes the determinant of a square matrix.
Computes the eigenvalue decomposition of a square matrix if it exists.
Computes the eigenvalue decomposition of a complex Hermitian or real s...
Computes the eigenvalues of a square matrix.
Sparse initialization
Computes the eigenvalues of a complex Hermitian or real symmetric matr...
Computes the first n columns of a product of Householder matrices.
Computes the inverse of a square matrix if it is invertible.
Computes the inverse of a square matrix if it exists.
Computes a solution to the least squares problem of a system of linear...
Step learning rate decay
Computes a matrix norm.
Computes the n-th power of a square matrix for an integer n.
Computes the numerical rank of a matrix.
Efficiently multiplies two or more matrices
Applies a 1D adaptive average pooling over an input signal composed of...
Computes a vector or matrix norm.
Computes the pseudoinverse (Moore-Penrose inverse) of a matrix.
Computes the QR decomposition of a matrix.
Computes the sign and natural logarithm of the absolute value of the d...
Triangular solve
Applies a 2D adaptive average pooling over an input signal composed of...
Computes the solution of a square system of linear equations with a un...
Computes the singular value decomposition (SVD) of a matrix.
Computes the singular values of a matrix.
Computes the multiplicative inverse of torch_tensordot()
Computes the solution X to the system torch_tensordot(A, X) = B.
Computes a vector norm.
Autocast context manager
Device contexts
Set the learning rate of each parameter group using a cosine annealing...
Sets the learning rate of each parameter group to the initial lr times...
Multiply the learning rate of each parameter group by the factor given...
Once cycle learning rate
Reduce learning rate on plateau
Applies a 1D average pooling over an input signal composed of several ...
Applies a 2D average pooling over an input signal composed of several ...
Applies a 3D average pooling over an input signal composed of several ...
BatchNorm1D module
BatchNorm2D
BatchNorm3D
Cosine embedding loss
Binary cross entropy loss
BCE with logits loss
Bilinear module
Creates a nn_buffer
CELU module
Sparsemax activation
ConvTranspose1D
ConvTranspose2D module
ConvTranspose3D module
Conv1D module
Conv2D module
Conv3D module
CrossEntropyLoss module
The Connectionist Temporal Classification loss.
Dropout module
Dropout2D module
Dropout3D module
ELU module
Embedding bag module
Embedding module
Flattens a contiguous range of dims into a tensor.
Applies a 2D fractional max pooling over an input signal composed of s...
Hardtanh module
Truncated normal initialization
Applies a 3D fractional max pooling over an input signal composed of s...
GELU module
GLU module
Group normalization
Applies a multi-layer gated recurrent unit (GRU) RNN to an input seque...
Hardshrink module
Hardsigmoid module
Hardswish module
Ones initialization
Hinge embedding loss
Identity module
Calculate gain
Constant initialization
Dirac initialization
Eye initialization
Kaiming normal initialization
Kaiming uniform initialization
Normal initialization
Kullback-Leibler divergence loss
L1 loss
Layer normalization
LeakyReLU module
Linear module
Applies a 1D power-average pooling over an input signal composed of se...
Applies a 2D power-average pooling over an input signal composed of se...
Applies a multi-layer long short-term memory (LSTM) RNN to an input se...
Margin ranking loss
MaxPool1D module
MaxPool2D module
Applies a 3D max pooling over an input signal composed of several inpu...
Computes a partial inverse of MaxPool1d.
Computes a partial inverse of MaxPool2d.
Computes a partial inverse of MaxPool3d.
Base class for all neural network modules.
MSE loss
Multi margin loss
MultiHead attention
Prune top layer(s) of a network
Multilabel margin loss
Multi label soft margin loss
Nll loss
Pairwise distance
Creates an nn_parameter
Poisson NLL loss
PReLU module
Smooth L1 loss
Soft margin loss
Softmax module
Softmax2d module
Softmin
Softplus module
Softshrink module
Triplet margin loss
Triplet margin with distance loss
Unflattens a tensor dim expanding it to a desired shape. For use with ...
Upsample module
Clips gradient norm of an iterable of parameters.
Clips gradient of an iterable of parameters at specified value.
Packs a Tensor containing padded sequences of variable length.
Pads a packed batch of variable length sequences.
Pad a list of variable length Tensors with padding_value
nn_utils_weight_norm
Adaptive_avg_pool1d
Adaptive_avg_pool2d
Adaptive_avg_pool3d
Adaptive_max_pool1d
Adaptive_max_pool2d
Embedding_bag
Embedding
Fold
Fractional_max_pool2d
Fractional_max_pool3d
Gelu
Interpolate
Kl_div
L1_loss
Layer_norm
Leaky_relu
Linear
Local_response_norm
Log_softmax
Logsigmoid
Lp_pool1d
Multi head attention forward
Multi_margin_loss
Multilabel_margin_loss
Multilabel_soft_margin_loss
Nll_loss
Normalize
Pad
Pairwise_distance
Pdist
Pixel_shuffle
Poisson_nll_loss
Prelu
Relu
Relu6
Rrelu
Selu
Triplet_margin_loss
Triplet margin with distance loss
Unfold
Adadelta optimizer
Adagrad optimizer
Implements Adam algorithm.
Implements AdamW algorithm
Averaged Stochastic Gradient Descent optimizer
LibTorch implementation of Adagrad
LibTorch implementation of Adam
LibTorch implementation of AdamW
LibTorch implementation of RMSprop
LibTorch implementation of SGD
LBFGS optimizer
Dummy value indicating a required value.
Abstract Base Class for LibTorch Optimizers
Pipe operator
RMSprop optimizer
Implements the resilient backpropagation algorithm.
SGD optimizer
Abstract Base Class for LibTorch Optimizers
Creates a custom optimizer
Re-exporting the as_iterator function.
Addcdiv
Addcmul
Addmm
Addmv
Addr
Allclose
Bucketize
Can_cast
Cartesian_prod
Cat
Cdist
Ceil
Celu_
Celu
Chain_matmul
Channel_shuffle
Erfc
Cholesky
Chunk
Clamp
Clip
Clone
Combinations
Complex
Conj
Conv_tbc
Erfinv
Conv_transpose1d
Conv_transpose2d
Conv_transpose3d
Conv1d
Conv2d
Diag_embed
Diag
Diagflat
Diagonal
Computes the n-th forward difference along the given dimension.
Digamma
Dist
Exp
Einsum
Empty_like
Empty_strided
Empty
Eq
Equal
Erf
Irfft
Rfft
Floating point type info
Fix
Flatten
Flip
Fliplr
Flipud
Floor_divide
Floor
Fmod
Full_like
Full
Gather
Gcd
Ge
Create a Generator object
Geqrf
Ger
Istft
Kaiser_window
Kronecker product
Kthvalue
Creates the corresponding layout
Lcm
Le
Lerp
Less_equal
Linspace
Loads a saved object
Log
Log10
Log1p
Log2
Logaddexp
Logaddexp2
Logcumsumexp
Logdet
Logical_and
Logspace
Logsumexp
Lstsq
Lt
Lu_solve
Lu_unpack
LU
Max
Maximum
Mean
Median
Memory format
Meshgrid
Pdist
Min
Minimum
Mm
Mode
Movedim
Mul
Nonzero
Multinomial
Multiply
Mv
Mvlgamma
Nanquantile
Nansum
Narrow
Ne
Neg
Norm
Normal
Not_equal
Ones_like
Ones
Orgqr
Ormqr
Quantile
Quantize_per_channel
Quantize_per_tensor
Rad2deg
Rand_like
Rand
Result_type
Randint_like
Randint
Randn_like
Randn
Randperm
Unbind
Range
Real
Reciprocal
Creates the reduction object
Relu_
Relu
Remainder
Renorm
Repeat_interleave
Reshape
Std_mean
Std
Stft
Sub
Subtract
Sum
Svd
T
Selects values from input at the 1-dimensional indices from indices al...
Take
Tan
Tanh
Creates a tensor from a buffer of memory
Converts R objects to a torch tensor
Tensordot
Threshold_
Triangular_solve
Tril_indices
Tril
Triu_indices
Triu
TRUE_divide
Trunc
Zeros_like
Zeros
Context-manager that enables anomaly detection for the autograd engine.
Enable grad
Temporarily modify gradient recording.
Provides functionality to define and train neural networks similar to 'PyTorch' by Paszke et al (2019) <doi:10.48550/arXiv.1912.01703> but written entirely in R using the 'libtorch' library. Also supports low-level tensor operations and 'GPU' acceleration.
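As a minimal sketch of the tensor and autograd functionality listed in this index (assuming the torch package is installed):

```r
library(torch)

# Create a tensor that records operations for autograd
x <- torch_tensor(c(1, 2, 3), requires_grad = TRUE)

# y = sum(x^2); its gradient with respect to x is 2 * x
y <- (x^2)$sum()
y$backward()

as.numeric(x$grad)  # 2 4 6
```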
Useful links
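The nn_* modules, nnf_* functionals, and optim_* optimizers above combine into a training loop as in this hedged sketch (a linear regression fit on made-up illustrative data, assuming torch is installed):

```r
library(torch)

# Toy data: y = 2x + 1 plus noise (illustrative only)
x <- torch_randn(100, 1)
y <- 2 * x + 1 + 0.1 * torch_randn(100, 1)

model <- nn_linear(1, 1)                        # Linear module
opt   <- optim_sgd(model$parameters, lr = 0.1)  # SGD optimizer

for (epoch in 1:200) {
  opt$zero_grad()
  loss <- nnf_mse_loss(model(x), y)  # MSE loss functional
  loss$backward()                    # backpropagate
  opt$step()                         # update parameters
}

model$weight  # should approach 2
model$bias    # should approach 1
```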