'GGML' Tensor Operations for Machine Learning
Dequantize Row (IQ)
Dequantize Row (MXFP4)
Dequantize Row (K-quants)
Dequantize Row (Q4_0)
Dequantize Row (Ternary)
Check if R Abort Handler is Enabled
Absolute Value In-place (Graph)
Absolute Value (Graph)
Element-wise Addition In-place (Graph)
Add tensors
Add Scalar to Tensor (Graph)
Check if Two Tensors Have the Same Layout
Compare Tensor Shapes
Compare Tensor Strides
Argmax (Graph)
Argsort - Get Sorting Indices (Graph)
Allocate Context Tensors to Backend
Clear buffer memory
Free Backend Buffer
Get Backend Buffer Size
Get buffer usage
Check if buffer is host memory
Check if buffer is a multi-buffer
Get Backend Buffer Name
Reset buffer
Set buffer usage hint
Buffer usage: Any
Buffer usage: Compute
Buffer usage: Weights
Initialize CPU Backend
Set CPU Backend Threads
Get device by name
Get device by type
Get number of available devices
Get device description
Get device properties
Get device by index
Initialize backend from device
Get device memory
Get device name
Check if device should offload operation
Check if device supports buffer type
Check if device supports operation
Get device type
Register a device
Device type: Accelerator
Device type: CPU
Device type: GPU
Device type: Integrated GPU
Free event
Create new event
Record event
Synchronize event
Wait for event
Free Backend
Get device from backend
Compute graph asynchronously
Compute Graph with Backend
Execute graph plan
Create graph execution plan
Free graph execution plan
Initialize best available backend
Initialize backend by name
Initialize backend by type
Load all available backends
Load backend from dynamic library
Allocate multi-buffer
Set usage for all buffers in a multi-buffer
Get Backend Name
Get backend registry by name
Get number of registered backends
Get number of devices in registry
Get device from registry
Get backend registry by index
Get registry name
Register a backend
Allocate graph on scheduler
Free backend scheduler
Get backend from scheduler
Get number of backends in scheduler
Get number of tensor copies
Get number of graph splits
Get tensor backend assignment
Compute graph asynchronously
Compute graph using scheduler
Create a new backend scheduler
Reserve memory for scheduler
Reset scheduler
Set tensor backend assignment
Synchronize scheduler
Synchronize backend
Copy tensor asynchronously between backends
Get tensor data asynchronously
Get Tensor Data via Backend
Set tensor data asynchronously
Set Tensor Data via Backend
Unload backend
Get Block Size
Build forward expand
Check If Tensor Can Be Repeated
Ceiling In-place (Graph)
Ceiling (Graph)
Clamp (Graph)
Concatenate Tensors (Graph)
Make Contiguous (Graph)
1D Convolution (Graph)
2D Convolution (Graph)
Transposed 1D Convolution (Graph)
Cosine (Graph)
Count Equal Elements (Graph)
Element-wise Addition (CPU Direct)
Get All CPU Features
Get RISC-V Vector Length
Get SVE Vector Length (ARM)
CPU Feature Detection - AMX INT8
CPU Feature Detection - ARM FMA
CPU Feature Detection - AVX-VNNI
CPU Feature Detection - AVX
CPU Feature Detection - AVX2
CPU Feature Detection - AVX-512 BF16
CPU Feature Detection - AVX-512 VBMI
CPU Feature Detection - AVX-512 VNNI
CPU Feature Detection - AVX-512
CPU Feature Detection - BMI2
CPU Feature Detection - Dot Product (ARM)
CPU Feature Detection - F16C
CPU Feature Detection - FMA
CPU Feature Detection - FP16 Vector Arithmetic (ARM)
CPU Feature Detection - Llamafile
CPU Feature Detection - INT8 Matrix Multiply (ARM)
CPU Feature Detection - NEON (ARM)
CPU Feature Detection - RISC-V Vector
CPU Feature Detection - SME (ARM)
CPU Feature Detection - SSE3
CPU Feature Detection - SSSE3
CPU Feature Detection - SVE (ARM)
CPU Feature Detection - VSX (PowerPC)
CPU Feature Detection - VXE (IBM z/Architecture)
CPU Feature Detection - WebAssembly SIMD
Element-wise Multiplication (CPU Direct)
Copy Tensor with Type Conversion (Graph)
Get CPU Cycles per Millisecond
Get CPU Cycles
Diagonal Mask with -Inf In-place (Graph)
Diagonal Mask with -Inf (Graph)
Diagonal Mask with Zero (Graph)
Diagonal Matrix (Graph)
Element-wise Division In-place (Graph)
Element-wise Division (Graph)
Duplicate Tensor In-place (Graph)
Duplicate Tensor
Duplicate Tensor (Graph)
Get Element Size
ELU Activation In-place (Graph)
ELU Activation (Graph)
Estimate Required Memory
Exponential In-place (Graph)
Exponential (Graph)
Flash Attention Backward (Graph)
Flash Attention (Graph)
Floor In-place (Graph)
Floor (Graph)
Free GGML context
Convert ftype to ggml_type
Allocate Memory for Graph
Free Graph Allocator
Get Graph Allocator Buffer Size
Create Graph Allocator
Reserve Memory for Graph
GeGLU Quick (Fast GeGLU) (Graph)
GeGLU Split (Graph)
GeGLU (GELU Gated Linear Unit) (Graph)
Exact GELU Activation (Graph)
GELU Activation In-place (Graph)
GELU Quick Activation (Graph)
GELU Activation (Graph)
Get F32 data
Get I32 Data
Get Maximum Tensor Size
Get Context Memory Size
Get Number of Threads
Get Tensor Name
Get No Allocation Mode
Get Float Op Parameter
Get Integer Op Parameter
Get Tensor Operation Parameters
Get Rows Backward (Graph)
Get Rows by Indices (Graph)
Get Unary Operation from Tensor
GLU Operation Types
Generic GLU Split (Graph)
Generic GLU (Gated Linear Unit) (Graph)
Compute Graph with Context (Alternative Method)
Compute graph
Export Graph to DOT Format
Get Tensor from Graph by Name
Get Number of Nodes in Graph
Get Graph Node
Get Graph Overhead
Print Graph Information
Reset Graph (for backpropagation)
Create a View of a Subgraph
Group Normalization In-place (Graph)
Group Normalization (Graph)
Hard Sigmoid Activation (Graph)
Hard Swish Activation (Graph)
Image to Column (Graph)
Create Context with Auto-sizing
Initialize GGML context
Check if GGML is available
Check Tensor Contiguity (Dimension 0)
Check Tensor Contiguity (Dimensions >= 1)
Check Tensor Contiguity (Dimensions >= 2)
Check Channel-wise Contiguity
Check Row-wise Contiguity
Check if Tensor is Contiguous
Check If Tensor is Contiguously Allocated
Check if Tensor is Permuted
Check If Type is Quantized
Check if Tensor is Transposed
L2 Normalization In-place (Graph)
L2 Normalization (Graph)
Leaky ReLU Activation (Graph)
Natural Logarithm In-place (Graph)
Check if R Logging is Enabled
Restore Default GGML Logging
Enable R-compatible GGML Logging
Natural Logarithm (Graph)
Mean (Graph)
Element-wise Multiplication In-place (Graph)
Matrix Multiplication with Expert Selection (Graph)
Matrix Multiplication (Graph)
Multiply tensors
Get Number of Dimensions
Get number of bytes
Negation In-place (Graph)
Negation (Graph)
Get number of elements
Create Scalar F32 Tensor
Create Scalar I32 Tensor
Create 1D tensor
Create 2D tensor
Create 3D Tensor
Create 4D Tensor
Create Tensor with Arbitrary Dimensions
Layer Normalization In-place (Graph)
Layer Normalization (Graph)
Get Number of Rows
Check if Operation Can Be Done In-place
Get Operation Description from Tensor
Get Operation Name
Get Operation Symbol
Allocate graph for evaluation
Get optimizer type from context
Get data tensor from dataset
Free optimization dataset
Get batch from dataset
Create a new optimization dataset
Get labels tensor from dataset
Get number of datapoints in dataset
Shuffle dataset
Get default optimizer parameters
Run one training epoch
Evaluate model
Fit model to dataset
Free optimizer context
Get gradient accumulator for a tensor
Initialize optimizer context
Get inputs tensor from optimizer context
Get labels tensor from optimizer context
Loss type: Cross Entropy
Loss type: Mean
Loss type: Mean Squared Error
Loss type: Sum
Get loss tensor from optimizer context
Get number of correct predictions tensor
Get optimizer name
Optimizer type: AdamW
Optimizer type: SGD
Get outputs tensor from optimizer context
Get predictions tensor from optimizer context
Prepare allocation for non-static graphs
Reset optimizer context
Get accuracy from result
Free optimization result
Initialize optimization result
Get loss from result
Get number of datapoints from result
Get predictions from result
Reset optimization result
Check if using static graphs
Outer Product (Graph)
Pad Tensor with Zeros (Graph)
Permute Tensor Dimensions (Graph)
1D Pooling (Graph)
2D Pooling (Graph)
Print Context Memory Status
Print Objects in Context
Get Quantization Block Info
Quantize Data Chunk
Free Quantization Resources
Initialize Quantization Tables
Check if Quantization Requires Importance Matrix
ReGLU Split (Graph)
ReGLU (ReLU Gated Linear Unit) (Graph)
ReLU Activation In-place (Graph)
ReLU Activation (Graph)
Repeat Backward (Graph)
Repeat (Graph)
Reset GGML Context
Reshape to 1D (Graph)
Reshape to 2D (Graph)
Reshape to 3D (Graph)
Reshape to 4D (Graph)
RMS Norm Backward (Graph)
RMS Normalization In-place (Graph)
RMS Normalization (Graph)
RoPE Extended Backward (Graph)
Extended RoPE In-place (Graph)
Extended RoPE with Frequency Scaling (Graph)
Rotary Position Embedding In-place (Graph)
Multi-RoPE In-place (Graph)
Multi-RoPE for Vision Models (Graph)
Rotary Position Embedding (Graph)
Round In-place (Graph)
Round (Graph)
Scale Tensor In-place (Graph)
Scale (Graph)
Set 1D Tensor Region (Graph)
Set 2D Tensor Region (Graph)
Restore Default Abort Behavior
Enable R-compatible Abort Handling
Set F32 data
Set I32 Data
Set Number of Threads
Set Tensor Name
Set No Allocation Mode
Set Float Op Parameter
Set Integer Op Parameter
Set Tensor Operation Parameters
Set Tensor to Zero
Set Tensor Region (Graph)
Sign Function (Graph)
Sigmoid Activation In-place (Graph)
Sigmoid Activation (Graph)
SiLU Backward (Graph)
SiLU Activation In-place (Graph)
SiLU Activation (Graph)
Sine (Graph)
Extended Softmax Backward In-place (Graph)
Softmax Backward Extended (Graph)
Extended Softmax In-place (Graph)
Extended Softmax with Masking and Scaling (Graph)
Softmax In-place (Graph)
Softmax (Graph)
Softplus Activation In-place (Graph)
Softplus Activation (Graph)
Sort Order Constants
Square In-place (Graph)
Square (Graph)
Square Root In-place (Graph)
Square Root (Graph)
Step Function (Graph)
Element-wise Subtraction In-place (Graph)
Element-wise Subtraction (Graph)
Sum Rows (Graph)
Sum (Graph)
SwiGLU Split (Graph)
SwiGLU (Swish/SiLU Gated Linear Unit) (Graph)
Tanh Activation In-place (Graph)
Tanh Activation (Graph)
Get Tensor Overhead
Get Tensor Shape
Get Tensor Type
Test GGML
Initialize GGML Timer
Get Time in Milliseconds
Get Time in Microseconds
Top-K Indices (Graph)
Transpose (Graph)
GGML Data Types
Get Type Name
Get Type Size in Bytes
Get Type Size as Float
Get Unary Operation Name
Upscale Tensor (Graph)
Get Used Memory
Get GGML version
1D View with Byte Offset (Graph)
2D View with Byte Offset (Graph)
3D View with Byte Offset (Graph)
4D View with Byte Offset (Graph)
View Tensor
Check if Vulkan support is available
Get Vulkan backend name
Get number of Vulkan devices
Get Vulkan device description
Get Vulkan device memory
Free Vulkan backend
Initialize Vulkan backend
Check if backend is Vulkan
List all Vulkan devices
Print Vulkan status
Execute with Temporary Context
ggmlR: 'GGML' Tensor Operations for Machine Learning
Free IQ2 Quantization Tables
Initialize IQ2 Quantization Tables
Free IQ3 Quantization Tables
Initialize IQ3 Quantization Tables
Quantize Data (IQ)
Quantize Data (MXFP4)
Quantize Data (K-quants)
Quantize Data (Q4_0)
Quantize Row Reference (IQ)
Quantize Row Reference (MXFP4)
Quantize Row Reference (K-quants)
Quantize Row Reference (Basic)
Quantize Row Reference (Ternary)
Quantize Data (Ternary)
RoPE Mode Constants
Provides 'R' bindings to the 'GGML' tensor library for efficient machine learning computation. Implements core tensor operations including element-wise arithmetic, reshaping, and matrix multiplication. Supports neural network layers (attention, convolutions, normalization), activation functions, and quantization. Features an optimization/training API with 'AdamW' (Adam with decoupled weight decay) and 'SGD' (Stochastic Gradient Descent) optimizers, and 'MSE' (Mean Squared Error) and cross-entropy losses. Offers multi-backend support with CPU execution and optional 'Vulkan' GPU (Graphics Processing Unit) acceleration. See <https://github.com/ggml-org/ggml> for more information about the underlying library.
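
The topics indexed above follow GGML's usual workflow: initialize a context, create tensors, build a computation graph, compute it, and free the context. A minimal sketch of that flow is shown below. Note that the index lists topic titles, not exports, so every function name here is an illustrative assumption inferred from those titles rather than a confirmed ggmlR signature; consult the individual help pages for the actual interfaces.

```r
## Hypothetical sketch -- function names are assumptions inferred from the
## topic index ("Initialize GGML context", "Create 2D tensor", etc.),
## not confirmed ggmlR exports.
library(ggmlR)

ctx <- ggml_init()                    # "Initialize GGML context"
a   <- ggml_new_tensor_2d(ctx, 4, 3) # "Create 2D tensor"
b   <- ggml_new_tensor_2d(ctx, 4, 2) # "Create 2D tensor"
ggml_set_f32(a, 1.0)                 # "Set F32 data"
ggml_set_f32(b, 2.0)                 # "Set F32 data"

out <- ggml_mul_mat(ctx, a, b)       # "Matrix Multiplication (Graph)"
ggml_graph_compute(ctx, out)         # "Compute graph"
ggml_get_f32(out)                    # "Get F32 data"

ggml_free(ctx)                       # "Free GGML context"
```

The graph-building calls (such as the matrix multiplication) only record operations; no arithmetic happens until the graph-compute step, which is why the index distinguishes "(Graph)" topics from direct accessors like the F32 get/set helpers.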