'Ollama' Language Models
Append message to a list
Generate a chat completion with message history
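A minimal chat round-trip can be sketched as follows. This assumes a running local Ollama server, a pulled model (here `llama3.1`, an illustrative choice), and the message-helper conventions listed in this index; exact argument names may differ slightly across package versions:

```r
library(ollamar)

# Build a message history as a list of role/content messages
messages <- create_messages(
  create_message("You are a terse assistant.", role = "system"),
  create_message("Briefly, why is the sky blue?", role = "user")
)

# Send the history to a local model; output = "text" asks for a character result
reply <- chat("llama3.1", messages, output = "text")

# Append the assistant's reply so the history can be reused on the next turn
messages <- append_message(reply, role = "assistant", x = messages)
```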
Check if an option is valid
Check if a vector of options is valid
Copy a model
Create a message
Create a list of messages
Create an httr2 request object
Create a model from a Modelfile
Delete a message in a specified position from a list
Delete a model and its data
Generate embeddings for inputs
Generate embeddings for a single prompt - deprecated in favor of `embed()`
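For example, embeddings for several inputs can be requested in one call. This sketch assumes a local server with an embedding model such as `nomic-embed-text` already pulled; `embeddings()` is the older single-prompt form, kept for backward compatibility:

```r
library(ollamar)

# embed() accepts a character vector and returns one embedding per input
# (the exact return shape depends on the package version)
vecs <- embed("nomic-embed-text", c("apple", "banana"))
```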
Encode images in messages to base64 format
Generate a response for a given prompt
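A single-prompt completion is the simplest entry point; a sketch under the same running-server assumption, with `llama3.1` as an illustrative model name:

```r
library(ollamar)

# Generate a completion for one prompt and print it
txt <- generate("llama3.1", "Name three R plotting packages.", output = "text")
cat(txt)
```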
Get tool calls helper function
Read image file and encode it to base64
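Once encoded to base64, an image can travel with a message to a vision-capable model. A sketch assuming such a model (e.g. `llava`) is available locally; the file path and the `images` argument to `create_message()` are illustrative assumptions:

```r
library(ollamar)

# Encode a local image (path is illustrative) and attach it to a user message
img <- image_encode_base64("plot.png")
msg <- create_message("Describe this image.", role = "user", images = list(img))
resp <- chat("llava", list(msg), output = "text")
```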
Insert message into a list at a specified position
List models that are available locally
Check if model is available locally
Model options
Chat with a model in real-time in the R console
Package configuration
Prepend message to a list
List models that are currently loaded into memory
Pull/download a model from the Ollama library
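Typical local model management ties several of these topics together; a sketch under the same running-server assumption, with illustrative model names:

```r
library(ollamar)

list_models()                        # models available locally
if (!model_avail("llama3.1")) {      # check local availability first
  pull("llama3.1")                   # download from the Ollama library
}
copy("llama3.1", "llama3.1-backup")  # duplicate under a new name
delete("llama3.1-backup")            # remove the copy and its data
```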
Push or upload a model to a model library
Process httr2 response object for streaming
Process httr2 response object
Search for options based on a query
Show model information
Stream handler helper function
Test connection to Ollama server
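A sensible first step in any session is confirming the server is reachable; by default the package targets the standard local endpoint (http://localhost:11434). A sketch:

```r
library(ollamar)

# Returns a response object; a 200 status indicates the server is up
conn <- test_connection()
```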
Validate a message
Validate a list of messages
Validate additional options or parameters provided to the API call
An interface for easily running local language models with the 'Ollama' <https://ollama.com> server and its API endpoints (see <https://github.com/ollama/ollama/blob/main/docs/api.md> for details). It lets you run open-source large language models locally on your machine.
Useful links