localLLM 1.1.0 package

Running Local LLMs with 'llama.cpp' Backend

annotation_sink_csv: Create a CSV sink for streaming annotation chunks
apply_chat_template: Apply Chat Template to Format Conversations
apply_gemma_chat_template: Apply Gemma-Compatible Chat Template
backend_free: Free localLLM backend
backend_init: Initialize localLLM backend
compute_confusion_matrices: Compute confusion matrices from multi-model annotations
context_create: Create Inference Context for Text Generation
detokenize: Convert Token IDs Back to Text
document_end: Finish automatic run documentation
document_start: Start automatic run documentation
dot-with_hf_token: Temporarily apply an HF token for a scoped operation
download_model: Download a model manually
explore: Compare multiple LLMs over a shared set of prompts
generate_parallel: Generate Text in Parallel for Multiple Prompts
generate: Generate Text Using Language Model Context
get_lib_path: Get Backend Library Path
get_model_cache_dir: Get the model cache directory
hardware_profile: Inspect detected hardware resources
install_localLLM: Install localLLM Backend Library
intercoder_reliability: Intercoder reliability for LLM annotations
lib_is_installed: Check if Backend Library is Installed
list_cached_models: List cached models on disk
list_ollama_models: List GGUF models managed by Ollama
localLLM-package: R Interface to llama.cpp with Runtime Library Loading
model_load: Load Language Model with Automatic Download Support
quick_llama_reset: Reset quick_llama state
quick_llama: Quick LLaMA Inference
set_hf_token: Configure Hugging Face access token
smart_chat_template: Smart Chat Template Application
tokenize_test: Test tokenize function (debugging)
tokenize: Convert Text to Token IDs
validate: Validate model predictions against gold labels and peer agreement

Provides R bindings to the 'llama.cpp' library for running large language models. The package uses a lightweight architecture in which the C++ backend library is downloaded at runtime rather than bundled with the package itself. Features include text generation, reproducible generation, and parallel inference.
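A minimal sketch of a typical session is shown below. It assumes quick_llama() accepts a prompt string and that model_load(), context_create(), and generate() compose as shown; the argument order and the model path are illustrative placeholders, so consult each function's help page for the exact signatures.

  library(localLLM)

  # One-time setup: fetch the C++ 'llama.cpp' backend library at runtime
  if (!lib_is_installed()) {
    install_localLLM()
  }

  # Quick path: a single call covering model download, loading, and generation
  quick_llama("Summarise the plot of Hamlet in one sentence.")

  # Lower-level path: load a model, create an inference context, then generate
  model <- model_load("path/to/model.gguf")   # illustrative local path
  ctx   <- context_create(model)
  generate(ctx, "Hello, world!")

  # Token-level utilities
  ids <- tokenize(model, "Hello, world!")
  detokenize(model, ids)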

  • Maintainer: Yaosheng Xu
  • License: MIT + file LICENSE
  • Last published: 2025-12-17