edgemodelr 0.1.6 package

Local Large Language Model Inference Engine

Enables R users to run large language models locally using 'GGUF' model files and the 'llama.cpp' inference engine. Provides a complete R interface for loading models, generating text completions, and streaming responses in real time. Supports local inference without requiring cloud APIs or internet connectivity, ensuring complete data privacy and control. Based on the 'llama.cpp' project by Georgi Gerganov (2023) <https://github.com/ggml-org/llama.cpp>.
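The workflow described above (load a GGUF model, generate a completion, release resources) might look like the sketch below. The function names (`edge_load_model()`, `edge_completion()`, `edge_free_model()`), their parameters, and the model path are assumptions for illustration, not verified against the edgemodelr API; consult the package reference manual for the actual exported functions.

```r
# Hypothetical sketch of local inference with edgemodelr.
# Function names and arguments below are assumed, not confirmed.
library(edgemodelr)

# Load a local GGUF model file (path is illustrative)
ctx <- edge_load_model("models/llama-3.2-1b-q4_k_m.gguf", n_ctx = 2048)

# Generate a completion entirely on-device -- no cloud API or network access
out <- edge_completion(ctx, "Explain what a GGUF file is.", n_predict = 128)
cat(out)

# Free the model context when done
edge_free_model(ctx)
```

Because inference runs locally, the prompt and the generated text never leave the machine, which is the privacy property the description emphasizes.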

  • Maintainer: Pawan Rama Mali
  • License: MIT + file LICENSE
  • Last published: 2026-02-07 06:10:49 UTC