Retrieval-Augmented Generation (RAG) Workflows
Merge overlapping chunks in retrieved results
Embed text using Azure AI Foundry
Retrieve chunks using VSS and BM25
Launch the Ragnar Store Inspector
Chunk text
View chunks with the store inspector
Embed text using a Bedrock model
Embed text using a Databricks model
Embed text using the Google Vertex AI platform
Embed text
Retrieve chunks using the BM25 score
Embed text using Snowflake
Chunk a Markdown document
Segment Markdown text
Markdown documents
Markdown document chunks
Serve a Ragnar store over MCP
Find links on a page
Read an HTML document
Read a document as Markdown
Register a 'retrieve' tool with ellmer
Insert or update chunks in a RagnarStore
Retrieve chunks using vector similarity search (VSS)
Retrieve chunks from a RagnarStore
Visualize a store using Embedding Atlas
Build a Ragnar Store index
Create and connect to a vector store
Concurrently ingest documents into a Ragnar store
ragnar: Retrieval-Augmented Generation (RAG) Workflows
Convert files to Markdown
Provides tools for implementing Retrieval-Augmented Generation (RAG) workflows with Large Language Models (LLMs). Includes functions for document processing, text chunking, embedding generation, storage management, and content retrieval. Supports various document types and embedding providers ('Ollama', 'OpenAI'), with 'DuckDB' as the default storage backend. Integrates with the 'ellmer' package to equip chat objects with retrieval capabilities. Designed to offer both sensible defaults and customization options, with transparent access to intermediate outputs. For a review of retrieval-augmented generation methods, see Gao et al. (2023) "Retrieval-Augmented Generation for Large Language Models: A Survey" <doi:10.48550/arXiv.2312.10997>.
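The workflow the Description outlines (read, chunk, embed, store, retrieve, connect to a chat) can be sketched end-to-end with the functions listed in this index. This is a minimal sketch, not a definitive usage pattern: the URL, store path, and query are placeholders, it assumes an OpenAI API key is configured in the environment, and exact argument names may differ from the current release — consult the individual help topics.

```r
library(ragnar)

# Create a DuckDB-backed store (the default backend), embedding with OpenAI.
store <- ragnar_store_create(
  "my_store.duckdb",                 # placeholder path
  embed = \(x) embed_openai(x)
)

# Read a document as Markdown, chunk it, and insert the chunks.
doc <- read_as_markdown("https://example.com/some-page.html")  # placeholder URL
chunks <- markdown_chunk(doc)
ragnar_store_insert(store, chunks)
ragnar_store_build_index(store)

# Retrieve relevant chunks (combining VSS and BM25).
relevant <- ragnar_retrieve(store, "what does the page say about chunking?")

# Equip an ellmer chat object with a 'retrieve' tool.
chat <- ellmer::chat_openai()
ragnar_register_tool_retrieve(chat, store)
```

Each intermediate object (`doc`, `chunks`, `relevant`) is an ordinary R value, reflecting the package's stated goal of transparent access to intermediate outputs.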
Useful links