# BasicRAG

Simple single-call RAG: retrieve once, build a prompt, generate once.
## Overview
| Field | Value |
|---|---|
| Type | Generation |
| Algorithm | Retrieve + Generate |
| Modality | Text |
## How It Works
- Retrieve top-k documents using configured retrieval pipeline
- Build context from retrieved documents
- Generate answer using LLM with prompt template
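The three steps above can be sketched in a few lines of Python. This is a minimal illustration, not the pipeline's actual implementation: the `retriever` and `llm` objects and their `search`/`generate` methods are hypothetical stand-ins for whatever the configured retrieval pipeline and LLM expose.

```python
def basic_rag(query, retriever, llm, prompt_template, top_k=5):
    """Single-call RAG sketch: retrieve once, build a prompt, generate once."""
    # 1. Retrieve the top-k documents for the query.
    docs = retriever.search(query, top_k=top_k)
    # 2. Build the context block by joining the retrieved document contents.
    context = "\n\n".join(doc["content"] for doc in docs)
    # 3. Fill the prompt template and call the LLM exactly once.
    prompt = prompt_template.format(context=context, query=query)
    return llm.generate(prompt)
```

Because there is only one retrieval and one generation call per query, latency and cost scale linearly with the number of queries, which is what makes this pipeline a convenient baseline.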
## Configuration

```yaml
_target_: autorag_research.pipelines.generation.basic_rag.BasicRAGPipelineConfig
name: basic_rag
retrieval_pipeline_name: bm25
llm: gpt-4o-mini
prompt_template: |
  Context:
  {context}
  Question: {query}
  Answer:
top_k: 5
batch_size: 100
```
## Options
| Option | Type | Default | Description |
|---|---|---|---|
| name | str | required | Unique pipeline instance name |
| retrieval_pipeline_name | str | required | Name of retrieval pipeline to use |
| llm | str or BaseLLM | required | LLM instance or config name |
| prompt_template | str | default | Template with {context} and {query} |
| top_k | int | 10 | Documents to retrieve |
| batch_size | int | 100 | Queries per batch |
## Prompt Template Variables

| Variable | Description |
|---|---|
| `{context}` | Retrieved document contents |
| `{query}` | Original query |
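Both variables are filled with plain `str.format` substitution, so a custom template only needs the `{context}` and `{query}` placeholders. A small sketch (the document text and question here are invented for illustration):

```python
# The default-style template, written as a plain Python string.
template = (
    "Context:\n"
    "{context}\n"
    "Question: {query}\n"
    "Answer:"
)

# Fill both placeholders exactly as the pipeline would.
prompt = template.format(
    context="Doc 1: The Eiffel Tower is in Paris.",
    query="Where is the Eiffel Tower?",
)
```

Note that any literal braces in a custom template must be doubled (`{{` and `}}`), or `str.format` will treat them as placeholders.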
## When to Use
Good for:
- Simple Q&A tasks
- Baseline RAG implementation
- Quick prototyping
Consider advanced pipelines for:
- Multi-hop reasoning
- Iterative retrieval
- Complex answer synthesis
## Citation

```bibtex
@article{lewis2020retrieval,
  title={Retrieval-augmented generation for knowledge-intensive {NLP} tasks},
  author={Lewis, Patrick and Perez, Ethan and Piktus, Aleksandra and Petroni, Fabio and Karpukhin, Vladimir and Goyal, Naman and K{\"u}ttler, Heinrich and Lewis, Mike and Yih, Wen-tau and Rockt{\"a}schel, Tim and others},
  journal={Advances in Neural Information Processing Systems},
  volume={33},
  pages={9459--9474},
  year={2020}
}
```