# AutoRAG-Research
Automate your RAG research with reproducible benchmarks.
## What is AutoRAG-Research?
AutoRAG-Research is a Python framework for:
- Running RAG benchmarks on standard datasets
- Evaluating retrieval and generation pipelines
- Comparing algorithms with reproducible metrics
## Quick Start
```bash
# Install the package
pip install autorag-research

# Start the supporting services (e.g. the benchmark database) in the background
docker-compose up -d

# Restore a prepared benchmark dataset (BEIR SciFact with OpenAI small embeddings)
autorag-research data restore beir scifact_openai-small

# Run the benchmark against the restored database
autorag-research run --db-name=beir_scifact_test_openai_small
```
## Choose Your Path
| I want to... | Go to |
|---|---|
| Run text retrieval benchmarks | Text Retrieval Tutorial |
| Run full RAG with generation | Text RAG Tutorial |
| Work with visual documents | Multimodal Tutorial |
| Use my own dataset | Custom Dataset Tutorial |
| Test my own pipeline | Custom Pipeline Tutorial |
| Create my own metric | Custom Metric Tutorial |
## Documentation
- Learn - Core concepts and architecture
- Tutorial - Step-by-step guides
- Datasets - Available benchmarks
- Pipelines - Retrieval and generation algorithms
- Metrics - Evaluation measures
- CLI Reference - Command-line usage and options
- API Reference - Python API