This is a simple Retrieval-Augmented Generation (RAG) system that lets you ask questions about your university files (assignments, lectures, presentations, code, etc.).
It uses Flask, LlamaIndex, Ollama, ChromaDB, and HuggingFace embeddings to answer questions.
- Upload your university materials in one folder
- Ask questions about them via an API (`/api/question`)
- Uses LLMs (like `llama2:7b`) and semantic search
- Prompt optimized for Polish academic context
```
.
├── api.py            # Flask app (API endpoint)
├── model.py          # LLM + embedding + index + query logic
├── config.py         # Configuration loaded from .env
├── .env              # Environment variables
├── /docs             # Folder with university documents
└── requirements.txt  # Python dependencies
```
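For orientation, `api.py` might look roughly like the sketch below. This is not the actual implementation: `ask_question` is a hypothetical helper standing in for the query logic in `model.py`, and `HTTP_PORT` is assumed to be exposed by `config.py` (matching the `.env` example further down).

```python
# api.py (sketch, not the actual file)
from flask import Flask, jsonify, request

from config import HTTP_PORT    # assumed name, matching the .env example below
from model import ask_question  # hypothetical helper wrapping the query logic

app = Flask(__name__)

@app.route("/api/question", methods=["POST"])
def question():
    payload = request.get_json()
    answer = ask_question(payload["question"])
    return jsonify({"answer": str(answer)})

if __name__ == "__main__":
    app.run(port=HTTP_PORT)
```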
Clone the repository and install the dependencies:

```bash
git clone <your-repo-url>
cd <project-folder>
pip install -r requirements.txt
```
If dependency resolution fails with `requirements.txt`, install this module manually:

```bash
pip install llama-index-vector-stores-chroma
```
Create a `.env` file in the project root. Example:

```env
DOCS_DIR=./docs
HTTP_PORT=7654
CHROMA_HOST=localhost
CHROMA_PORT=8000
```
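A minimal `config.py` reading these variables could look like this (a sketch assuming the `python-dotenv` package; the actual file may differ):

```python
# config.py (sketch; assumes the python-dotenv package)
import os

from dotenv import load_dotenv

load_dotenv()  # read variables from .env in the project root

DOCS_DIR = os.getenv("DOCS_DIR", "./docs")
HTTP_PORT = int(os.getenv("HTTP_PORT", "7654"))
CHROMA_HOST = os.getenv("CHROMA_HOST", "localhost")
CHROMA_PORT = int(os.getenv("CHROMA_PORT", "8000"))
```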
Put your `.pdf`, `.docx`, `.txt`, `.md`, or `.py` files in the `./docs` folder.
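Indexing that folder in `model.py` might look roughly like this sketch (modular LlamaIndex imports; the embedding model name and the Chroma collection name are assumptions, not the project's actual choices):

```python
# model.py core (sketch; model and collection names are assumptions)
import chromadb
from llama_index.core import (Settings, SimpleDirectoryReader,
                              StorageContext, VectorStoreIndex)
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.llms.ollama import Ollama
from llama_index.vector_stores.chroma import ChromaVectorStore

from config import CHROMA_HOST, CHROMA_PORT, DOCS_DIR

# LLM served by Ollama and a HuggingFace embedding model (name assumed)
Settings.llm = Ollama(model="llama2:7b")
Settings.embed_model = HuggingFaceEmbedding(
    model_name="sentence-transformers/all-MiniLM-L6-v2")

# Store vectors in a ChromaDB collection (collection name assumed)
client = chromadb.HttpClient(host=CHROMA_HOST, port=CHROMA_PORT)
collection = client.get_or_create_collection("university_docs")
vector_store = ChromaVectorStore(chroma_collection=collection)
storage_context = StorageContext.from_defaults(vector_store=vector_store)

# Load everything in ./docs and build the index
documents = SimpleDirectoryReader(DOCS_DIR).load_data()
index = VectorStoreIndex.from_documents(documents,
                                        storage_context=storage_context)
query_engine = index.as_query_engine()
```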
Start the API server:

```bash
python api.py
```
Then send a POST request to:

```
POST http://localhost:7654/api/question
```

with the JSON body:

```json
{
  "question": "Jak działa algorytm Dijkstry?"
}
```
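For example, from Python (assuming the `requests` package is installed):

```python
import requests

# Ask the running API a question and print the raw JSON response
resp = requests.post(
    "http://localhost:7654/api/question",
    json={"question": "Jak działa algorytm Dijkstry?"},
)
print(resp.json())
```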
To run it in command-line mode and ask questions interactively:

```bash
python model.py
```
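The interactive mode is presumably a simple read-ask-print cycle; reusing `query_engine` from the sketch above, it might look like this:

```python
# Interactive CLI loop (sketch; model.py's actual loop may differ)
while True:
    question = input("Question (empty line to quit): ")
    if not question.strip():
        break
    print(query_engine.query(question))
```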
- LlamaIndex
- Ollama (`llama2:7b` used)
- ChromaDB
- Flask
- HuggingFace Embeddings