Ayurveda Chatbot using LLaMA and RAG
This project is an interactive Ayurveda chatbot that uses a Retrieval-Augmented Generation (RAG) pipeline powered by the LLaMA language model via Groq. The chatbot provides Ayurvedic knowledge and answers user queries grounded in indexed PDF content.
Features
- PDF Knowledge Base: Indexes Ayurvedic texts (PDFs) so answers stay domain-specific.
- RAG Pipeline: Combines FAISS vector retrieval and LLaMA for context-aware responses.
- Streamlit Interface: Easy-to-use frontend for interacting with the chatbot.
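To illustrate the RAG flow, here is a minimal, stdlib-only sketch. A real pipeline embeds chunks into dense vectors with an embedding model and searches them with FAISS; this toy version substitutes word-overlap cosine similarity, so only the retrieve-then-prompt logic is representative. The corpus strings and function names are illustrative, not taken from backend.py.

```python
import math
import re

# Toy corpus standing in for text chunks extracted from the Ayurvedic PDFs.
chunks = [
    "Ashwagandha is used as a rasayana (rejuvenative) herb.",
    "Triphala is a classic formulation of three fruits.",
    "Vata, pitta, and kapha are the three doshas of Ayurveda.",
]

def embed(text):
    """Stand-in for a sentence-embedding model: a bag of lowercase words."""
    return set(re.findall(r"[a-z]+", text.lower()))

def similarity(a, b):
    """Cosine similarity of two binary bag-of-words vectors."""
    if not a or not b:
        return 0.0
    return len(a & b) / math.sqrt(len(a) * len(b))

# FAISS would hold dense vectors here; a list of token sets works for a demo.
index = [embed(c) for c in chunks]

def retrieve(query, k=1):
    """Return the k chunks most similar to the query."""
    q = embed(query)
    ranked = sorted(range(len(chunks)),
                    key=lambda i: similarity(q, index[i]), reverse=True)
    return [chunks[i] for i in ranked[:k]]

# The retrieved context is stuffed into the prompt sent to LLaMA via Groq.
context = retrieve("What are the three doshas?")[0]
prompt = (f"Answer using only this context:\n{context}\n\n"
          f"Question: What are the three doshas?")
```

In the real project, `embed` would be an embedding model, `index` a FAISS index, and `prompt` would be sent to the Groq chat-completion API.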
Requirements
- Python 3.8+
- GPU support (optional; LLM inference runs remotely via Groq)
- Groq API key for access to the LLaMA model
Installation
1. Clone the Repository
git clone https://git.digimantra.com/SHREY/AyurBot.git
cd AyurBot
2. Create and Activate a Virtual Environment
On Linux/macOS:
python3 -m venv env
source env/bin/activate
On Windows:
python -m venv env
env\Scripts\activate
3. Install Dependencies
pip install -r requirements.txt
4. Configure the .env File
Create a .env file in the project root containing your Groq API key so the backend can call the LLaMA model.
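Assuming the backend reads the key from the environment (GROQ_API_KEY is the variable name Groq's client library looks for by default), the .env file might contain:

```
GROQ_API_KEY=your_groq_api_key_here
```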
Usage
- Preprocess PDFs and Create the FAISS Index
Ensure the PDF files (e.g., Book1.pdf and Book2.pdf) are placed in the project directory.
Run the backend script to preprocess the data and create a FAISS index:
python3 backend.py Book1.pdf Book2.pdf --index-path faiss_index
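The preprocessing step typically splits the extracted PDF text into overlapping chunks before embedding them into the FAISS index; backend.py's exact parameters aren't shown here, so the chunk size, overlap, and function name below are illustrative.

```python
def chunk_text(text, chunk_size=500, overlap=50):
    """Split extracted PDF text into overlapping chunks for embedding.

    The overlap keeps sentences that straddle a chunk boundary
    retrievable from at least one chunk.
    """
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

# Example: 1200 characters -> three chunks, each sharing 50 chars
# with its neighbor.
pieces = chunk_text("x" * 1200)
```

Each chunk would then be embedded and added to the FAISS index saved at --index-path.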
- Start the Chatbot
Launch the Streamlit interface:
streamlit run frontend.py
Access the chatbot in your browser at http://localhost:8501.