
Ayurveda Chatbot using LLaMA and RAG

This project is an interactive Ayurveda chatbot built on a Retrieval-Augmented Generation (RAG) pipeline powered by the LLaMA language model via Groq. The chatbot answers user queries with Ayurvedic knowledge drawn from an indexed corpus of PDF texts.


Features

  • PDF Knowledge Base: Built from Ayurvedic texts, enabling domain-specific answers.
  • RAG Pipeline: Combines FAISS vector retrieval and LLaMA for context-aware responses.
  • Streamlit Interface: Easy-to-use frontend for interacting with the chatbot.
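The core of the RAG pipeline is simple: embed the user's question, find the most similar stored chunks, and prepend them to the prompt. A minimal sketch of that retrieval step, using plain cosine similarity in place of FAISS and toy 3-dimensional vectors standing in for real embeddings (all names and values here are illustrative, not the project's actual code):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, chunk_vecs, chunks, k=2):
    """Return the k chunks whose embeddings are closest to the query."""
    scored = sorted(zip(chunk_vecs, chunks),
                    key=lambda cv: cosine(query_vec, cv[0]), reverse=True)
    return [chunk for _, chunk in scored[:k]]

# Toy embeddings; a real pipeline would use an embedding model.
chunks = ["Vata governs movement.", "Pitta governs digestion.", "Kapha governs structure."]
vecs = [[1.0, 0.1, 0.0], [0.0, 1.0, 0.1], [0.1, 0.0, 1.0]]
query = [0.9, 0.2, 0.0]  # closest to the first chunk

context = "\n".join(retrieve(query, vecs, chunks, k=1))
prompt = f"Answer using this context:\n{context}\n\nQuestion: What governs movement?"
```

FAISS does the same nearest-neighbour search, but over millions of vectors at speed; the retrieved chunks are then injected into the LLaMA prompt exactly as above.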

Requirements

  • Python 3.8+
  • GPU support (optional but recommended for faster LLM inference)
  • Groq API key (the LLaMA model is accessed through Groq's API)

Installation

1. Clone the Repository

git clone https://git.digimantra.com/SHREY/AyurBot.git
cd AyurBot

2. Create and Activate a Virtual Environment

On Linux/macOS:

python3 -m venv env
source env/bin/activate

On Windows:

python -m venv env
env\Scripts\activate
3. Install Dependencies

pip install -r requirements.txt

4. Configure Environment Variables

Create a .env file in the project root containing your Groq API key.
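A minimal pattern for reading the key at startup (GROQ_API_KEY is the Groq SDK's default environment variable; if you use python-dotenv, call load_dotenv() before this):

```python
import os

# Contents of .env (kept out of version control via .gitignore):
#   GROQ_API_KEY=your_groq_api_key

def load_groq_key():
    """Fetch the Groq API key, failing fast with a clear message."""
    key = os.environ.get("GROQ_API_KEY")
    if not key:
        raise RuntimeError("GROQ_API_KEY is not set; add it to your .env file")
    return key
```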

Usage

  1. Preprocess PDFs and Create the FAISS Index

Ensure the PDF files (Book1.pdf and Book2.pdf) are in the project directory.

Run the backend script to preprocess the data and create a FAISS index:

python3 backend.py Book1.pdf Book2.pdf --index-path faiss_index
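Conceptually, backend.py extracts text from each PDF, splits it into overlapping chunks, embeds the chunks, and writes the FAISS index to --index-path. The chunking step, which strongly affects answer quality, can be sketched as follows (sizes are illustrative, not the project's actual parameters):

```python
def chunk_text(text, size=500, overlap=100):
    """Split text into fixed-size character chunks that overlap,
    so sentences straddling a boundary appear in both neighbours."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    step = size - overlap
    chunks = []
    for start in range(0, max(len(text) - overlap, 1), step):
        chunks.append(text[start:start + size])
    return chunks
```

Overlap trades index size for recall: a chunk boundary in the middle of a sentence no longer hides that sentence from retrieval, because the same span is present in the adjacent chunk too.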
  2. Start the Chatbot

Launch the Streamlit interface:

streamlit run frontend.py

Access the chatbot in your browser at http://localhost:8501.
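Behind the Streamlit UI, each user message follows the same flow: retrieve context from the FAISS index, then ask the model. A sketch of that glue code, with the retriever and client passed in so it works with any OpenAI-style SDK such as Groq's chat.completions interface (the model name and helper names are illustrative assumptions, not the project's actual code):

```python
def build_messages(question, context_chunks):
    """Assemble a chat request that grounds the model in retrieved text."""
    context = "\n\n".join(context_chunks)
    return [
        {"role": "system",
         "content": "Answer Ayurveda questions using only the provided context."},
        {"role": "user",
         "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ]

def answer(question, retriever, client, model="llama-3.1-8b-instant"):
    """Retrieve relevant chunks, then call the chat-completions endpoint."""
    chunks = retriever(question)
    response = client.chat.completions.create(
        model=model, messages=build_messages(question, chunks))
    return response.choices[0].message.content
```

Keeping the client as a parameter makes the function trivial to unit-test with a stub and to swap between providers without touching the retrieval logic.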