Ayurveda Chatbot using LLaMA and RAG

This project is an interactive Ayurveda chatbot built on a Retrieval-Augmented Generation (RAG) pipeline powered by the LLaMA language model via Ollama. The chatbot answers user queries with Ayurvedic knowledge retrieved from an indexed collection of PDF texts.


Features

  • PDF Knowledge Base: Indexed Ayurvedic texts provide domain-specific answers.
  • RAG Pipeline: Combines FAISS vector retrieval and LLaMA for context-aware responses.
  • Streamlit Interface: Easy-to-use frontend for interacting with the chatbot.
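The retrieve-then-generate idea behind the RAG pipeline can be sketched in miniature. This toy example stands in for the real components: a bag-of-words cosine similarity replaces FAISS and the embedding model, and the prompt template, function names, and sample chunks are illustrative assumptions, not the project's actual code.

```python
import math
import re
from collections import Counter
from typing import List

def embed(text: str) -> Counter:
    """Toy 'embedding': a term-frequency vector over lowercase words.
    The real pipeline would use a neural embedding model plus FAISS."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: List[str], k: int = 2) -> List[str]:
    """Return the k chunks most similar to the query."""
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, context: List[str]) -> str:
    """Assemble the context-augmented prompt sent to the LLM."""
    joined = "\n".join(context)
    return f"Answer using only this context:\n{joined}\n\nQuestion: {query}"

chunks = [
    "Ashwagandha is used in Ayurveda as an adaptogenic herb.",
    "Triphala is a traditional blend of three fruits.",
    "Streamlit is a Python web framework.",
]
top = retrieve("What herb is adaptogenic in Ayurveda?", chunks, k=1)
print(build_prompt("What herb is adaptogenic in Ayurveda?", top))
```

In the full pipeline, `build_prompt`'s output would be sent to the LLaMA model through Ollama, so the answer is grounded in the retrieved PDF chunks rather than the model's parametric memory alone.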

Requirements

  • Python 3.8+
  • GPU support (optional but recommended for faster LLM inference)
  • LLaMA model via Ollama

Installation

1. Clone the Repository

git clone https://git.digimantra.com/SHREY/AyurBot.git
cd AyurBot

2. Create and Activate a Virtual Environment

On Linux/macOS:

python3 -m venv env
source env/bin/activate

On Windows:

python -m venv env
env\Scripts\activate
3. Install Dependencies

pip install -r requirements.txt

4. Set Up the .env File
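The README does not spell out what the .env file should contain. The snippet below is a plausible example only: the variable names are assumptions (11434 is Ollama's standard default port), so check backend.py and frontend.py for the keys actually read.

```
# Hypothetical keys -- verify against backend.py / frontend.py
OLLAMA_BASE_URL=http://localhost:11434
LLAMA_MODEL=llama3
FAISS_INDEX_PATH=faiss_index
```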

Usage

1. Preprocess PDFs and Create the FAISS Index

Ensure the PDF files (e.g., ayurveda_text.pdf) are placed in the project directory.

Run the backend script to preprocess the data and create a FAISS index:

python3 backend.py Book1.pdf Book2.pdf --index-path faiss_index
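The invocation above implies a command-line interface for backend.py with one or more positional PDF paths and an --index-path option. A sketch of how such an interface is commonly declared with argparse is shown below; it mirrors the command's shape but is not the project's actual backend.py.

```python
import argparse

def parse_args(argv=None):
    """CLI shape matching:
    python3 backend.py Book1.pdf Book2.pdf --index-path faiss_index"""
    parser = argparse.ArgumentParser(
        description="Build a FAISS index from one or more PDF files")
    # One or more source PDFs, in the order they should be indexed
    parser.add_argument("pdfs", nargs="+", help="source PDF files")
    # Where to write the index; the README's example uses 'faiss_index'
    parser.add_argument("--index-path", default="faiss_index",
                        help="output path for the FAISS index")
    return parser.parse_args(argv)

args = parse_args(["Book1.pdf", "Book2.pdf", "--index-path", "faiss_index"])
print(args.pdfs, args.index_path)  # ['Book1.pdf', 'Book2.pdf'] faiss_index
```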
2. Start the Chatbot

Launch the Streamlit interface:

streamlit run frontend.py

Access the chatbot in your browser at http://localhost:8501.