Ask Your PDF, locally

An AI app that lets you upload a PDF and ask questions about it. It uses StableVicuna 13B and runs locally.

[UI screenshot of Ask Your PDF: answering a question about the 2303.12712 paper, a 7 MB PDF file]

This is an attempt to recreate Alejandro AO's langchain-ask-pdf (also check out his tutorial on YouTube) using open-source models running locally.

It uses all-MiniLM-L6-v2 for embeddings instead of OpenAI Embeddings, and StableVicuna-13B as the language model instead of OpenAI's models.
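For reference, the overall pipeline looks roughly like the sketch below. This is a minimal, hypothetical example of the same approach (extract text, split into chunks, embed locally, retrieve, answer with the local model), not a copy of this repo's app.py; the class names assume a classic LangChain release plus PyPDF2, sentence-transformers, faiss-cpu, and llama-cpp-python with GGML support.

```python
# Hypothetical sketch of the pipeline (not this repo's app.py).
from PyPDF2 import PdfReader
from langchain.text_splitter import CharacterTextSplitter
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import FAISS
from langchain.llms import LlamaCpp
from langchain.chains.question_answering import load_qa_chain

# 1. Extract the PDF's text.
reader = PdfReader("paper.pdf")
text = "".join(page.extract_text() or "" for page in reader.pages)

# 2. Split it into overlapping chunks so each one fits in the context window.
chunks = CharacterTextSplitter(chunk_size=1000, chunk_overlap=200).split_text(text)

# 3. Embed the chunks locally with all-MiniLM-L6-v2 (no OpenAI API calls).
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
store = FAISS.from_texts(chunks, embeddings)

# 4. Answer questions with the local StableVicuna GGML model on the CPU.
llm = LlamaCpp(model_path="stable-vicuna-13B.ggml.q4_2.bin", n_ctx=2048)
chain = load_qa_chain(llm, chain_type="stuff")

question = "What is this paper about?"
docs = store.similarity_search(question)
print(chain.run(input_documents=docs, question=question))
```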

It runs on the CPU and is impractically slow; it was created more as an experiment than a practical tool, but I am still fairly happy with the results.

Requirements

A GPU is not used and is not required.

You can squeeze it into 16 GB of RAM, but I recommend 24 GB or more.

Installation

  • Install the requirements (preferably in a virtual environment): pip install -r requirements.txt

  • Download stable-vicuna-13B.ggml.q4_2.bin from TheBloke/stable-vicuna-13B-GGML and place it in the project folder.
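If you want to confirm the model file is usable before launching the UI, a quick smoke test like the hypothetical one below should print a short completion. It assumes llama-cpp-python with GGML support is installed via requirements.txt, and uses StableVicuna's prompt format.

```python
# Hypothetical smoke test: verify the downloaded GGML model loads and generates.
from langchain.llms import LlamaCpp

llm = LlamaCpp(model_path="stable-vicuna-13B.ggml.q4_2.bin", n_ctx=2048)
# StableVicuna expects the "### Human: ... ### Assistant:" prompt format.
print(llm("### Human: Say hello.\n### Assistant:"))
```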

Usage

Run streamlit run app.py (on Windows: streamlit run .\app.py)

This should launch the UI in your default browser. Select a PDF file, ask your question, and wait patiently.
