Installation
It is important that you review the Main Concepts section to understand the different components of PrivateGPT and how they interact with each other.
Base requirements to run PrivateGPT
1. Clone the PrivateGPT Repository
Clone the repository and navigate to it:
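A minimal sketch of this step (assuming the upstream repository is `zylon-ai/private-gpt`; adjust the URL if you are using a fork):

```shell
# Clone the PrivateGPT repository and enter it
git clone https://github.com/zylon-ai/private-gpt
cd private-gpt
```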
2. Install Python 3.11
If you do not have Python 3.11 installed, install it using a Python version manager like pyenv. Earlier Python versions are not supported.
macOS/Linux
Install and set Python 3.11 using pyenv:
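A sketch of this step (assuming a standard pyenv installation; the exact subcommands may vary slightly between pyenv versions):

```shell
# Install Python 3.11 and make it the version used in this directory
pyenv install 3.11
pyenv local 3.11
```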
Windows
Install and set Python 3.11 using pyenv-win:
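pyenv-win mirrors the pyenv interface, so the commands are the same (run from a terminal inside the repository; exact version strings available depend on your pyenv-win release):

```shell
# Install Python 3.11 and make it the version used in this directory
pyenv install 3.11
pyenv local 3.11
```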
3. Install Poetry
Install Poetry for dependency management by following the instructions on the official Poetry website.
A bug exists in Poetry versions 1.7.0 and earlier. We strongly recommend upgrading to a tested version.
To upgrade Poetry to the latest tested version, run poetry self update 1.8.3 after installing it.
4. Optional: Install make
To run various scripts, you need to install make. Follow the instructions for your operating system:
macOS
(Using Homebrew):
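For example:

```shell
# Install make via Homebrew
brew install make
```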
Windows
(Using Chocolatey):
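For example:

```shell
# Install make via Chocolatey (run from an elevated shell)
choco install make
```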
Install and Run Your Desired Setup
PrivateGPT allows customization of the setup, from fully local to cloud-based, by letting you decide which modules to use. To install only the required dependencies, PrivateGPT offers different extras that can be combined during the installation process:
Where <extra> can be any of the options described below.
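The general shape of the install command is a `poetry install` with one extra per category. A sketch for a fully local setup (the extra names here are assumptions based on the project's naming pattern; verify them against the pyproject.toml in your checkout):

```shell
# Install PrivateGPT with one extra per category: UI, LLM, embeddings, vector store
poetry install --extras "ui llms-ollama embeddings-ollama vector-stores-qdrant"
```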
Available Modules
You need to choose one option per category (LLM, Embeddings, Vector Stores, UI). Below are the tables listing the available options for each category.
LLM
Embeddings
Vector Stores
UI
A working Gradio UI client is provided to test the API, together with a set of useful tools such as bulk model download script, ingestion script, documents folder watch, etc. Please refer to the UI alternatives page for more UI alternatives.
Recommended Setups
These are just some examples of recommended setups. You can mix and match the different options to fit your needs. You'll find more information in the Manual section of the documentation.
Important for Windows: in the examples below showing how to run PrivateGPT with make run, the PGPT_PROFILES env var is set inline following Unix command-line syntax (works on macOS and Linux). If you are using Windows, you'll need to set the env var in a different way, for example set PGPT_PROFILES=ollama (cmd)
or $env:PGPT_PROFILES="ollama" (PowerShell).
Refer to the troubleshooting section for specific issues you might encounter.
Local, Ollama-powered setup - RECOMMENDED
The easiest way to run PrivateGPT fully locally is to depend on Ollama for the LLM. Ollama makes local LLMs and embeddings easy to install and use, abstracting away the complexity of GPU support. It's the recommended setup for local development.
Go to ollama.ai and follow the instructions to install Ollama on your machine.
After the installation, make sure the Ollama desktop app is closed.
Now, start Ollama service (it will start a local inference server, serving both the LLM and the Embeddings):
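A sketch of starting the service (assuming the standard ollama CLI is on your PATH):

```shell
# Start the local Ollama inference server
ollama serve
```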
Install the models to be used. The default settings-ollama.yaml is configured to use the llama3.1 8b LLM (~4GB) and the nomic-embed-text embeddings model (~275MB).
By default, PGPT will automatically pull models as needed. This behavior can be changed by modifying the ollama.autopull_models property.
In any case, if you want to manually pull models, run the following commands:
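A sketch of the manual pull, using the model names from the default settings file:

```shell
# Pull the default LLM and embeddings models referenced by settings-ollama.yaml
ollama pull llama3.1
ollama pull nomic-embed-text
```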
Once done, on a different terminal, you can install PrivateGPT with the following command:
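A sketch of the install command for this setup (the extra names are assumptions based on the project's naming pattern; verify them against your checkout):

```shell
# UI + Ollama LLM + Ollama embeddings + Qdrant vector store
poetry install --extras "ui llms-ollama embeddings-ollama vector-stores-qdrant"
```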
Once installed, you can run PrivateGPT. Make sure you have a working Ollama running locally before running the following command.
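The run command, with the profile set inline (Unix syntax; see the Windows note above):

```shell
# Run PrivateGPT using the ollama profile
PGPT_PROFILES=ollama make run
```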
PrivateGPT will use the already existing settings-ollama.yaml settings file, which is already configured to use Ollama LLM and Embeddings, and Qdrant. Review it and adapt it to your needs (different models, different Ollama port, etc.).
The UI will be available at http://localhost:8001
Private, Sagemaker-powered setup
If you need more performance, you can run a version of PrivateGPT that relies on powerful AWS Sagemaker machines to serve the LLM and Embeddings.
You need access to Sagemaker inference endpoints for the LLM and/or the embeddings, and AWS credentials properly configured.
Edit the settings-sagemaker.yaml file to include the correct Sagemaker endpoints.
Then, install PrivateGPT with the following command:
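A sketch of the install command for this setup (extra names are assumptions based on the project's naming pattern; verify them against your checkout):

```shell
# UI + Sagemaker LLM + Sagemaker embeddings + Qdrant vector store
poetry install --extras "ui llms-sagemaker embeddings-sagemaker vector-stores-qdrant"
```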
Once installed, you can run PrivateGPT:
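The run command, with the profile set inline (Unix syntax; see the Windows note above):

```shell
# Run PrivateGPT using the sagemaker profile
PGPT_PROFILES=sagemaker make run
```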
PrivateGPT will use the already existing settings-sagemaker.yaml settings file, which is already configured to use Sagemaker LLM and Embeddings endpoints, and Qdrant.
The UI will be available at http://localhost:8001
Non-Private, OpenAI-powered test setup
If you want to test PrivateGPT with OpenAI's LLM and Embeddings (bear in mind that your data is going to OpenAI!), proceed as follows:
You need an OPENAI API key to run this setup.
Edit the settings-openai.yaml file to include the correct API key. Never commit it! It's a secret! As an alternative to editing settings-openai.yaml, you can just set the env var OPENAI_API_KEY.
Then, install PrivateGPT with the following command:
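A sketch of the install command for this setup (extra names are assumptions based on the project's naming pattern; verify them against your checkout):

```shell
# UI + OpenAI LLM + OpenAI embeddings + Qdrant vector store
poetry install --extras "ui llms-openai embeddings-openai vector-stores-qdrant"
```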
Once installed, you can run PrivateGPT.
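The run command, with the profile set inline (Unix syntax; see the Windows note above):

```shell
# Run PrivateGPT using the openai profile
PGPT_PROFILES=openai make run
```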
PrivateGPT will use the already existing settings-openai.yaml settings file, which is already configured to use OpenAI LLM and Embeddings endpoints, and Qdrant.
The UI will be available at http://localhost:8001
Non-Private, Azure OpenAI-powered test setup
If you want to test PrivateGPT with Azure OpenAI's LLM and Embeddings (bear in mind that your data is going to Azure OpenAI!), proceed as follows:
You need access to Azure OpenAI inference endpoints for the LLM and/or the embeddings, and Azure OpenAI credentials properly configured.
Edit the settings-azopenai.yaml file to include the correct Azure OpenAI endpoints.
Then, install PrivateGPT with the following command:
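A sketch of the install command for this setup (extra names are assumptions based on the project's naming pattern; verify them against your checkout):

```shell
# UI + Azure OpenAI LLM + Azure OpenAI embeddings + Qdrant vector store
poetry install --extras "ui llms-azopenai embeddings-azopenai vector-stores-qdrant"
```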
Once installed, you can run PrivateGPT.
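The run command, with the profile set inline (Unix syntax; see the Windows note above):

```shell
# Run PrivateGPT using the azopenai profile
PGPT_PROFILES=azopenai make run
```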
PrivateGPT will use the already existing settings-azopenai.yaml settings file, which is already configured to use Azure OpenAI LLM and Embeddings endpoints, and Qdrant.
The UI will be available at http://localhost:8001
Local, Llama-CPP powered setup
If you want to run PrivateGPT fully locally without relying on Ollama, you can run the following command:
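A sketch of the install command for this setup (extra names are assumptions based on the project's naming pattern; verify them against your checkout):

```shell
# UI + local LlamaCPP LLM + HuggingFace embeddings + Qdrant vector store
poetry install --extras "ui llms-llama-cpp embeddings-huggingface vector-stores-qdrant"
```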
In order for local LLM and embeddings to work, you need to download the models to the models folder. You can do so by running the setup script:
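A sketch of invoking the setup script (assuming it lives at scripts/setup, the conventional location in the repository):

```shell
# Download the default local models into the models folder
poetry run python scripts/setup
```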
Once installed, you can run PrivateGPT with the following command:
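The run command, with the profile set inline (Unix syntax; see the Windows note above):

```shell
# Run PrivateGPT using the local profile
PGPT_PROFILES=local make run
```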
PrivateGPT will load the already existing settings-local.yaml file, which is already configured to use LlamaCPP LLM, HuggingFace embeddings and Qdrant.
The UI will be available at http://localhost:8001
Llama-CPP support
For PrivateGPT to run fully locally without Ollama, Llama.cpp is required; in particular, llama-cpp-python is used.
You’ll need to have a valid C++ compiler like gcc installed. See Troubleshooting: C++ Compiler for more details.
It’s highly encouraged that you fully read llama-cpp and llama-cpp-python documentation relevant to your platform. Running into installation issues is very likely, and you’ll need to troubleshoot them yourself.
Llama-CPP OSX GPU support
You will need to build llama.cpp with Metal support.
To do that, install llama.cpp's Python binding llama-cpp-python through pip with the compilation flag that activates Metal: you have to pass -DLLAMA_METAL=on to the CMake command that pip runs for you (see below).
In other words, one should simply run:
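A sketch of that command, passing the flag from the text via CMAKE_ARGS (note that newer llama-cpp-python releases renamed the flag to -DGGML_METAL=on; check the llama-cpp-python docs for the version pinned by your checkout):

```shell
# Rebuild llama-cpp-python from source with Metal enabled
CMAKE_ARGS="-DLLAMA_METAL=on" pip install --force-reinstall --no-cache-dir llama-cpp-python
```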
The above command will force the re-installation of llama-cpp-python with Metal support by compiling llama.cpp locally with your Metal libraries (shipped by default with macOS).
More information is available in the documentation of the libraries themselves:
Llama-CPP Windows NVIDIA GPU support
Windows GPU support is done through CUDA. Follow the instructions on the original llama.cpp repo to install the required dependencies.
Some tips to get it working with an NVIDIA card and CUDA (Tested on Windows 10 with CUDA 11.5 RTX 3070):
- Install latest VS2022 (and build tools) https://visualstudio.microsoft.com/vs/community/
- Install CUDA toolkit https://developer.nvidia.com/cuda-downloads
- Verify your installation is correct by running nvcc --version and nvidia-smi; ensure your CUDA version is up to date and your GPU is detected.
- [Optional] Install CMake to troubleshoot building issues by compiling llama.cpp directly: https://cmake.org/download/
If you have all required dependencies properly configured, running the following PowerShell command should succeed:
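A sketch of that PowerShell command (the -DLLAMA_CUBLAS=on flag matches older llama-cpp-python releases; newer ones renamed it to -DGGML_CUDA=on, so check the docs for your pinned version):

```shell
# PowerShell: rebuild llama-cpp-python from source with CUDA enabled
$env:CMAKE_ARGS='-DLLAMA_CUBLAS=on'; poetry run pip install --force-reinstall --no-cache-dir llama-cpp-python
```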
If your installation was correct, you should see a message similar to BLAS = 1 the next time you start the server. If there is some issue, please refer to the troubleshooting section.
Note that llama.cpp offloads matrix calculations to the GPU but the performance is still hit heavily due to latency between CPU and GPU communication. You might need to tweak batch sizes and other parameters to get the best performance for your particular system.
Llama-CPP Linux NVIDIA GPU support and Windows-WSL
Linux GPU support is done through CUDA. Follow the instructions on the original llama.cpp repo to install the required external dependencies.
Some tips:
- Make sure you have an up-to-date C++ compiler
- Install CUDA toolkit https://developer.nvidia.com/cuda-downloads
- Verify your installation is correct by running nvcc --version and nvidia-smi; ensure your CUDA version is up to date and your GPU is detected.
After that, running the following command in the repository will install llama.cpp with GPU support:
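A sketch of that command (the -DLLAMA_CUBLAS=on flag matches older llama-cpp-python releases; newer ones renamed it to -DGGML_CUDA=on, so check the docs for your pinned version):

```shell
# Rebuild llama-cpp-python from source with CUDA enabled
CMAKE_ARGS='-DLLAMA_CUBLAS=on' poetry run pip install --force-reinstall --no-cache-dir llama-cpp-python
```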
If your installation was correct, you should see a message similar to BLAS = 1 the next time you start the server. If there is some issue, please refer to the troubleshooting section.
Llama-CPP Linux AMD GPU support
Linux GPU support is done through ROCm. Some tips:
- Install ROCm from quick-start install guide
- Install PyTorch for ROCm
- Install bitsandbytes for ROCm
After that, running the following command in the repository will install llama.cpp with GPU support:
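A sketch of that command. Both the hipBLAS flag name and the ROCm compiler paths below are assumptions that depend on your ROCm install location and your pinned llama-cpp-python version (newer releases renamed the flag to -DGGML_HIPBLAS=on); verify both before running:

```shell
# Rebuild llama-cpp-python from source with ROCm/hipBLAS enabled,
# using the clang shipped with ROCm (path assumed to be /opt/rocm)
CMAKE_ARGS='-DLLAMA_HIPBLAS=on' CC=/opt/rocm/llvm/bin/clang CXX=/opt/rocm/llvm/bin/clang++ \
  poetry run pip install --force-reinstall --no-cache-dir llama-cpp-python
```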
If your installation was correct, you should see a message similar to BLAS = 1 the next time you start the server.
Llama-CPP Known issues and Troubleshooting
Execution of LLMs locally still has a lot of sharp edges, especially when running on non-Linux platforms. You might encounter several issues:
- Performance: RAM or VRAM usage is very high, your computer might experience slowdowns or even crashes.
- GPU Virtualization on Windows and OSX: simply not possible with Docker Desktop; you have to run the server directly on the host.
- Building errors: Some of PrivateGPT dependencies need to build native code, and they might fail on some platforms. Most likely you are missing some dev tools in your machine (updated C++ compiler, CUDA is not on PATH, etc.). If you encounter any of these issues, please open an issue and we’ll try to help.
One of the first reflexes to adopt is: get more information. If, during your installation, something does not go as planned, retry in verbose mode and see what goes wrong.
For example, when installing packages with pip install, you can add the option -vvv to show the details of the installation.
Llama-CPP Troubleshooting: C++ Compiler
If you encounter an error while building a wheel during the pip install process, you may need to install a C++ compiler on your computer.
For Windows 10/11
To install a C++ compiler on Windows 10/11, follow these steps:
- Install Visual Studio 2022.
- Make sure the following components are selected:
- Universal Windows Platform development
- C++ CMake tools for Windows
- Download the MinGW installer from the MinGW website.
- Run the installer and select the gcc component.
For OSX
- Check if you have a C++ compiler installed; Xcode should have done it for you. To install Xcode, go to the App Store, search for Xcode, and install it. Alternatively, you can install the command line tools by running xcode-select --install.
- If not, you can install clang or gcc with Homebrew: brew install gcc
Llama-CPP Troubleshooting: Mac Running Intel
When running a Mac with Intel hardware (not M1), you may run into clang: error: the clang compiler does not support '-march=native' during pip install.
If so, set your ARCHFLAGS during pip install, e.g.: ARCHFLAGS="-arch x86_64" pip3 install -r requirements.txt