Quickstart
This guide provides a quick start for running different profiles of PrivateGPT using Docker Compose. The profiles cater to various environments, including Ollama setups (CPU, CUDA, macOS) and a fully local setup.
By default, Docker Compose will download pre-built images from a remote registry when starting the services. However, you have the option to build the images locally if needed. Details on building the Docker images locally are provided at the end of this guide.
If you want to run PrivateGPT locally without Docker, refer to the Local Installation Guide.
Prerequisites
- Docker and Docker Compose: Ensure both are installed on your system; see the Installation Guide for Docker and the Installation Guide for Docker Compose.
- Clone PrivateGPT Repository: Clone the PrivateGPT repository to your machine and navigate to the directory:
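If you haven't cloned it yet, a typical sequence looks like this (assuming the repository is hosted at github.com/zylon-ai/private-gpt; adjust the URL if your fork differs):

```bash
# Clone the repository and enter its directory
git clone https://github.com/zylon-ai/private-gpt.git
cd private-gpt
```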
Setups
Ollama Setups (Recommended)
1. Default/Ollama CPU
Description: This profile runs the Ollama service using CPU resources. It is the standard configuration for running Ollama-based PrivateGPT services without GPU acceleration.
Run: To start the services using pre-built images, run:
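A minimal invocation, assuming the CPU profile is the compose file's default:

```bash
# Pull pre-built images and start the default services
docker-compose up
```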
or with a specific profile:
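For example, assuming the compose file names this profile ollama-cpu (check your docker-compose.yaml for the exact name):

```bash
# Explicitly select the CPU profile
docker-compose --profile ollama-cpu up
```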
2. Ollama Nvidia CUDA
Description: This profile leverages GPU acceleration with CUDA support, suitable for computationally intensive tasks that benefit from GPU resources.
Requirements: Ensure that your system has compatible GPU hardware and the necessary NVIDIA drivers installed. The installation process is detailed here.
Run: To start the services with CUDA support using pre-built images, run:
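A sketch, assuming the compose file names this profile ollama-cuda:

```bash
# Start the services with the CUDA-enabled profile
docker-compose --profile ollama-cuda up
```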
3. Ollama External API
Description: This profile is designed for running PrivateGPT with Ollama installed on the host machine. This setup is particularly useful for macOS users, as Docker does not yet support Metal GPU acceleration.
Requirements: Install Ollama on your machine by following the instructions at ollama.ai.
Run: To start the Ollama service, use:
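Ollama ships a built-in server command:

```bash
# Start the Ollama API server on the host machine
ollama serve
```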
To start the services with the host configuration using pre-built images, run:
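A sketch, assuming the host-facing profile is named ollama-api in the compose file:

```bash
# Start the services that talk to the Ollama instance running on the host
docker-compose --profile ollama-api up
```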
Fully Local Setups
1. LlamaCPP CPU
Description: This profile runs the PrivateGPT services locally using llama-cpp and Hugging Face models.
Requirements: A Hugging Face Token (HF_TOKEN) is required for accessing Hugging Face models. Obtain your token following this guide.
Run: Start the services with your Hugging Face token using pre-built images:
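One way to pass the token, assuming the compose file reads an HF_TOKEN environment variable and names this profile llamacpp-cpu (both are assumptions; verify against your docker-compose.yaml):

```bash
# Pass the token for this invocation and start the fully local profile
HF_TOKEN=<your_hf_token> docker-compose --profile llamacpp-cpu up
```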
Replace <your_hf_token> with your actual Hugging Face token.
Building Locally
If you prefer to build Docker images locally, which is useful when making changes to the codebase or the Dockerfiles, follow these steps:
To build the Docker images locally, navigate to the cloned repository directory and run:
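The standard Compose build command works here:

```bash
# Build all images defined in the compose file
docker-compose build
```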
This command compiles the necessary Docker images based on the current codebase and Dockerfile configurations.
Forcing a Rebuild with --build
If you have made changes and need to ensure these changes are reflected in the Docker images, you can force a rebuild before starting the services:
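Compose's --build flag rebuilds the images before the services start:

```bash
# Rebuild the images, then start the services
docker-compose up --build
```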
or with a specific profile:
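Combining a profile selection with a forced rebuild:

```bash
# Rebuild and start only the services in the chosen profile
docker-compose --profile <profile_name> up --build
```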
Replace <profile_name> with the desired profile.