LLM Backends
Running the Server
PrivateGPT supports running with different LLMs & setups.
Local models
Both the LLM and the Embeddings model will run locally.
Make sure you have followed the Local LLM requirements section before moving on.
This command will start PrivateGPT using the settings.yaml (default profile) together with the settings-local.yaml configuration files. By default, it will enable both the API and the Gradio UI. Run:
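Following the same PGPT_PROFILES pattern used by the other profiles in this guide (the profile name local is assumed to match the settings-local.yaml file):

PGPT_PROFILES=local make run

or

PGPT_PROFILES=local poetry run python -m private_gpt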
When the server is started it will print a log Application startup complete. Navigate to http://localhost:8001 to use the Gradio UI or to http://localhost:8001/docs (API section) to try the API using Swagger UI.
Customizing low level parameters
Currently, not all the parameters of llama.cpp and llama-cpp-python are available in PrivateGPT's settings.yaml file. In case you need to customize parameters such as the number of layers loaded into the GPU, you can change them in the llm_component.py file, located at private_gpt/components/llm/llm_component.py.
Available LLM config options
The llm section of the settings allows for the following configurations:
- mode: how to run your llm
- max_new_tokens: this lets you configure the number of new tokens the LLM will generate and add to the context window (by default Llama.cpp uses 256)
Example:
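A minimal sketch, assuming a local setup; the max_new_tokens value is illustrative:

```yaml
llm:
  mode: local
  # raise this if answers get cut off; Llama.cpp defaults to 256
  max_new_tokens: 512
```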
If you are getting an out of memory error, you might also try a smaller model or stick to the proposed recommended models, instead of custom tuning the parameters.
Using OpenAI
If you cannot run a local model (because you don’t have a GPU, for example) or for testing purposes, you may decide to run PrivateGPT using OpenAI as the LLM and Embeddings model.
In order to do so, create a profile settings-openai.yaml with the following contents:
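A minimal sketch of such a profile, assuming the usual llm/embedding mode switches and an openai section holding the credentials; the model name is a placeholder, and the exact field names should be checked against the settings reference:

```yaml
llm:
  mode: openai

embedding:
  mode: openai

openai:
  # assumption: the key is read from the OPENAI_API_KEY environment variable
  api_key: ${OPENAI_API_KEY:}
  model: gpt-3.5-turbo  # placeholder: pick the OpenAI model you want
```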
And run PrivateGPT loading that profile you just created:
PGPT_PROFILES=openai make run
or
PGPT_PROFILES=openai poetry run python -m private_gpt
When the server is started it will print a log Application startup complete. Navigate to http://localhost:8001 to use the Gradio UI or to http://localhost:8001/docs (API section) to try the API. You’ll notice the speed and quality of response is higher, given you are using OpenAI’s servers for the heavy computations.
Using OpenAI compatible API
Many tools, including LocalAI and vLLM, support serving local models with an OpenAI compatible API. Even when overriding the api_base, using the openai mode doesn't allow you to use custom models. Instead, you should use the openailike mode:
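A minimal sketch of the relevant setting:

```yaml
llm:
  mode: openailike
```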
This mode uses the same settings as the openai mode.
As an example, you can follow the vLLM quickstart guide to run an OpenAI compatible server. Then, you can run PrivateGPT using the settings-vllm.yaml profile:
PGPT_PROFILES=vllm make run
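For reference, such a profile might look roughly like the following; the api_base, api_key and model values are placeholders for whatever your vLLM server exposes, and the (still local) embeddings configuration is omitted:

```yaml
llm:
  mode: openailike

openai:
  api_base: http://localhost:8000/v1  # placeholder: your vLLM OpenAI-compatible endpoint
  api_key: EMPTY                      # vLLM typically does not check the key
  model: facebook/opt-125m            # placeholder: the model your vLLM server is serving
```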
Using Azure OpenAI
If you cannot run a local model (because you don’t have a GPU, for example) or for testing purposes, you may decide to run PrivateGPT using Azure OpenAI as the LLM and Embeddings model.
In order to do so, create a profile settings-azopenai.yaml with the following contents:
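A sketch of what such a profile could contain, assuming field names along these lines; the deployment and model names are placeholders and the environment variable names are assumptions, so double-check them against the settings reference:

```yaml
llm:
  mode: azopenai

embedding:
  mode: azopenai

azopenai:
  api_key: ${AZ_OPENAI_API_KEY:}          # assumption: key read from an environment variable
  azure_endpoint: ${AZ_OPENAI_ENDPOINT:}  # e.g. https://<your-resource>.openai.azure.com/
  api_version: "2023-05-15"               # placeholder API version
  llm_deployment_name: gpt-35-turbo       # placeholder: your Azure LLM deployment
  llm_model: gpt-35-turbo
  embedding_deployment_name: text-embedding-ada-002  # placeholder: your embeddings deployment
  embedding_model: text-embedding-ada-002
```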
And run PrivateGPT loading that profile you just created:
PGPT_PROFILES=azopenai make run
or
PGPT_PROFILES=azopenai poetry run python -m private_gpt
When the server is started it will print a log Application startup complete. Navigate to http://localhost:8001 to use the Gradio UI or to http://localhost:8001/docs (API section) to try the API. You’ll notice the speed and quality of response is higher, given you are using Azure OpenAI’s servers for the heavy computations.
Using AWS Sagemaker
For a fully private & performant setup, you can choose to have both your LLM and Embeddings model deployed using Sagemaker.
Note: how to deploy models on Sagemaker is out of the scope of this documentation.
In order to do so, create a profile settings-sagemaker.yaml with the following contents (remember to update the values of llm_endpoint_name and embedding_endpoint_name to yours):
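A minimal sketch, assuming the sagemaker section only needs the two endpoint names mentioned above (both values below are placeholders):

```yaml
llm:
  mode: sagemaker

embedding:
  mode: sagemaker

sagemaker:
  llm_endpoint_name: your-llm-endpoint-name              # placeholder: your Sagemaker LLM endpoint
  embedding_endpoint_name: your-embedding-endpoint-name  # placeholder: your Sagemaker embeddings endpoint
```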
And run PrivateGPT loading that profile you just created:
PGPT_PROFILES=sagemaker make run
or
PGPT_PROFILES=sagemaker poetry run python -m private_gpt
When the server is started it will print a log Application startup complete. Navigate to http://localhost:8001 to use the Gradio UI or to http://localhost:8001/docs (API section) to try the API.
Using Ollama
Another option for a fully private setup is using Ollama.
Note: how to deploy Ollama and pull models onto it is out of the scope of this documentation.
In order to do so, create a profile settings-ollama.yaml with the following contents:
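A minimal sketch, assuming an ollama section with the model names and API base; the model names are placeholders for models you have already pulled into Ollama:

```yaml
llm:
  mode: ollama

embedding:
  mode: ollama

ollama:
  llm_model: mistral                 # placeholder: a model pulled with `ollama pull`
  embedding_model: nomic-embed-text  # placeholder: an embedding model available in Ollama
  api_base: http://localhost:11434   # default Ollama endpoint
```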
And run PrivateGPT loading that profile you just created:
PGPT_PROFILES=ollama make run
or
PGPT_PROFILES=ollama poetry run python -m private_gpt
When the server is started it will print a log Application startup complete. Navigate to http://localhost:8001 to use the Gradio UI or to http://localhost:8001/docs (API section) to try the API.
Using IPEX-LLM
For a fully private setup on Intel GPUs (such as a local PC with an iGPU, or discrete GPUs like Arc, Flex, and Max), you can use IPEX-LLM.
To deploy Ollama and pull models using IPEX-LLM, please refer to this guide. Then, follow the same steps outlined in the Using Ollama section to create a settings-ollama.yaml profile and run the PrivateGPT server.
Using Gemini
If you cannot run a local model (because you don’t have a GPU, for example) or for testing purposes, you may decide to run PrivateGPT using Gemini as the LLM and Embeddings model. In addition, you will benefit from multimodal inputs, such as text and images, in a very large contextual window.
In order to do so, create a profile settings-gemini.yaml with the following contents:
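A sketch of what such a profile might contain; the field and model names are assumptions to be checked against the settings reference, and the API key is assumed to come from an environment variable:

```yaml
llm:
  mode: gemini

embedding:
  mode: gemini

gemini:
  api_key: ${GOOGLE_API_KEY:}            # assumption: key read from GOOGLE_API_KEY
  model: models/gemini-pro               # placeholder LLM model
  embedding_model: models/embedding-001  # placeholder embedding model
```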
And run PrivateGPT loading that profile you just created:
PGPT_PROFILES=gemini make run
or
PGPT_PROFILES=gemini poetry run python -m private_gpt
When the server is started it will print a log Application startup complete. Navigate to http://localhost:8001 to use the Gradio UI or to http://localhost:8001/docs (API section) to try the API.