Given a text, the model will return a summary. Optionally, include instructions to influence the way the summary is generated.

If use_context is set to true, the model will also use the content coming from the ingested documents in the summary. The documents being used can be filtered by their metadata using the context_filter.
The metadata of ingested documents can be found using the /ingest/list endpoint. If you want all ingested documents to be used, remove context_filter altogether.
If prompt is set, it will be used as the prompt for the summarization; otherwise the default prompt will be used.
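As an illustration, a request body combining these fields might be assembled as follows. This is a sketch only: the field names text, instructions, use_context, context_filter, prompt, and stream, and the docs_ids filter key, are assumptions inferred from the description above, not a verified schema.

```python
import json

def build_summarize_payload(text, instructions=None, use_context=False,
                            context_filter=None, prompt=None, stream=False):
    """Assemble a summarization request body, omitting unset optional fields."""
    payload = {"text": text, "use_context": use_context, "stream": stream}
    if instructions is not None:
        payload["instructions"] = instructions
    if context_filter is not None:
        # Assumed shape: filter ingested documents by their ids/metadata.
        payload["context_filter"] = context_filter
    if prompt is not None:
        payload["prompt"] = prompt
    return payload

body = build_summarize_payload(
    "A long text to summarize...",
    instructions="Keep it under three sentences.",
    use_context=True,
    context_filter={"docs_ids": ["c202d5e6-example"]},  # hypothetical document id
)
print(json.dumps(body, indent=2))
```

Leaving context_filter out of the payload (rather than sending null) matches the advice above to remove it altogether when all ingested documents should be used.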
When using 'stream': true, the API will return data chunks following OpenAI's streaming model:
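A minimal sketch of consuming such a stream, assuming OpenAI's server-sent-events convention of "data: "-prefixed JSON chunks terminated by a "data: [DONE]" sentinel (the exact chunk schema shown here is an assumption for illustration, not taken from this document):

```python
import json

def iter_stream_chunks(lines):
    """Yield parsed JSON payloads from OpenAI-style 'data: ...' event lines,
    stopping at the '[DONE]' sentinel."""
    for line in lines:
        line = line.strip()
        if not line.startswith("data: "):
            continue  # skip blank keep-alive lines between events
        data = line[len("data: "):]
        if data == "[DONE]":
            break
        yield json.loads(data)

# Hypothetical chunk contents, following OpenAI's delta format:
raw = [
    'data: {"choices": [{"delta": {"content": "A short"}}]}',
    '',
    'data: {"choices": [{"delta": {"content": " summary."}}]}',
    'data: [DONE]',
]
text = "".join(c["choices"][0]["delta"]["content"] for c in iter_stream_chunks(raw))
print(text)  # prints "A short summary."
```

Concatenating the delta contents in order reconstructs the full summary as it streams in.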