The /chats route enables AI-powered conversational search by integrating Large Language Models (LLMs) with your Meilisearch data.
This is an experimental feature. Use the Meilisearch Cloud UI or the experimental features endpoint to activate it:
curl \
  -X PATCH 'MEILISEARCH_URL/experimental-features/' \
  -H 'Content-Type: application/json' \
  --data-binary '{
    "chatCompletions": true
  }'
When you enable the chat route for the first time, Meilisearch creates a new API key named “Default Chat API Key”.
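You can retrieve this key's value from the /keys endpoint. A minimal sketch, assuming a protected instance where MASTER_KEY is your master key:

curl \
  -X GET 'MEILISEARCH_URL/keys' \
  -H 'Authorization: Bearer MASTER_KEY'

The key appears in the results array with the name "Default Chat API Key".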

Authorization

When working with a secured Meilisearch instance, use an API key with access to both the search and chatCompletions actions, such as the default chat API key. Chat queries can only search indexes that their API key can access. The default chat API key has access to all indexes.
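To scope chat searches to specific indexes, you can create a dedicated key with the /keys endpoint. The sketch below assumes a protected instance and a hypothetical movies index:

curl \
  -X POST 'MEILISEARCH_URL/keys' \
  -H 'Authorization: Bearer MASTER_KEY' \
  -H 'Content-Type: application/json' \
  --data-binary '{
    "name": "Movies chat key",
    "actions": ["search", "chatCompletions"],
    "indexes": ["movies"],
    "expiresAt": null
  }'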

Chat workspace object

{
  "uid": "WORKSPACE_NAME"
}
Name | Type | Description
uid | String | Unique identifier for the chat completions workspace

List chat workspaces

GET
/chats
List all chat workspaces. Results can be paginated by using the offset and limit query parameters.

Query parameters

Query parameter | Description | Default value
offset | Number of workspaces to skip | 0
limit | Number of workspaces to return | 20

Response

Name | Type | Description
results | Array | An array of workspaces
offset | Integer | Number of workspaces skipped
limit | Integer | Number of workspaces returned
total | Integer | Total number of workspaces

Example

  curl \
    -X GET 'MEILISEARCH_URL/chats?limit=3'

Response: 200 OK

{
  "results": [
    { "uid": "WORKSPACE_1" },
    { "uid": "WORKSPACE_2" },
    { "uid": "WORKSPACE_3" }
  ],
  "offset": 0,
  "limit": 3,
  "total": 3
}
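To paginate, combine offset and limit. For example, to fetch the next three workspaces after the first three:

  curl \
    -X GET 'MEILISEARCH_URL/chats?offset=3&limit=3'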

Get one chat workspace

GET
/chats/{workspace_uid}
Get information about a chat workspace.

Path parameters

Name | Type | Description
workspace_uid * | String | uid of the requested chat workspace

Example

  curl \
    -X GET 'MEILISEARCH_URL/chats/WORKSPACE_UID'

Response: 200 OK

{
  "uid": "WORKSPACE_UID"
}

Chat workspace settings

Chat workspace settings object

{
  "source": "openAi",
  "orgId": null,
  "projectId": null,
  "apiVersion": null,
  "deploymentId": null,
  "baseUrl": null,
  "apiKey": "sk-abc...",
  "prompts": {
    "system": "You are a helpful assistant that answers questions based on the provided context."
  }
}

The prompts object

Name | Type | Description
system | String | A prompt added to the start of the conversation to guide the LLM
searchDescription | String | A prompt to explain what the internal search function does
searchQParam | String | A prompt to explain what the q parameter of the search function does and how to use it
searchIndexUidParam | String | A prompt to explain what the indexUid parameter of the search function does and how to use it
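For illustration, a prompts object using all four fields could look like the following; the prompt text is only an example, not a default value:

{
  "system": "You are a helpful assistant that answers questions based on the provided context.",
  "searchDescription": "Searches the configured Meilisearch indexes for documents relevant to the user's question.",
  "searchQParam": "The q parameter contains the full-text search query derived from the conversation.",
  "searchIndexUidParam": "The indexUid parameter selects which index to search."
}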

Get chat workspace settings

GET
/chats/{workspace_uid}/settings
Retrieve the current settings for a chat workspace.

Path parameters

Name | Type | Description
workspace_uid | String | The workspace identifier

Response: 200 OK

Returns the settings object without the apiKey field.
{
  "source": "openAi",
  "prompts": {
    "system": "You are a helpful assistant."
  }
}

Example

curl \
  -X GET 'http://localhost:7700/chats/WORKSPACE_UID/settings' \
  -H 'Authorization: Bearer MASTER_KEY'

Update chat workspace settings

PATCH
/chats/{workspace_uid}/settings
Configure the LLM provider and settings for a chat workspace. If the workspace does not exist, querying this endpoint will create it.

Path parameters

Name | Type | Description
workspace_uid | String | The workspace identifier

Settings parameters

Name | Type | Description
source | String | LLM source: "openAi", "azureOpenAi", "mistral", "gemini", or "vLlm"
orgId | String | Organization ID for the LLM provider (required for azureOpenAi)
projectId | String | Project ID for the LLM provider
apiVersion | String | API version for the LLM provider (required for azureOpenAi)
deploymentId | String | Deployment ID for the LLM provider (required for azureOpenAi)
baseUrl | String | Base URL for the provider (required for azureOpenAi and vLlm)
apiKey | String | API key for the LLM provider (optional for vLlm)
prompts | Object | Prompts object containing system prompts and other configuration

Request body

{
  "source": "openAi",
  "apiKey": "OPEN_AI_SECURITY_KEY",
  "prompts": {
    "system": "DEFAULT CHAT INSTRUCTIONS"
  }
}
All fields are optional. Only provided fields will be updated.

Response: 200 OK

Returns the updated settings object. apiKey is write-only and will not be returned in the response.

Examples

curl \
  -X PATCH 'http://localhost:7700/chats/customer-support/settings' \
  -H 'Authorization: Bearer MASTER_KEY' \
  -H 'Content-Type: application/json' \
  --data-binary '{
    "source": "openAi",
    "apiKey": "sk-abc...",
    "prompts": {
      "system": "You are a helpful customer support assistant."
    }
  }'
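
Because all fields are optional, a partial update that only changes the system prompt leaves the rest of the configuration untouched:

curl \
  -X PATCH 'http://localhost:7700/chats/customer-support/settings' \
  -H 'Authorization: Bearer MASTER_KEY' \
  -H 'Content-Type: application/json' \
  --data-binary '{
    "prompts": {
      "system": "You are a concise assistant. Answer in two sentences or fewer."
    }
  }'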

Reset chat workspace settings

DELETE
/chats/{workspace_uid}/settings
Reset a chat workspace’s settings to their default values.

Path parameters

Name | Type | Description
workspace_uid | String | The workspace identifier

Response: 200 OK

Returns the settings object without the apiKey field.
{
  "source": "openAi",
  "prompts": {
    "system": "You are a helpful assistant."
  }
}

Example

curl \
  -X DELETE 'http://localhost:7700/chats/customer-support/settings' \
  -H 'Authorization: Bearer MASTER_KEY'

Chat completions

POST
/chats/{workspace_uid}/chat/completions
Create a chat completion using Meilisearch’s OpenAI-compatible interface. The endpoint searches relevant indexes and generates responses based on the retrieved content.

Path parameters

Name | Type | Description
workspace_uid | String | The chat completions workspace unique identifier (uid)

Request body

{
  "model": "gpt-3.5-turbo",
  "messages": [
    {
      "role": "user",
      "content": "What are the main features of Meilisearch?"
    }
  ],
  "stream": true
}
Name | Type | Required | Description
model | String | Yes | Model to use; it must correspond to the LLM source configured in the workspace settings
messages | Array | Yes | Array of message objects with role and content
stream | Boolean | No | Enable streaming responses (default: true)
Currently, only streaming responses (stream: true) are supported.

Message object

Name | Type | Description
role | String | Message role: "system", "user", or "assistant"
content | String | Message content
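
For example, a multi-turn conversation passes earlier assistant replies back in the messages array; the content below is illustrative:

{
  "model": "gpt-3.5-turbo",
  "messages": [
    { "role": "user", "content": "What is Meilisearch?" },
    { "role": "assistant", "content": "Meilisearch is an open-source search engine." },
    { "role": "user", "content": "Does it support typo tolerance?" }
  ],
  "stream": true
}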

Response

The response follows the OpenAI chat completions format. For streaming responses, the endpoint returns Server-Sent Events (SSE).

Streaming response example

data: {"id":"chatcmpl-123","object":"chat.completion.chunk","created":1677652288,"model":"gpt-3.5-turbo","choices":[{"index":0,"delta":{"content":"Meilisearch"},"finish_reason":null}]}

data: {"id":"chatcmpl-123","object":"chat.completion.chunk","created":1677652288,"model":"gpt-3.5-turbo","choices":[{"index":0,"delta":{"content":" is"},"finish_reason":null}]}

data: {"id":"chatcmpl-123","object":"chat.completion.chunk","created":1677652288,"model":"gpt-3.5-turbo","choices":[{"index":0,"delta":{},"finish_reason":"stop"}]}

data: [DONE]

Example

curl -N \
  -X POST 'http://localhost:7700/chats/customer-support/chat/completions' \
  -H 'Authorization: Bearer DEFAULT_CHAT_KEY' \
  -H 'Content-Type: application/json' \
  --data-binary '{
    "model": "gpt-3.5-turbo",
    "messages": [
      {
        "role": "user",
        "content": "What is Meilisearch?"
      }
    ],
    "stream": true
  }'