MistralAI

This will help you get started with Mistral chat models, accessed via their API. For detailed documentation of all ChatMistralAI features and configurations, head to the API reference.

Overview

Integration details

| Class | Package | Local | Serializable | JS support | Package downloads | Package latest |
| :--- | :--- | :---: | :---: | :---: | :---: | :---: |
| ChatMistralAI | langchain_mistralai | ❌ | beta | ✅ | PyPI - Downloads | PyPI - Version |

Model features

| Tool calling | Structured output | JSON mode | Image input | Audio input | Video input | Token-level streaming | Native async | Token usage | Logprobs |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ |

Setup

To access Mistral models you'll need to create a Mistral account, get an API key, and install the langchain-mistralai integration package.

Credentials

A valid API key is needed to communicate with the API. Once you've obtained an API key, store it in the MISTRAL_API_KEY environment variable:

import getpass
import os

# Prompt for the key only if it isn't already set in the environment.
if not os.getenv("MISTRAL_API_KEY"):
    os.environ["MISTRAL_API_KEY"] = getpass.getpass(
        "Enter your Mistral API key: "
    )

If you want automated tracing of your model calls, you can also set your LangSmith API key by uncommenting the lines below:

# os.environ["LANGCHAIN_TRACING_V2"] = "true"
# os.environ["LANGCHAIN_API_KEY"] = getpass.getpass("Enter your LangSmith API key: ")

Installation

The LangChain MistralAI integration lives in the langchain-mistralai package:

%pip install -qU langchain-mistralai

Instantiation

Now we can instantiate our model object and generate chat completions:

from langchain_mistralai.chat_models import ChatMistralAI

llm = ChatMistralAI(model="mistral-large-latest")
API Reference: ChatMistralAI
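
ChatMistralAI also accepts the usual generation parameters at construction time; a minimal sketch (the values for temperature and max_retries here are illustrative, not requirements):

llm = ChatMistralAI(
    model="mistral-large-latest",
    temperature=0,   # illustrative: favor deterministic completions
    max_retries=2,   # illustrative: retry transient API failures
)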

Invocation

messages = [
    (
        "system",
        "You are a helpful assistant that translates English to French. Translate the user sentence.",
    ),
    ("human", "I love programming."),
]
ai_msg = llm.invoke(messages)
ai_msg
AIMessage(content="J'adore la programmation.", response_metadata={'token_usage': {'prompt_tokens': 27, 'total_tokens': 36, 'completion_tokens': 9}, 'model': 'mistral-large-latest', 'finish_reason': 'stop'}, id='run-d6196c33-9410-413b-b454-4ed0bec1f0c7-0', usage_metadata={'input_tokens': 27, 'output_tokens': 9, 'total_tokens': 36})
print(ai_msg.content)
J'adore la programmation.
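
Token usage is also reported on the message itself: the AIMessage above carries a usage_metadata field you can read directly:

print(ai_msg.usage_metadata)
# {'input_tokens': 27, 'output_tokens': 9, 'total_tokens': 36}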

Async

await llm.ainvoke(messages)
AIMessage(content="J'aime programmer.", response_metadata={'token_usage': {'prompt_tokens': 27, 'total_tokens': 34, 'completion_tokens': 7}, 'model': 'mistral-large-latest', 'finish_reason': 'stop'}, id='run-1873888a-186f-49a8-ab81-24335bd3099b-0', usage_metadata={'input_tokens': 27, 'output_tokens': 7, 'total_tokens': 34})

Streaming

for chunk in llm.stream(messages):
    print(chunk.content, end="")
J'adore programmer.
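
Streaming has a native async counterpart as well; astream yields the same chunks from async code:

async for chunk in llm.astream(messages):
    print(chunk.content, end="")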

Batch

llm.batch([messages])
[AIMessage(content="J'adore la programmation.", response_metadata={'token_usage': {'prompt_tokens': 27, 'total_tokens': 36, 'completion_tokens': 9}, 'model': 'mistral-large-latest', 'finish_reason': 'stop'}, id='run-2aa2a189-c405-4cf5-bd31-e9025e4c8536-0', usage_metadata={'input_tokens': 27, 'output_tokens': 9, 'total_tokens': 36})]
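
batch accepts any list of inputs and runs them concurrently; you can cap the parallelism through the standard Runnable config. A minimal sketch (the limit of 5 is illustrative):

llm.batch(
    [messages, messages],           # two copies of the same prompt, for illustration
    config={"max_concurrency": 5},  # illustrative cap on concurrent API calls
)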

Chaining

You can also chain the model with a prompt template to structure user input. We can do this using LCEL:

from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate(
    [
        (
            "system",
            "You are a helpful assistant that translates {input_language} to {output_language}.",
        ),
        ("human", "{input}"),
    ]
)

chain = prompt | llm
chain.invoke(
    {
        "input_language": "English",
        "output_language": "German",
        "input": "I love programming.",
    }
)

API Reference: ChatPromptTemplate
AIMessage(content='Ich liebe Programmieren.', response_metadata={'token_usage': {'prompt_tokens': 21, 'total_tokens': 28, 'completion_tokens': 7}, 'model': 'mistral-large-latest', 'finish_reason': 'stop'}, id='run-409ebc9a-b4a0-4734-ab6f-e11f6b4f808f-0', usage_metadata={'input_tokens': 21, 'output_tokens': 7, 'total_tokens': 28})
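
Because the composed chain is itself a Runnable, the same call surface (stream, ainvoke, batch) applies to it unchanged, for example:

for chunk in chain.stream(
    {
        "input_language": "English",
        "output_language": "German",
        "input": "I love programming.",
    }
):
    print(chunk.content, end="")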

API reference

For detailed documentation of all ChatMistralAI features and configurations head to the API reference: https://api.python.langchain.com/en/latest/chat_models/langchain_mistralai.chat_models.ChatMistralAI.html

