Core components
Messages
Messages are the fundamental unit of context for models in LangChain. They represent model inputs and outputs, carrying both the content and the metadata needed to capture the state of a conversation with an LLM.
Messages are objects that contain:
- Role - Identifies the message type (e.g. system, user)
- Content - Represents the actual content of the message (like text, images, audio, documents, etc.)
- Metadata - Optional fields such as response information, message IDs, and token usage
LangChain provides a standard message type that works across all model providers, ensuring consistent behavior regardless of the model being called.
Basic usage
The simplest way to use messages is to create message objects and pass them to a model when invoking.
```python
from langchain.chat_models import init_chat_model
from langchain.messages import HumanMessage, AIMessage, SystemMessage

model = init_chat_model("gpt-5-nano")

system_msg = SystemMessage("You are a helpful assistant.")
human_msg = HumanMessage("Hello, how are you?")

# Use with chat models
messages = [system_msg, human_msg]
response = model.invoke(messages)  # Returns AIMessage
```

Text prompts
Text prompts are strings - ideal for straightforward generation tasks where you don't need to retain conversation history.
response = model.invoke("Write a haiku about spring")Use text prompts when:
- You have a single, standalone request
- You don't need conversation history
- You want minimal code complexity
Message prompts
Alternatively, you can provide the model with a list of message objects.
```python
from langchain.messages import SystemMessage, HumanMessage, AIMessage

messages = [
    SystemMessage("You are a poetry expert"),
    HumanMessage("Write a haiku about spring"),
    AIMessage("Cherry blossoms bloom...")
]
response = model.invoke(messages)
```

Use message prompts when:
- Managing multi-turn conversations
- Working with multimodal content (images, audio, files)
- Including system instructions
Dictionary format
You can also specify messages directly in OpenAI chat completions format.
```python
messages = [
    {"role": "system", "content": "You are a poetry expert"},
    {"role": "user", "content": "Write a haiku about spring"},
    {"role": "assistant", "content": "Cherry blossoms bloom..."}
]
response = model.invoke(messages)
```

Message types
- System message - Tells the model how to behave and provides context for interactions
- Human message - Represents user input and interactions with the model
- AI message - Responses generated by the model, including text content, tool calls, and metadata
- Tool message - Represents the outputs of tool calls
System Message
A SystemMessage represents an initial set of instructions that primes the model's behavior. You can use a system message to set the tone, define the model's role, and establish guidelines for responses.
system_msg = SystemMessage("You are a helpful coding assistant.")
messages = [
system_msg,
HumanMessage("How do I create a REST API?")
]
response = model.invoke(messages)from langchain.messages import SystemMessage, HumanMessage
system_msg = SystemMessage("""
You are a senior Python developer with expertise in web frameworks.
Always provide code examples and explain your reasoning.
Be concise but thorough in your explanations.
""")
messages = [
system_msg,
HumanMessage("How do I create a REST API?")
]
response = model.invoke(messages)Human Message
A HumanMessage represents user input and interactions. It can contain text, images, audio, files, and other multimodal content.
Text content
```python
response = model.invoke([
    HumanMessage("What is machine learning?")
])
```

```python
# Using a string is a shortcut for a single HumanMessage
response = model.invoke("What is machine learning?")
```

Message metadata
```python
human_msg = HumanMessage(
    content="Hello!",
    name="alice",    # Optional: identify different users
    id="msg_123",    # Optional: unique identifier for tracing
)
```

The behavior of the name field varies by provider: some use it for user identification, while others ignore it. To check, refer to the model provider's reference.
AI Message
An AIMessage represents the output of a model invocation. It can include multimodal data, tool calls, and provider-specific metadata that you can access later.
response = model.invoke("Explain AI")
print(type(response)) # <class 'langchain_core.messages.AIMessage'>AIMessage objects are returned by the model when calling it, which contains all of the associated metadata in the response.
Providers weigh/contextualize types of messages differently, which means it is sometimes helpful to manually create a new AIMessage object and insert it into the message history as if it came from the model.
```python
from langchain.messages import AIMessage, SystemMessage, HumanMessage

# Create an AI message manually (e.g., for conversation history)
ai_msg = AIMessage("I'd be happy to help you with that question!")

# Add to conversation history
messages = [
    SystemMessage("You are a helpful assistant"),
    HumanMessage("Can you help me?"),
    ai_msg,  # Insert as if it came from the model
    HumanMessage("Great! What's 2+2?")
]
response = model.invoke(messages)
```

Attributes
| name | type | desc |
|---|---|---|
| text | string | The text content of the message. |
| content | string \| dict[] | The raw content of the message. |
| content_blocks | ContentBlock[] | The standardized content blocks of the message. |
| tool_calls | dict[] \| None | The tool calls made by the model. Empty if no tools are called. |
| id | string | A unique identifier for the message (either automatically generated by LangChain or returned in the provider response). |
| usage_metadata | dict \| None | The usage metadata of the message, which can contain token counts when available. |
| response_metadata | ResponseMetadata \| None | The response metadata of the message. |
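For instance, a minimal sketch of reading a few of these attributes off a response, reusing the model from the examples above (availability of individual fields varies by provider):

```python
response = model.invoke("Explain AI")

print(response.text)            # Concatenated text content of the message
print(response.tool_calls)      # [] when the model made no tool calls
print(response.id)              # Unique message identifier
print(response.usage_metadata)  # Token counts, when the provider reports them
```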
Tool calls
When models make tool calls, they're included in the AIMessage:
```python
from langchain.chat_models import init_chat_model

model = init_chat_model("gpt-5-nano")

def get_weather(location: str) -> str:
    """Get the weather at a location."""
    ...

model_with_tools = model.bind_tools([get_weather])
response = model_with_tools.invoke("What's the weather in Paris?")

for tool_call in response.tool_calls:
    print(f"Tool: {tool_call['name']}")
    print(f"Args: {tool_call['args']}")
    print(f"ID: {tool_call['id']}")
```

Other structured data, such as reasoning or citations, can also appear in message content.
Token usage
An AIMessage can hold token counts and other usage metadata in its usage_metadata field:
```python
from langchain.chat_models import init_chat_model

model = init_chat_model("gpt-5-nano")
response = model.invoke("Hello!")
response.usage_metadata
```

```
{'input_tokens': 8,
 'output_tokens': 304,
 'total_tokens': 312,
 'input_token_details': {'audio': 0, 'cache_read': 0},
 'output_token_details': {'audio': 0, 'reasoning': 256}}
```

See UsageMetadata for details.
Streaming and chunks
During streaming, you'll receive AIMessageChunk objects that can be combined into a full message object:
```python
chunks = []
full_message = None

for chunk in model.stream("Hi"):
    chunks.append(chunk)
    print(chunk.text)
    full_message = chunk if full_message is None else full_message + chunk
```

Tool Message
For models that support tool calling, AI messages can contain tool calls. Tool messages are used to pass the results of a single tool execution back to the model.
Tools can generate ToolMessage objects directly. Below, we show a simple example. Read more in the tools guide.
```python
from langchain.messages import AIMessage, HumanMessage, ToolMessage

# After a model makes a tool call
ai_message = AIMessage(
    content=[],
    tool_calls=[{
        "name": "get_weather",
        "args": {"location": "San Francisco"},
        "id": "call_123"
    }]
)

# Execute tool and create result message
weather_result = "Sunny, 72°F"
tool_message = ToolMessage(
    content=weather_result,
    tool_call_id="call_123"  # Must match the call ID
)

# Continue conversation
messages = [
    HumanMessage("What's the weather in San Francisco?"),
    ai_message,    # Model's tool call
    tool_message,  # Tool execution result
]
response = model.invoke(messages)  # Model processes the result
```

Attributes
| name | type | desc |
|---|---|---|
| content | string | The stringified output of the tool call. |
| tool_call_id | string | The ID of the tool call this message responds to; it must match the ID of the tool call in the AIMessage. |
| name | string | The name of the tool that was called. |
| artifact | dict | Additional data not sent to the model but can be accessed programmatically. |
TIP
The artifact field stores supplementary data that won't be sent to the model but can be accessed programmatically. This is useful for storing raw results, debugging information, or data for downstream processing without cluttering the model's context.
Example: Using artifact for retrieval metadata
For example, a retrieval tool could retrieve a passage from a document for reference by a model. While the message content contains the text the model will reference, the artifact can carry document identifiers or other metadata that the application can use (e.g., to render a page). See the example below:
```python
from langchain.messages import ToolMessage

# Sent to model
message_content = "It was the best of times, it was the worst of times."

# Artifact available downstream
artifact = {"document_id": "doc_123", "page": 0}

tool_message = ToolMessage(
    content=message_content,
    tool_call_id="call_123",
    name="search_books",
    artifact=artifact,
)
```

See the RAG tutorial for an end-to-end example of building retrieval agents with LangChain.
Message content
You can think of a message's content as the payload of data that gets sent to the model. Messages have a content attribute that is loosely typed, supporting strings and lists of untyped objects (e.g., dictionaries). This allows provider-native structures, such as multimodal content and other data, to be used directly in LangChain chat models.
Separately, LangChain provides dedicated content types for text, reasoning, citations, multi-modal data, server-side tool calls, and other message content. See content blocks below.
LangChain chat models accept message content in the content attribute, which can contain:
- A string
- A list of content blocks in a provider-native format
- A list of LangChain's standard content blocks
See below for an example using multimodal inputs:
```python
from langchain.messages import HumanMessage

# String content
human_message = HumanMessage("Hello, how are you?")

# Provider-native format (e.g., OpenAI)
human_message = HumanMessage(content=[
    {"type": "text", "text": "Hello, how are you?"},
    {"type": "image_url", "image_url": {"url": "https://example.com/image.jpg"}}
])

# List of standard content blocks
human_message = HumanMessage(content_blocks=[
    {"type": "text", "text": "Hello, how are you?"},
    {"type": "image", "url": "https://example.com/image.jpg"},
])
```

TIP
Specifying content_blocks when initializing a message will still populate message content, but provides a type-safe interface for doing so.
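A minimal sketch of that behavior (the exact shape of the populated content may vary by provider and version):

```python
from langchain.messages import HumanMessage

msg = HumanMessage(content_blocks=[
    {"type": "text", "text": "Hello, how are you?"},
])

# content is still populated; content_blocks offers the type-safe view
print(msg.content)
print(msg.content_blocks)
```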
Standard content blocks
LangChain provides a standard representation for message content that works across providers.
Message objects implement a content_blocks property that will lazily parse the content attribute into a standard, type-safe representation. For example, messages generated from ChatAnthropic or ChatOpenAI will include thinking or reasoning blocks in the format of the respective provider, but can be lazily parsed into a consistent ReasoningContentBlock representation:
```python
from langchain.messages import AIMessage

message = AIMessage(
    content=[
        {"type": "thinking", "thinking": "...", "signature": "WaUjzkyp..."},
        {"type": "text", "text": "..."},
    ],
    response_metadata={"model_provider": "anthropic"}
)

message.content_blocks
# result
[
    {'type': 'reasoning', 'reasoning': '...', 'extras': {'signature': 'WaUjzkyp...'}},
    {'type': 'text', 'text': '...'}
]
```

```python
from langchain.messages import AIMessage

message = AIMessage(
    content=[
        {
            "type": "reasoning",
            "id": "rs_abc123",
            "summary": [
                {"type": "summary_text", "text": "summary 1"},
                {"type": "summary_text", "text": "summary 2"},
            ],
        },
        {"type": "text", "text": "...", "id": "msg_abc123"},
    ],
    response_metadata={"model_provider": "openai"}
)

message.content_blocks
# result
[
    {'type': 'reasoning', 'id': 'rs_abc123', 'reasoning': 'summary 1'},
    {'type': 'reasoning', 'id': 'rs_abc123', 'reasoning': 'summary 2'},
    {'type': 'text', 'text': '...', 'id': 'msg_abc123'}
]
```

See the integrations guides to get started with the inference provider of your choice.
TIP
Serializing standard content
If an application outside of LangChain needs access to the standard content block representation, you can opt in to storing content blocks in message content.
To do this, set the LC_OUTPUT_VERSION environment variable to v1, or initialize any chat model with output_version="v1":

```python
from langchain.chat_models import init_chat_model

model = init_chat_model("gpt-5-nano", output_version="v1")
```

Multimodal
Multimodality refers to the ability to work with data that comes in different forms, such as text, audio, images, and video. LangChain includes standard types for these data that can be used across providers.
Chat models can accept multimodal data as input and generate it as output. Below we show short examples of input messages featuring multimodal data.
Extra keys can be included top-level in the content block or nested in "extras": {"key": value}.
OpenAI and AWS Bedrock Converse, for example, require a filename for PDFs. See the provider page for your chosen model for specifics.
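Building on that note, here is a hedged sketch of passing a filename as a top-level extra key alongside a base64 PDF (the exact keys a provider honors are listed on its provider page):

```python
message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "Summarize this document."},
        {
            "type": "file",
            "base64": "AAAAIGZ0eXBtcDQyAAAAAGlzb21tcDQyAAACAGlzb2...",
            "mime_type": "application/pdf",
            "filename": "report.pdf",  # Extra key; required by some providers for PDFs
        },
    ]
}
```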
Images:

```python
# From URL
message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "Describe the content of this image."},
        {"type": "image", "url": "https://example.com/path/to/image.jpg"},
    ]
}

# From base64 data
message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "Describe the content of this image."},
        {
            "type": "image",
            "base64": "AAAAIGZ0eXBtcDQyAAAAAGlzb21tcDQyAAACAGlzb2...",
            "mime_type": "image/jpeg",
        },
    ]
}

# From provider-managed File ID
message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "Describe the content of this image."},
        {"type": "image", "file_id": "file-abc123"},
    ]
}
```

Documents:

```python
# From URL
message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "Describe the content of this document."},
        {"type": "file", "url": "https://example.com/path/to/document.pdf"},
    ]
}

# From base64 data
message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "Describe the content of this document."},
        {
            "type": "file",
            "base64": "AAAAIGZ0eXBtcDQyAAAAAGlzb21tcDQyAAACAGlzb2...",
            "mime_type": "application/pdf",
        },
    ]
}

# From provider-managed File ID
message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "Describe the content of this document."},
        {"type": "file", "file_id": "file-abc123"},
    ]
}
```

Audio:

```python
# From base64 data
message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "Describe the content of this audio."},
        {
            "type": "audio",
            "base64": "AAAAIGZ0eXBtcDQyAAAAAGlzb21tcDQyAAACAGlzb2...",
            "mime_type": "audio/wav",
        },
    ]
}

# From provider-managed File ID
message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "Describe the content of this audio."},
        {"type": "audio", "file_id": "file-abc123"},
    ]
}
```

Video:

```python
# From base64 data
message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "Describe the content of this video."},
        {
            "type": "video",
            "base64": "AAAAIGZ0eXBtcDQyAAAAAGlzb21tcDQyAAACAGlzb2...",
            "mime_type": "video/mp4",
        },
    ]
}

# From provider-managed File ID
message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "Describe the content of this video."},
        {"type": "video", "file_id": "file-abc123"},
    ]
}
```

WARNING
Not all models support all file types. Check the model provider's reference for supported formats and size limits.
Content block reference
Content blocks are represented as a list of typed dictionaries, both when creating a message and when accessing the content_blocks property. Each item in the list must adhere to one of the following block types:
Core
TextContentBlock
Purpose: Standard text output
| name | type | desc |
|---|---|---|
| type | string(required) | Always "text" |
| text | string(required) | The text content |
| annotations | object[] | List of annotations for the text |
| extras | object | Additional provider-specific data |
Example:
```python
{
    "type": "text",
    "text": "Hello world",
    "annotations": []
}
```

ReasoningContentBlock
Purpose: Model reasoning steps
| name | type | desc |
|---|---|---|
| type | string(required) | Always "reasoning" |
| reasoning | string | The reasoning content |
| extras | object | Additional provider-specific data |
Example:
```python
{
    "type": "reasoning",
    "reasoning": "The user is asking about...",
    "extras": {"signature": "abc123"},
}
```

Multimodal
ImageContentBlock
Purpose: Image data
| name | type | desc |
|---|---|---|
| type | string(required) | Always "image" |
| url | string | URL pointing to the image location. |
| base64 | string | Base64-encoded image data. |
| id | string | Reference ID to an externally stored image (e.g., in a provider's file system or in a bucket). |
| mime_type | string | Image MIME type (e.g., image/jpeg, image/png) |
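Example (illustrative, using the fields above):

```python
{
    "type": "image",
    "url": "https://example.com/image.jpg",
    "mime_type": "image/jpeg"
}
```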
AudioContentBlock
Purpose: Audio data
| name | type | desc |
|---|---|---|
| type | string(required) | Always "audio" |
| url | string | URL pointing to the audio location. |
| base64 | string | Base64-encoded audio data. |
| id | string | Reference ID to an externally stored audio (e.g., in a provider's file system or in a bucket). |
| mime_type | string | Audio MIME type (e.g., audio/mpeg, audio/wav) |
VideoContentBlock
Purpose: Video data
| name | type | desc |
|---|---|---|
| type | string(required) | Always "video" |
| url | string | URL pointing to the video location. |
| base64 | string | Base64-encoded video data. |
| id | string | Reference ID to an externally stored video file (e.g., in a provider's file system or in a bucket). |
| mime_type | string | Video MIME type (e.g., video/mp4, video/webm) |
FileContentBlock
Purpose: Generic files (PDF, etc)
| name | type | desc |
|---|---|---|
| type | string(required) | Always "file" |
| url | string | URL pointing to the file location. |
| base64 | string | Base64-encoded file data. |
| id | string | Reference ID to an externally stored file (e.g., in a provider's file system or in a bucket). |
| mime_type | string | File MIME type (e.g., application/pdf) |
PlainTextContentBlock
Purpose: Document text (.txt, .md)
| name | type | desc |
|---|---|---|
| type | string(required) | Always "text-plain" |
| text | string | The text content |
| base64 | string | Base64-encoded file data. |
| mime_type | string | MIME type of the text (e.g., text/plain, text/markdown) |
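Example (illustrative, using the fields above):

```python
{
    "type": "text-plain",
    "text": "Full text of a markdown document...",
    "mime_type": "text/markdown"
}
```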
Tool Calling
ToolCall
Purpose: Function calls
| name | type | desc |
|---|---|---|
| type | string(required) | Always "tool_call" |
| name | string(required) | Name of the tool to call |
| args | object(required) | Arguments to pass to the tool |
| id | string | Unique identifier for this tool call |
Example:
```python
{
    "type": "tool_call",
    "name": "search",
    "args": {"query": "weather"},
    "id": "call_123"
}
```

ToolCallChunk
Purpose: Streaming tool call fragments
| name | type | desc |
|---|---|---|
| type | string(required) | Always "tool_call_chunk" |
| name | string(required) | Name of the tool being called |
| args | string(required) | Partial tool arguments (may be incomplete JSON) |
| id | string | Tool call identifier |
| index | number \| string | Position of this chunk in the stream |
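Example (illustrative; args accumulates across streamed chunks):

```python
{
    "type": "tool_call_chunk",
    "name": "search",
    "args": '{"query": "wea',  # Partial JSON, completed by later chunks
    "id": "call_123",
    "index": 0
}
```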
InvalidToolCall
Purpose: Malformed calls, intended to catch JSON parsing errors.
| name | type | desc |
|---|---|---|
| type | string(required) | Always "invalid_tool_call" |
| name | string(required) | Name of the tool that failed to be called |
| args | string | The raw arguments string that failed to parse |
| error | string | Description of what went wrong |
Server-Side Tool Execution
ServerToolCall
Purpose: Tool call that is executed server-side.
| name | type | desc |
|---|---|---|
| type | string(required) | Always "server_tool_call" |
| id | string(required) | An identifier associated with the tool call. |
| name | string(required) | The name of the tool to be called. |
| args | object(required) | Arguments to pass to the tool |
ServerToolCallChunk
Purpose: Streaming server-side tool call fragments
| name | type | desc |
|---|---|---|
| type | string(required) | Always "server_tool_call_chunk" |
| id | string(required) | An identifier associated with the tool call. |
| name | string(required) | The name of the tool to be called. |
| args | string(required) | Partial tool arguments (may be incomplete JSON) |
| index | number \| string | Position of this chunk in the stream |
ServerToolResult
Purpose: Result of a server-side tool execution
| name | type | desc |
|---|---|---|
| type | string(required) | Always "server_tool_result" |
| id | string | Identifier associated with the server tool result. |
| tool_call_id | string(required) | Identifier of the corresponding server tool call. |
| status | string(required) | Execution status of the server-side tool. "success" or "error". |
| output | any | Output of the executed tool. |
Provider-Specific Blocks
NonStandardContentBlock
Purpose: Provider-specific escape hatch
| name | type | desc |
|---|---|---|
| type | string | Always "non_standard" |
| value | object | Provider-specific data structure |
Usage: For experimental or provider-unique features
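Example (illustrative):

```python
{
    "type": "non_standard",
    "value": {"provider_specific_field": "..."}
}
```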
Additional provider-specific content types may be found within the reference documentation of each model provider.
TIP
View the canonical type definitions in the API reference.
INFO
Content blocks were introduced as a new property on messages in LangChain v1 to standardize content formats across providers while maintaining backward compatibility with existing code. They do not replace the content property; rather, they offer a complementary way to access message content in a standardized format.
Use with chat models
Chat models accept a sequence of message objects as input and return an AIMessage as output. Interactions are typically stateless, so a simple conversational loop amounts to re-invoking the model with a growing list of messages.
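A minimal sketch of such a loop, reusing the model and message types from the earlier examples (the model name is illustrative):

```python
from langchain.chat_models import init_chat_model
from langchain.messages import HumanMessage, SystemMessage

model = init_chat_model("gpt-5-nano")
messages = [SystemMessage("You are a helpful assistant.")]

while True:
    user_input = input("You: ")
    messages.append(HumanMessage(user_input))
    response = model.invoke(messages)  # Stateless call: pass the full history each time
    messages.append(response)          # Append the AIMessage to grow the history
    print(f"AI: {response.text}")
```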
Refer to the guides below to learn more:
- Built-in features for persisting and managing conversation histories
- Strategies for managing context windows, including trimming and summarizing messages

