
Large Language Models (LLMs) are fundamentally trained on a conversational pattern of human queries followed by assistant responses. When building multi-agent systems, we need to account for this expectation by converting agent-to-agent communications into a format that mimics human messages[1][2].
For example, if Agent A needs to communicate with Agent B, we need to transform Agent A's output into a "human message" format that Agent B can properly process. This conversion is typically handled by a wrapper function that reformats the sending agent's output, ensuring that the receiving agent's context window maintains the expected Human->Assistant flow[2].
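As a minimal sketch of such a wrapper (the function name `wrap_as_human_message` and the message dict shape are illustrative, not tied to any particular framework), the conversion might look like:

```python
def wrap_as_human_message(sender_name: str, content: str) -> dict:
    """Reformat another agent's output as a 'human' turn for the receiver.

    The receiving LLM expects a Human -> Assistant flow, so agent-to-agent
    messages are presented with the 'user' role and a sender prefix.
    """
    return {
        "role": "user",
        "content": f"[Message from {sender_name}]\n{content}",
    }

# Agent A's assistant output becomes a "human" message in Agent B's context
agent_a_output = {"role": "assistant", "content": "The report is ready."}
msg_for_b = wrap_as_human_message("Agent A", agent_a_output["content"])
```

Agent B can then append `msg_for_b` to its own message list and respond as if a human had asked.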
Diagram 1: Message Conversion Flow

Shared State Model, 2+ Agents
When multiple agents operate within the same system (such as a single server), they can share a common conversation history or state[3][7]. This shared state model simplifies agent interactions and ensures consistency across the system, provided that simultaneous updates are handled, for example with locking mechanisms.
Two-Agent Scenario
In the simplest case, two agents can maintain a single shared conversation history. Each agent can read and update the full context of previous interactions, enabling more informed and contextually appropriate responses[3].
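This setup can be sketched with a plain Python list standing in for the shared conversation history (the agent names and message structure are illustrative):

```python
# A single list shared by both agents serves as the conversation history
shared_history: list[dict] = []

def agent_turn(agent_name: str, reply: str) -> None:
    # Both agents append to, and read from, the same list,
    # so each turn sees the full prior context.
    shared_history.append({"agent": agent_name, "content": reply})

agent_turn("researcher", "Found three relevant papers.")
agent_turn("writer", "Drafting a summary of those papers.")
```

Because there is one history object, neither agent can fall out of sync with the other; the trade-off is that both must run in the same process or share a common store.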
N-Agent Scenario
With more than two agents in the same system, a shared state model keeps communication consistent and efficient[7]. There are several key patterns for implementing shared state in N-agent conversations:
- Shared Message List Pattern
  - Agents communicate through a common state channel, typically a list of messages[1]
  - All agents have access to the same conversation context
  - Two options for message sharing:
    - Share full history: Agents share their complete thought process ("scratchpad")[1]
    - Share final results: Agents maintain private scratchpads and only share final outputs[1]
- Event Sourcing Pattern
  - Every state change is logged as a sequence of immutable events[7]
  - Events are stored in an append-only log that serves as the single source of truth
  - Allows replaying of events to reconstruct system state at any point
  - Particularly useful for auditing and debugging agent interactions
- Centralized State Pattern
  - A central state manager maintains the shared context
  - All agents read from and write to this common state
  - Requires synchronization mechanisms to handle concurrent access
  - Simplifies state management compared to distributed approaches
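The Event Sourcing Pattern can be illustrated with a toy append-only log and a replay function (the `Event` fields and the "key=value" payload convention are assumptions for this sketch, not a standard):

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Event:
    """An immutable record of a single state change."""
    agent: str
    action: str
    payload: str

@dataclass
class EventLog:
    """Append-only log that serves as the single source of truth."""
    _events: list = field(default_factory=list)

    def append(self, event: Event) -> None:
        self._events.append(event)

    def replay(self) -> dict:
        """Reconstruct the shared state by replaying every event in order."""
        state: dict = {}
        for e in self._events:
            if e.action == "set":
                # payload is "key=value" in this toy example
                key, value = e.payload.split("=", 1)
                state[key] = value
        return state

log = EventLog()
log.append(Event("planner", "set", "task=summarize"))
log.append(Event("writer", "set", "draft=v1"))
print(log.replay())  # {'task': 'summarize', 'draft': 'v1'}
```

Because events are never mutated, the log can be replayed up to any point, which is what makes this pattern useful for auditing and debugging agent interactions.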
To maintain consistency when multiple agents access shared state:
- Implement locking mechanisms to handle simultaneous updates
- Use atomic operations for state modifications
- Consider using a state manager to coordinate access and updates
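A hedged sketch of such a state manager, using a `threading.Lock` to make updates atomic (the class and method names are illustrative):

```python
import threading

class SharedState:
    """Central state manager guarding a shared message list with a lock."""

    def __init__(self) -> None:
        self._lock = threading.Lock()
        self._messages: list[dict] = []

    def append(self, agent: str, content: str) -> None:
        # The lock makes each append atomic with respect to other agents,
        # preventing interleaved or lost updates under concurrent access.
        with self._lock:
            self._messages.append({"agent": agent, "content": content})

    def snapshot(self) -> list[dict]:
        # Return a copy so callers cannot mutate the shared state directly.
        with self._lock:
            return list(self._messages)
```

Routing all reads and writes through one object like this is what makes the centralized pattern simpler to reason about than a distributed approach.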
Diagram 2: Shared State Architecture

Separate State Model, 2+ Agents
In distributed systems, where agents may be running on different servers or in different locations, each agent needs to maintain its own version of the conversation history[6][8]. This model requires efficient serialization and deserialization of conversation states.
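A minimal sketch of this serialization step, assuming conversation state is plain JSON-serializable data (the state layout shown is illustrative):

```python
import json

conversation_state = {
    "agent_id": "agent-b",
    "history": [
        {"role": "user", "content": "Summarize the findings."},
        {"role": "assistant", "content": "Three key results were found."},
    ],
}

# Serialize before sending the state over the network ...
wire_format = json.dumps(conversation_state)

# ... and deserialize on the receiving agent's side
restored = json.loads(wire_format)
```

Any format works as long as both ends agree on it; JSON is shown here because message histories are typically simple nested dicts and lists.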
Two-Agent Scenario
With two distributed agents, each maintains its own conversation history and must explicitly share relevant context with the other agent when communicating[6]. This requires careful message packaging to ensure sufficient context is included with each interaction.
N-Agent Scenario
The complexity increases significantly with multiple distributed agents. Each agent must track its conversations with multiple peers and maintain separate conversation histories for each relationship. There are several key patterns for handling N-agent conversations:
- Broadcast Communication Pattern
  - Agents use a tool/function sendToAll() to broadcast messages to all other agents in the environment
  - Each agent maintains its own copy of the shared conversation context
  - Good for scenarios where all agents need to be aware of all communications
  - If updates are not properly coordinated, multiple agents can modify their copies simultaneously, causing the conversation to diverge into multiple conflicting states
- Hierarchical Communication Pattern
  - Agents are organized in a tree-like structure to efficiently propagate updates[1]
  - Parent agents aggregate and distribute messages to child agents
  - Reduces overall message volume compared to the broadcast pattern
  - If the full conversation history is not passed down the tree, child agents make decisions without seeing the complete context
- Group Conversation Pattern
  - Agents maintain a shared conversation context for group discussions
  - Messages are delivered to all participants using deliverToAllAgentsInside()
  - Requires synchronization mechanisms to maintain consistency
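The Broadcast Communication Pattern above can be sketched as follows; `send_to_all` mirrors the sendToAll() tool mentioned earlier, and the `Agent` class is an assumption of this sketch:

```python
class Agent:
    """Each agent keeps its own private copy of the conversation context."""

    def __init__(self, name: str) -> None:
        self.name = name
        self.context: list[dict] = []

    def receive(self, message: dict) -> None:
        self.context.append(message)

def send_to_all(sender: Agent, content: str, agents: list[Agent]) -> None:
    """Broadcast a message to every other agent in the environment."""
    message = {"from": sender.name, "content": content}
    sender.context.append(message)  # the sender records its own message too
    for agent in agents:
        if agent is not sender:
            agent.receive(message)

agents = [Agent("a"), Agent("b"), Agent("c")]
send_to_all(agents[0], "Task assigned: review section 2.", agents)
```

Note that nothing in this sketch prevents two agents from broadcasting concurrently; that is exactly the divergence risk the pattern description warns about, and why real implementations add ordering or synchronization.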
To maintain consistency across distributed agents, implementations can use one of the following:
- A gossip protocol where agents periodically synchronize their conversation histories[8]
- A distributed consensus algorithm to agree on conversation state[8]
- A central coordinator that sequences and broadcasts messages to all participants
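As a sketch of the gossip approach, two agents can merge their histories into a convergent order; this assumes each message carries a `(ts, author)` pair usable as a stable identity, which is an invention of this example:

```python
def merge_histories(local: list[dict], remote: list[dict]) -> list[dict]:
    """Gossip-style merge: union of both histories, ordered by timestamp.

    Using (ts, author) as a message identity makes repeated
    synchronization rounds idempotent and order-independent.
    """
    seen = {(m["ts"], m["author"]) for m in local}
    merged = list(local)
    for m in remote:
        if (m["ts"], m["author"]) not in seen:
            merged.append(m)
            seen.add((m["ts"], m["author"]))
    return sorted(merged, key=lambda m: (m["ts"], m["author"]))

a = [{"ts": 1, "author": "a", "content": "hello"}]
b = [{"ts": 1, "author": "a", "content": "hello"},
     {"ts": 2, "author": "b", "content": "hi"}]
assert merge_histories(a, b) == merge_histories(b, a)  # convergent
```

Because the merge is commutative and idempotent, periodic pairwise exchanges eventually bring all agents to the same history, which is the essential property a gossip protocol relies on.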
Diagram 3: Distributed State Architecture

The choice between shared and separate state models depends on your specific use case, system architecture, and scalability requirements. The shared state model offers simplicity and consistency but may not scale as well in distributed environments. The separate state model provides better distribution capabilities but requires more complex state management and context sharing mechanisms.
Sources:
1. https://www.jasss.org/27/2/2.html
2. https://www.reddit.com/r/LangChain/comments/1dpqtfw/sharing_history_between_independent_agents/
3. https://stackoverflow.com/questions/39690022/can-i-have-a-shared-model-between-2-redux-states-reducers
4. https://www.sciencedirect.com/science/article/abs/pii/S0950705122010097
5. https://forum.holochain.org/t/questions-on-shared-state-between-agents-as-well-as-validation-of-transactions/2388
6. https://algorithmsbook.com/files/chapter-27.pdf
7. https://www.linkedin.com/pulse/multi-agent-systems-shared-persistent-state-rajib-deb-oteqc
8. https://arxiv.org/abs/2410.15137
This article is licensed under CC BY-SA 4.0.