Strands Agent Framework: Implementing Multi-Agent Collaboration [Strands, LangGraph and Bedrock] as Tools
Strands Agents is an open-source SDK from AWS designed to streamline the creation and deployment of AI agents. By adopting a model-driven approach, it lets developers build agents with minimal code, leveraging advanced language models for planning, tool use, and reflection. For more information on the Strands Agents framework, refer to the documentation — https://strandsagents.com/. To get started, refer to this GitHub repo, which provides a step-by-step guide to Strands Agents covering pre-built tools, custom tools, and MCP servers — https://github.com/thandavms/strands_agents
The focus of this article is to build a multi-agent application using diverse agent frameworks — Strands, LangGraph, and Amazon Bedrock.
Flow:
We will have a Strands agent acting as a supervisor with 2 tools. One tool is a Bedrock agent, an expert in Agentic AI memory; the other is a LangGraph agent, a general-intelligence expert that searches the internet to answer user queries.
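Conceptually, the supervisor routes each query to whichever tool fits it. The sketch below illustrates that flow with hypothetical stub functions and simple keyword routing; in the real application, the Strands agent's LLM makes the tool-selection decision, and the tools are the Bedrock and LangGraph agents built later in this article.

```python
# Illustrative sketch only: in Strands, the LLM decides which tool to call.
# Here a hypothetical keyword check stands in for that decision, and the two
# stub functions stand in for the Bedrock and LangGraph agent tools.
def memory_expert(query: str) -> str:
    return f"[Bedrock agent answer for: {query}]"

def general_expert(query: str) -> str:
    return f"[LangGraph web-search answer for: {query}]"

def supervisor(query: str) -> str:
    # Route agentic-memory questions to the Bedrock agent tool,
    # everything else to the LangGraph web-search tool.
    if "memory" in query.lower():
        return memory_expert(query)
    return general_expert(query)

print(supervisor("what are the agentic memory types"))
print(supervisor("who is usain bolt"))
```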
Pre-requisites:
- Create a Bedrock Agent: You will need a Bedrock agent created that will act as a tool for your Strands agent. For details on how Bedrock agents work, refer to this blog — https://medium.com/@meenakshisundaram-t/decoding-amazon-bedrock-agents-375c6167accc. For the purpose of this article, you can follow this repo to create a Bedrock agent — https://github.com/thandavms/langgraph_bedrock/tree/main/1.%20bedrock_resources/4.%20agent. This repo creates a Bedrock agent [Agentic Memory] backed by a knowledge base of articles on Agentic AI memory.
- Tavily Tool: The LangGraph agent will search the internet using Tavily, so you will need a Tavily API key. Create one here — https://tavily.com/
- .env File: Create a .env file like the one below and add the required details
TAVILY_API_KEY=tvly**********
AGENT_ID=
AGENT_ALIAS=
AWS_REGION_NAME=us-west-2
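At runtime, the code below loads these values into environment variables with python-dotenv's load_dotenv(). For illustration only, here is a minimal standard-library sketch of what that loading amounts to (the hypothetical load_env_file is not part of any library used in this article):

```python
import os

def load_env_file(path: str = ".env") -> None:
    """Minimal stand-in for dotenv.load_dotenv(): parse KEY=VALUE lines."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blanks, comments, and malformed lines
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # setdefault: real environment variables take precedence
            os.environ.setdefault(key.strip(), value.strip())

# After loading, the code reads values via os.environ / os.getenv, e.g.:
# region = os.environ.get("AWS_REGION_NAME")
```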
Set up primitives for Strands Agent:
Let us start building our Strands agent. Strands agents come with sensible defaults for their primitives (model, system prompt / instructions, conversation memory). However, you can override them as shown below, and it is a good practice to do so.
from strands import Agent, tool
from strands.models import BedrockModel
from strands.agent.conversation_manager import SlidingWindowConversationManager

## 1. Model
bedrock_model = BedrockModel(model_id="us.amazon.nova-pro-v1:0")

## 2. System Prompt / Instructions
system_prompt = "You are a helpful assistant that provides concise answers."

## 3. Conversation Manager
conv_manager = SlidingWindowConversationManager(window_size=10)
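The sliding-window conversation manager keeps only the most recent window_size messages, so the prompt context never grows without bound across turns. A simplified illustration of the idea (this hypothetical class is not the Strands implementation, just a sketch of the behavior):

```python
from collections import deque

class SlidingWindowMemory:
    """Illustrative sketch: keep only the last N conversation messages."""
    def __init__(self, window_size: int):
        # deque with maxlen silently discards the oldest entry when full
        self.messages = deque(maxlen=window_size)

    def add(self, role: str, content: str) -> None:
        self.messages.append({"role": role, "content": content})

    def context(self) -> list:
        """Messages that would be sent to the model on the next turn."""
        return list(self.messages)

memory = SlidingWindowMemory(window_size=3)
for i in range(5):
    memory.add("user", f"message {i}")

# Only the 3 most recent messages survive
print([m["content"] for m in memory.context()])
# → ['message 2', 'message 3', 'message 4']
```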
Define Tools for the Agent:
Let us now create the tools for the Strands Agent.
Tool 1: LangGraph Agent for answering general intelligence questions
Below is a simple LangGraph graph that has 2 nodes. One searches the internet for information and the other one summarizes the information.
import os
from typing import TypedDict

from dotenv import load_dotenv
from langchain_aws import ChatBedrock
from langchain_community.tools.tavily_search import TavilySearchResults
from langgraph.graph import StateGraph, START, END

@tool
def general_intelligence(query: str) -> str:
    """
    Answer generic queries by searching the internet

    Args:
        query: User query on the topic of interest

    Returns:
        Summary of the content
    """
    print("CALLING WEBSEARCH")
    load_dotenv()
    tavily_api_key = os.getenv("TAVILY_API_KEY")

    ## Model - Agent Brain
    llm = ChatBedrock(model="us.amazon.nova-pro-v1:0")

    ## Graph State
    class State(TypedDict):
        query: str
        web_search: str
        final_answer: str

    def search_web(state: State):
        print("SEARCHING WEB")
        search_tool = TavilySearchResults(max_results=2, tavily_api_key=tavily_api_key)
        web_results = search_tool.invoke(state["query"])
        return {"web_search": web_results}

    def aggregator(state: State):
        print("AGGREGATING RESPONSE")
        prompt = f"""Your job is to summarize the context provided to you. The context includes information from a web search: {state["web_search"]}"""
        final_answer = llm.invoke(prompt)
        return {"final_answer": final_answer.content}

    # Build the graph: search the web first, then summarize the results
    graph_builder = StateGraph(State)
    graph_builder.add_node("call_websearch", search_web)
    graph_builder.add_node("aggregator", aggregator)
    graph_builder.add_edge(START, "call_websearch")
    graph_builder.add_edge("call_websearch", "aggregator")
    graph_builder.add_edge("aggregator", END)
    graph = graph_builder.compile()

    state = graph.invoke({"query": query})
    return state["final_answer"]
Tool 2: Bedrock Agent for answering Agentic AI memory queries
Below is a Bedrock Agent invocation call wrapped in a tool.
import os
import uuid

import boto3
from dotenv import load_dotenv

@tool
def agenticmemory_intelligence(query: str) -> str:
    """
    Answer queries related to Agentic memory and AI Agents

    Args:
        query: User query on the topic of interest

    Returns:
        The Bedrock agent's answer
    """
    print("CALLING KB")
    load_dotenv()
    region = os.environ.get("AWS_REGION_NAME")
    bedrock_agent_runtime_client = boto3.client("bedrock-agent-runtime", region_name=region)

    session_id = str(uuid.uuid1())
    agent_id = os.environ.get("AGENT_ID")
    alias_id = os.environ.get("AGENT_ALIAS")

    # Invoke the agent API
    agent_response = bedrock_agent_runtime_client.invoke_agent(
        inputText=query,
        agentId=agent_id,
        agentAliasId=alias_id,
        sessionId=session_id,
        enableTrace=True,
        endSession=False,
    )

    # The response is an event stream; accumulate the answer chunks and
    # ignore other event types (e.g. trace events emitted by enableTrace)
    agent_answer = ""
    for event in agent_response["completion"]:
        if "chunk" in event:
            agent_answer += event["chunk"]["bytes"].decode("utf8")
    return agent_answer
Create and Invoke the Strands Agent
Now, create the Strands agent and associate the tools. You can then invoke the agent with questions. The Strands agent will invoke the Bedrock agent for Agentic AI memory questions and the LangGraph agent for generic questions.
### Create a tool list
tool_list = [general_intelligence, agenticmemory_intelligence]

### Create the Strands Agent
agent = Agent(
    model=bedrock_model,
    tools=tool_list,
    system_prompt=system_prompt,
    conversation_manager=conv_manager,
)

### Invoke the agent with a general question
response = agent("who is usain bolt")

### Invoke the agent with an agentic AI memory question
response = agent("what are the agentic memory types")
You can find the solution in this git repo — https://github.com/thandavms/strands_agents/tree/main/workshop/3.multigent_fwks
Closing thoughts:
In the above example, we created a multi-agent workflow in under 100 lines of code, leveraging Amazon Bedrock primitives (models), an Amazon Bedrock agent, and a LangGraph agent. Strands agents are also compatible with models from Anthropic, Llama, Ollama, and custom providers via LiteLLM. Once you have built your Strands agent, you can deploy it on AWS Lambda, Fargate, EKS, or EC2.