Strands Agents – An open-source Python SDK for building agents
According to Gartner, over a third of all enterprise applications will be powered by agentic AI by 2028. This isn't a roadmap for the future; it's already happening today.
There are many ways to build agents on AWS. The key proposition is to meet customers wherever they are in their agentic AI journey, whether through out-of-the-box agents, fully managed custom agents, DIY agents, or a combination of these.
· Use out-of-the-box specialized agents with Amazon Q: for customers looking to deploy agentic experiences immediately with minimal technical overhead, Amazon Q Business and Amazon Q Developer let you test and deploy agentic AI right away, or customize it further to meet the specific needs of your business.
· Or build your agentic AI application, fully managed and with guardrails, using Amazon Bedrock: build agents that integrate with your systems, data, and tools, giving you the flexibility to test different foundation models in a secure, managed environment, with a comprehensive toolset to build, deploy, operate, maintain, and scale trusted, high-performing AI agents in Amazon Bedrock.
· Or build DIY agents with Strands Agents, which offers a model-driven approach to building AI agents in just a few lines of code.
What are Strands Agents?
Strands Agents is an open-source Python SDK for building agents in just a few lines of code. It takes a model-driven approach, using the advanced reasoning capabilities of models to drive the agent, which allows the agent to perform complex, multi-step reasoning and actions. It is built by developers, for developers, and open-sourced by AWS.
· Strands simplifies agent development by embracing the capabilities of models to plan, chain thoughts, call tools, and reflect. Like the two strands of DNA, Strands connects the two core pieces of the agent: the model and the tools.
· Get started quickly: With Strands, developers can simply define a prompt and a list of tools in code to build an agent, then test it locally and deploy it to the cloud (see the sketch after this list).
· Model-driven approach: Strands plans the agent's next steps and executes tools using the advanced reasoning capabilities of models.
· Highly flexible: For more complex agent
use cases, developers can customize their agent's behavior in Strands.
· Model-agnostic: Strands can run anywhere and supports any model with reasoning and tool-use capabilities, including models in Amazon Bedrock, Anthropic, Ollama, and other providers through LiteLLM.
· Deploy anywhere: Deploy and run agents in any environment where you run Python applications, including Amazon ECS, AWS Lambda, and Amazon EC2.
· Built-in MCP: Native support for Model Context Protocol (MCP) servers, enabling access to thousands of pre-built tools. Strands also natively provides a number of pre-built tools, for example: image_reader to process and analyze images, use_aws to interact with AWS services, and http_request to make API calls, fetch web data, and call local HTTP servers.
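As a concrete illustration of the "Get started quickly" point above, here is a minimal sketch of a Strands agent. It assumes the strands-agents package (imported as strands) and the optional strands-agents-tools package (imported as strands_tools) are installed; the system prompt and the question are illustrative.

# pip install strands-agents strands-agents-tools
from strands import Agent
from strands_tools import calculator  # a pre-built tool from strands-agents-tools

# Define an agent from a prompt and a list of tools.
agent = Agent(
    system_prompt="You are a helpful assistant that can do basic math.",
    tools=[calculator],
)

# The agent decides on its own whether and when to call the calculator tool.
agent("What is the square root of 1764?")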
Core Working Principle: At the heart of Strands' capabilities lies the agentic loop, a continuous cycle in which an agent interacts with its model and tools to accomplish a task prompted by the user. This loop leverages the remarkable advances in large language models (LLMs), which can now reason, plan, and select tools with native proficiency.
In each iteration of the loop, Strands engages the LLM with the user's prompt, the agent context, and a description of the available tools. The LLM can respond in various ways: answering in natural language for the end user, outlining a series of steps, reflecting on previous actions, or selecting one or more tools to use. When the LLM chooses a tool, Strands seamlessly executes it and returns the result to the LLM. Once the task is complete, Strands delivers the agent's final outcome.
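To make the loop concrete, here is a small sketch of an agent with a custom tool, assuming the SDK's tool decorator; get_order_status and its return value are hypothetical and stand in for a call to a real backend. When the user asks about an order, the model selects the tool, Strands executes it and feeds the result back to the model, and the model then produces the final natural-language answer.

from strands import Agent, tool

@tool
def get_order_status(order_id: str) -> str:
    """Look up the shipping status of an order by its ID."""
    # Hypothetical stand-in for a lookup against an internal order system.
    return f"Order {order_id} shipped on May 1 and is out for delivery."

agent = Agent(
    system_prompt="You are a customer-support assistant.",
    tools=[get_order_status],
)

# One pass through the agentic loop: the model reads the prompt and the tool
# description, chooses to call get_order_status, Strands executes it, and the
# model turns the tool result into the final reply for the user.
agent("Where is my order 12345?")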
Join the Strands Agents community
Strands Agents is an open-source project licensed under the Apache License 2.0. Contributions are welcome: you can add support for additional models and tools, collaborate on new features, or expand the documentation. If you find a bug, have a suggestion, or have something to contribute, join the project on GitHub.