Resources

Building Agentic AI applications with Positron and AWS Strands Agents (Greg Headley & Shun Mao, AWS)

In this video, Greg Headley and Shun Mao introduce developers to AWS tools designed to support the creation of agentic AI applications. Recorded at posit::conf(2025).


Transcript

This transcript was generated automatically and may contain errors.

My name is Greg Headley. I'm a Partner Development Manager at AWS for Posit. I'm excited to be here and talk to you all this morning to show how developers and data scientists can use AWS tools and Posit tools to build agentic applications, and really go from laptop to production-grade agents with governance and scale.

In a minute, I'm going to hand it over to Shun Mao, who is our Partner Solution Architect at AWS. But I just first wanted to kind of take a step back and share the vision of AWS and the agentic AI space. So not only are we building an AI infrastructure that's capable of supporting thousands to millions of agents, but we're also investing in making AWS the best place to build and deploy the world's most useful and most performant AI agents.

And we do that through providing choice. There's a lot of things on this slide, but what I wanted to capture from it is really just making the comment that we meet you where you are.

We're doing this in three ways. There are three areas you can think about: one is specialized or ready-to-deploy agents, another is managed services, and then DIY, which we'll talk about here in just a minute. And so whether you're looking to quickly deploy pre-built agents to boost productivity, experiment with open source tools, or build a fleet of sophisticated custom agents, AWS is providing the models, the tools, the infrastructure, and the expertise to help you succeed.

Introducing Strands Agents

And in May, we announced the launch of Strands Agents, which again we're going to talk about in just a minute and actually show you, which is an open source SDK for building agents with just a few lines of code. Strands is very simple to use and takes a model-driven approach, leveraging the reasoning capabilities of state-of-the-art language models instead of requiring developers to define complex workflows. That in turn allows developers to simply define a prompt and a list of tools and let the LLM-powered agent autonomously handle planning, chain-of-thought reasoning, and tool usage.


That's my two-minute commercial. I wanted to now hand it over to Shun to really stop the talk and start the walk and show you what we're building.

Thank you, Greg. Hi, everyone. Next, I'm going to go a little deeper into Strands Agents. As Greg has pointed out, Strands Agents is an open source SDK that AWS has released to help developers build agentic AI applications with very minimal coding.

What I think Strands Agents brings is that it simplifies the complexity of agent development by bringing three core components together, namely prompts, LLMs, and tools.

So, for the prompt, you can customize it however you want. You can have the agent play any role you want. For example, the agent can just answer some simple questions, it can generate code, it can do complex data analysis, or maybe help you with your financial portfolio planning.

So, for the model part, Strands Agents integrates with a wide variety of large language models. For example, you can use Amazon Bedrock models, which include Anthropic Claude models, Cohere models, Amazon's own Nova model series, and many more. Also, if you are worried about the security of your data, you can integrate with your own locally deployed model through the Ollama framework, and OpenAI models, of course, through the LiteLLM framework. So, there are a lot of great model choices over there, and you can choose which one you like the best.

What's most exciting for me is the tool integration with Strands Agents. Strands Agents has a lot of tool choices. First, it has a lot of built-in tools, such as a calculator, file operation tools, shell command tools, HTTP client tools, video and image processing tools, and many more. What's also great is that you can easily convert the Python functions you use daily into tools that Strands Agents can use.

I know a lot of data scientists and data analysts are here. Daily, you're probably using a lot of Python functions you've written yourself. You can easily convert those Python functions into tools and let the model decide, at each step, which tool it's going to use to do your complex data analysis.

There are some other features I don't have time to go through right now, such as enterprise-level deployment support, real-time streaming, as well as multi-agent architecture support.

Live demo in Positron

So, now let's quickly hop into Positron, the IDE that we're talking about these days, and see how we can easily build an agentic AI application with Strands Agents.

So, I have three tabs, actually. I want to go through the first tab just to show you how easily you can build these agents with very minimal coding. I'm not going to run this notebook live, for the sake of time, but after I show this notebook, I will wrap all the code into a .py file, which is a Streamlit app file, run that Streamlit app, and we're going to test some prompts over there.

So, if you have one takeaway after this talk, it's probably just to pip install these two SDKs released by AWS, which are called strands-agents and strands-agents-tools. Of course, I have some other packages that I used here. For example, I have Pinecone to create a knowledge base.

So, this is the line that I'm talking about. So, there's two packages that you need to install. That's all you need. And, of course, you can have some auxiliary packages to help you build your Python functions.
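For reference, the install step described above looks roughly like this; the two Strands package names are as published on PyPI, and the auxiliary package list is an assumption based on what the demo mentions:

```shell
# Core Strands Agents SDK plus the optional built-in tools package
pip install strands-agents strands-agents-tools

# Auxiliary packages used later in this demo (assumed from the talk)
pip install pinecone duckduckgo-search streamlit
```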

So, let me quickly go through the key steps. The first step is to set the AI model. In this case, I chose an Amazon Bedrock model. There are a lot of model choices over there; in this case, I have a series of Anthropic Claude models. For example, here I choose a Claude Sonnet 4 model. And the way to wrap the model is to use this BedrockModel class: you just put the model you selected over there and specify the region.

And let's create the first, simplest agent over here. This first demo agent is without any tool; it's just an LLM with a system prompt. That can be an agent, right? But it doesn't have any specialization. It's just using the model's own knowledge. The way to do that is to just use this Agent class and wrap in the system prompt. And there you go: you can ask it a question, and it will give you the answer. So, that's the simplest one.
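The two steps just described can be sketched as follows; this is a minimal sketch assuming the strands-agents package is installed and AWS credentials are configured, and the model ID and region shown are illustrative:

```python
from strands import Agent
from strands.models import BedrockModel

# Wrap a Bedrock-hosted Claude model; model ID and region are illustrative.
model = BedrockModel(
    model_id="us.anthropic.claude-sonnet-4-20250514-v1:0",
    region_name="us-west-2",
)

# The simplest agent: just a model and a system prompt, no tools.
agent = Agent(
    model=model,
    system_prompt="You are a concise assistant for data scientists.",
)

# Calling the agent sends the prompt to the LLM and returns its answer.
response = agent("What is an agentic AI application?")
print(response)
```

Running this requires live Bedrock access, so treat it as a template rather than a copy-paste demo.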

What gets more exciting is that we can start adding tools. This is the high-level architecture: the LLM will derive the steps dynamically, deciding which tool to use at each step to accomplish the specified task. The way to import the built-in tools is through the tools package; in this case, I import the calculator. The way to embed tools into the agent is through the tools parameter: you put the tool name into this list, you give it the model you want to use, and you put everything into this Agent class. And there you go, that's an agent already.

So, if you ask a mathematical question, it will utilize the tool. We all know that a lot of the time the LLM itself is not good at doing mathematical calculations. That's why you can give it the calculator tool to do a more precise calculation.
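A sketch of the built-in-tool pattern just described, assuming both Strands packages are installed and a default model is available to the agent:

```python
from strands import Agent
from strands_tools import calculator  # built-in tool from the strands-agents-tools package

# Give the agent the calculator so it doesn't have to guess at arithmetic.
agent = Agent(
    tools=[calculator],
    system_prompt="Use the calculator tool for any arithmetic.",
)

# The LLM plans the steps itself: it recognizes a math question and calls the tool.
agent("What is 1234567890 squared, minus 987654321?")
```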

Converting Python functions into tools

Next, I'm going to show how easily you can convert a Python function into your own tool. In this case, I have a very simple function, probably the simplest, which gets the current date and time. You probably also know that an LLM's training data has a cut-off date, so it cannot give you the most up-to-date information, including the current time. You can complement that by writing some small functions; in this case, it's the time function.

I want to point out how the LLM recognizes this tool: it's all within the docstring. You need to write down a description of the tool, what it does, what kind of parameters it takes, and what format of output it returns. That way, the LLM can recognize how to use this tool. In my case, it's very simple: I just want to get the current date and time in a certain format, and I'm utilizing one of Python's time-related packages to return the time.

Next is the weather tool. It's even easier. Here, I'm not calling any API; I'm just returning a simple string. But you could use a third-party API to really get the weather. This is how you let the LLM know that this is the weather tool. Having multiple tools in one agent is easy: you just put the tool names into the list you give to the tools parameter. And there you go. Then you can ask what the current date and time and the weather are, and voila.
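The multi-tool setup just described could look like this; the canned weather string mirrors the demo's stub, the city and wording are assumptions, and the final agent call needs live model access:

```python
from datetime import datetime

from strands import Agent, tool


@tool
def current_time() -> str:
    """Get the current date and time as a YYYY-MM-DD HH:MM:SS string."""
    return datetime.now().strftime("%Y-%m-%d %H:%M:%S")


@tool
def get_weather(city: str) -> str:
    """Get the current weather for a city (canned demo answer, no real API call)."""
    return f"The weather in {city} is sunny and 72 degrees."


# Listing both tools lets the LLM pick the right one at each step.
agent = Agent(tools=[current_time, get_weather])
agent("What is the current date and time, and what's the weather in Atlanta?")
```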

So, it utilizes these two tools in two steps and gets the answers for the date and time and the weather. At this point, I think it's already becoming quite powerful, because you have a lot of Python functions for complex data analysis or processing, and you can convert those functions into tools. All you need to do is add this decorator and give a description. Maybe you already have that description in your Python function, right, if you have good Python programming habits.

Calling third-party APIs and knowledge bases

So, next, I'm going to demo how you can call third-party APIs. A lot of times you need to query third-party APIs to get some information. It can be a database API, or, in this case, a web search API, which I've implemented through DuckDuckGo. It's an open source, free tool to query the web; it's basically a web search. The way to do that is the same: it's through a Python function. But in this case, I'm importing the DuckDuckGo package and doing the query. It's just a simple API call, and you get the result. That's a tool already.
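A sketch of such a web search tool, assuming the duckduckgo-search package (recently renamed to ddgs on PyPI); the function name, snippet format, and result count are illustrative, and running it requires network access:

```python
from duckduckgo_search import DDGS  # pip install duckduckgo-search

from strands import tool


@tool
def web_search(query: str, max_results: int = 5) -> str:
    """Search the web with DuckDuckGo and return titles and snippets of the top results.

    Args:
        query: The search query.
        max_results: Maximum number of results to include.
    """
    with DDGS() as ddgs:
        results = ddgs.text(query, max_results=max_results)
    # Each result dict carries 'title', 'href', and 'body' keys.
    return "\n\n".join(f"{r['title']}: {r['body']}" for r in results)
```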

With that tool, you can actually ask questions related to the latest information happening right now. For example, I tested the 2025 World Athletics Championships 100-meter men's race. I'm not sure all of you know the answer; it just happened a few days ago. If you ask that, it will perform the search by utilizing the web search tool you have just defined, search the web, extract the most relevant information, and return you the result.

The answer, actually, is that this championship took place in Japan this week, I think. Yeah, a Jamaican athlete won the gold medal here.

Next is a knowledge base. How do you call a knowledge base? A lot of enterprises have their own proprietary data that you cannot find on the Internet. In that case, you need to put that data into a knowledge base. Here, I'm using another AWS partner, Pinecone, as an example, because I've saved some unstructured data into Pinecone as a vector database. The way to do it is also similar: first, you supply the knowledge base's API credentials, give the index name, and write the code for how to call that vector database. You wrap everything up into one function.
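A possible shape for that knowledge base tool, assuming the pinecone client package; the API key placeholder, index name, and metadata field are illustrative, and `embed()` is a hypothetical helper standing in for whatever embedding model the index was built with:

```python
from pinecone import Pinecone

from strands import tool

pc = Pinecone(api_key="YOUR_PINECONE_API_KEY")   # placeholder credential
index = pc.Index("demo-knowledge-base")          # illustrative index name


def embed(text: str) -> list[float]:
    """Hypothetical helper: must use the same embedding model the index was built with."""
    raise NotImplementedError


@tool
def search_knowledge_base(query: str) -> str:
    """Retrieve the passages most relevant to `query` from the proprietary knowledge base."""
    results = index.query(vector=embed(query), top_k=3, include_metadata=True)
    return "\n".join(match["metadata"]["text"] for match in results["matches"])
```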

You can customize your prompt and put everything, all the tools you have, into one agent. Let me quickly go through the code on the screen, because I don't have time. You just put all the code I have shown into this .py file: all the tools, all the agent definitions, the system prompt. Then you have all this Streamlit UI design portion, which is unrelated to the agent itself.

The cool thing about Positron is that I can run that live within Positron. This is the app I've talked about. I wrapped everything up, and I have the full tool list here. We can ask some questions. I'm going to ask one question here, because I don't have time: what's the square root of, let's say, a huge number? Behind the scenes, it's calling a tool, because I've given the calculator tool to this agent, and it's performing the calculation.
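A minimal sketch of what such a Streamlit wrapper could look like; the file name, titles, and prompt wording are assumptions, and only the calculator tool is wired in for brevity:

```python
# app.py -- minimal Streamlit chat UI wrapped around a Strands agent
import streamlit as st

from strands import Agent
from strands_tools import calculator

agent = Agent(
    tools=[calculator],
    system_prompt="You are a helpful data-analysis assistant.",
)

st.title("Strands Agents demo")
prompt = st.chat_input("Ask the agent something")
if prompt:
    st.chat_message("user").write(prompt)
    # The agent plans and calls tools (here, the calculator) as needed.
    result = agent(prompt)
    st.chat_message("assistant").write(str(result))
```

You would launch it with `streamlit run app.py`, which is what running the app live inside Positron amounts to.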

I could demo a lot of prompts, but I don't have time today. The key point is that you can use this right now to build the agents you want for data analysis or data processing. It's very, very easy for you to do. Thanks so much. That's what I wanted to demo. Thank you.