Resources

Positron Assistant for Developing Shiny Apps - Tom Mock

Positron Assistant for Developing Shiny Apps - Tom Mock (Posit) Abstract: This talk explores building AI apps, with a focus on Positron Assistant for the Shiny developer experience and in-IDE tooling for accelerating app creation. It discusses tools like ellmer / chatlas / querychat / shinychat and compares them to Positron Assistant. Resources mentioned in the presentation: * Positron - https://positron.posit.co/ * Positron Assistant - https://positron.posit.co/assistant.html

Sep 29, 2025
25 min

image: thumbnail.jpg

Transcript

This transcript was generated automatically and may contain errors.

My name is Tom Mock, I'm a product manager here at Posit, and I oversee the Positron IDE, as well as RStudio and Posit Workbench, along with AI integrations into those tools, both in some of the add-in space for RStudio and, as we'll be talking about today, within Positron via Positron Assistant.

I think with a lot of the tools that we've talked about today, there's kind of the integration points, as well as the applications you might build out of that. With Positron Assistant, and what I'm talking about in my section, it'll mostly be focused on the developer tooling. So what is the experience that you as a data scientist, data analyst, applied scientist are doing inside your IDE?

What is Positron Assistant?

Positron Assistant looks something like this inside of Positron. If you're not aware of what Positron is, it's a brand new IDE that we've released into general availability on desktop, and that's coming out in Posit Workbench here shortly.

But Positron Assistant is a subtool within the Positron IDE that provides a unified chat interface with different backend model providers. It's a GenAI client native to Positron, a client meaning that Posit doesn't actually see any of the data that you're sending. It's purely between you and the model provider you choose.

And to that point, it's built off this idea of a bring-your-own-model-provider backend. I'll talk about it a little more, but right now it's very Anthropic-centric because of how good their Claude models are at code generation, though we layer our own system prompt and other context on top of that. And ultimately we expect that different customers, especially like yourselves, or different user groups, would actually be bringing some of your own potentially self-hosted models or routing to many different model providers.

Part of the play here, part of the reason why we're investing so heavily in this space, is there's lots of great extensions out there. You can use Google Gemini or Amazon Q or Cline or Claude Code or all these other tools inside of tools like Positron. But a lot of them, if not all of them, are missing the context of what you do in memory, the ephemeral state beyond just the files on disk. And I'll talk a little bit about what that means.

Again, if you wanted to try this out, you could download Positron desktop today and try it in your desktop environment. Or if you are using Posit Workbench, it'll be available there shortly in our upcoming release that is due out in days to weeks.

Model providers and data privacy

As I mentioned, part of the story here is that the AI space is moving really rapidly, so there's lots of different models out there and different model providers, and Positron Assistant is trying to be a general, unified interface to them. Today that means we support two providers: Anthropic directly via API key for chat, and GitHub Copilot for inline completions. We're also exploring GitHub Copilot for chat modalities, along with a host of other providers like Databricks, Snowflake, and AWS Bedrock that provide models both from Anthropic and others.

Lastly, and I want to emphasize this for this group of people in terms of data privacy and potentially cost efficiency: I've talked to a lot of different people who are self-hosting their own models, whether that's for privacy reasons or because very specific datasets are not allowed to go out to these cloud providers, even with the contracts you have in place. So we also want to support OpenAI-compatible endpoints in the future to allow you to basically bring your own model. That's intended to be used with high-quality models, models that you are serving up or providing access to.

While I've talked a little bit so far about the sidebar chat modality, it also offers inline chat. So even without code completion, because some of these model providers don't offer an inline completion model, there's still support for things like inline chat. And importantly, whether you're working in the sidebar chat or the inline chat, you still have access to all of the custom context I'll talk about in a second.

As an example, I have a source file, I can open it up and ask a question, and it will inject a little portion that opens up a temporary chat just within that line of code. It grabs the line of code you're working on and a little bit of extra context around it, as well as all the in-memory context: things like your console in R or Python, your in-memory data frames, or other objects you've defined.

Enriching context beyond files on disk

So again, at the very beginning kind of coming back around to this, I said, part of why we're doing this is we want to enrich the context beyond the files on disk, right? All the AI tooling out there within an IDE largely says, hey, you're working in a source file, we take that source file, package it up, send it to the model as a prompt along with your ask and then you get a response back.

Positron goes a step further and uses a technique called tool calling internally to provide additional tools that the model is aware of. These tools could be things like: if the user asks about something like the current session, the model can ask Positron, what is the user working in? They're working in an R environment, and we can report back information like, your R console is R version 4.4.3, and inside of your session, not just the actual console but your session, you've defined a data frame like this Olympics dataset, and here's the schema: how big it is, the column names, the number of rows, and other information that's loaded there in memory.

And we can repeat this for many other tools available inside of Positron, like the plot pane, session input and output, the console history, and other tools we want to make available over time within the IDE. But importantly, this type of novel context goes beyond what a lot of the other AI tooling provides, in addition to all of the source files, the training data, and other things that are available outside of the in-memory context.

Importantly, this also works for Python. Positron is intended to be a polyglot IDE, so the things you can do in R, you can also do in Python, and vice versa. So you could ask a similar question and it can report back about a Python console as well as the in-memory state there.

In-memory data frames and runtime errors

Now, to dive in a little further, now that we've defined this idea of tool calling, you could imagine I might be building applications or doing an analysis, and I'm working with specific datasets that have been defined. So when I'm asking questions, the model can actually ask, what is this data frame I have defined? This data frame contains these columns, and it describes them: the encoding, the schema of that data. So rather than you having to manually encode this into your source file, or manually write it out in your literal prompt, like "for column one, which is MPG, do X", the model can passively know this information as you're asking other questions, where that better context should lead to better results or better outputs from your model.

The other example that I think is really neat is that because you have an active console, there's context there. One nuance I like to highlight: all IDEs can tell you when you have a static analysis error in your source file, meaning you've done something like misspell a function name. Even without LLMs, your IDE can say, hey, there's a problem here, you should probably fix it.

However, if a data frame or a data object is missing a column, your script doesn't know that, and the language server protocol, the static analysis layer on top, doesn't know that either. But when you run that code, you get runtime errors, as opposed to linting errors, and that's where Positron Assistant can also help out. As you're building up an analysis or debugging something, it has context on not only what the source file shows, but the error messages, the literal traceback in the console. It can pull that in and say, I see the source file and the lines you're working on, I see the error, let me help you fix that with all of that context. And again, both R and Python, not just limited to one language.

Plot pane and session state context

You can also go one step further. A lot of models out there can interpret graphics or images, and we make use of this for viewing things like plots. Not only can the model and Positron Assistant work on the source code that generated a plot, but the actual plot you generated can be included as context. So it could help with things like: this looks overly compact, do you want to change the dimensions? Your source code may say nothing about the XY coordinates or the size of the chart, but the model can see in the plot pane that the graphic is too small. Or: what are other ways of showing a distribution or this data in a different way? Or: help me generate alternative text or a description of this plot. And many other situations.

And then lastly, around the session state, there's all that information about what language is active in the console, R or Python, what version, and what packages have been loaded or are available in the environment. What was the last execution or error? Help me fix those things. All of these little pieces of context, when you add them together, give the model a lot more to work with to generate better results based on what you're actually working on. Rather than having to manually specify, here are the 10 packages that I want to use, it has some of that context passively.

Now, you could look at some of this and think, wow, that's a lot of data, and I'm pretty sensitive about what data is going across. Again, importantly, this is between you and the model provider. You've already vetted a model provider to allow you to ask questions, or you have some type of contract or negotiation with them. We as a company, as Posit, are not trying to inject ourselves in the middle or really thinking about hosting models; we're providing a client. So that relationship, and the data being transferred back and forth, is between you and the model provider. It's only being sent to the large language model API, not to others.

Inline error correction and console actions

I'll also talk a little bit about a roadmap item that we've been working on. I talked about how you can fix things in the Positron Assistant sidebar chat and ask a question, but we're also injecting fix and explain actions directly within the console. So as soon as you see an error, there's a little shortcut you can use to help resolve that error, or to explain what's going on if you can't interpret the traceback ad hoc.

So in this exact case, I've clicked on fix, and it says, here's the traceback, here's a little context of what we're actually trying to fix in the error. And then it says, okay, along with that, I've built out the structure of the project you're working on, I see the file you're working on, and here's an interpretation of the error message and how to resolve it. So it's not only solving the problem for you, but helping you understand: in the future, if you see something like this, here's the problem state and why it was occurring.

Shiny extension and chat participants

Lastly, I also told Phil that, you know, I talked a little bit about data apps, and I think it's important to call out that, you know, we have other expertise here within the company, right? Like we've been developing Shiny for a long time. And so part of what we're doing is showing examples of how you could take the Positron Assistant and extend it further. So the Shiny extension within Positron actually contributes what's called a chat participant.

And this allows you to make the model even better at developing Shiny applications without fine-tuning it. It just provides a lot of context and examples to the model to say, hey, here's how to design a good application with best practices as of today, modern Shiny app development.

You can invoke it via @shiny in the chat and then ask it a question. Here I'm taking a script I have in R and saying, convert it to an application. It will assist you in writing a brand new app, or you could take an existing application and refactor it, improve upon it, or modernize it, for example.

So this is, again, providing a lot of additional context. And as an individual user or a user group, you could imagine actually writing your own chat participants, or there are other integration points where you can add in novel context via MCP servers or other approaches, bringing in the approaches you are taking within your company.

Taking the bigger picture, in this example I basically took an R file, one-shot an application, it put it into an app.R, and then I just clicked run. Now I can see the application side by side with my source code and the chat that generated it, and I can further iterate from there. You can obviously do debugging or alter your source file, and in real time, just as you would inside of Positron, build up your application, see changes as they happen, and then continue to refactor, with the Assistant trying to help you out throughout the process.

Roadmap: agents, more providers, and privacy

I see a lot of good questions in the chat, so I'll try and kind of briefly touch on a few items for the future and then answer those questions there in the chat. But I would say what we've been working on as well are further modes or what I would call agents, right? Assistant is very much like you ask, you get an answer, and it has an agent mode where it can go do things, but we're also working on tools to actually help you with exploratory data analysis in more of an agent way.

So I think there's a blog post out now, and I can post it into the chat, about Databot, Phil. That talks a little bit about what we're doing in that area. And I'll also mention that, again, we're trying to bring additional model providers into play here. Positron Assistant is supported via Anthropic directly today, but we're actively working on Copilot Chat, and we plan to bring AWS Bedrock, Snowflake, Databricks, and other providers in the near future.

And lastly, again, there's some questions about data and privacy. We ultimately think that for a lot of folks, in y'all's case, in y'all's industry, you'll actually be doing routing of access via a centralized service, or might even be self-hosting foundation models for your company, in a way where you don't even have to send anything outside of your privacy boundary or your firewall.

Q&A

I'm gonna go back through and answer some of the chat questions. I think I have two minutes, Phil, to go through that. Yeah, we've got a ton of questions, Tom. You're our last speaker, so feel free to take these. If you want me to help you read them out, I can, or if you wanna just read them in the chat, that's fine too. So yeah, take them in.

There's one question, which is the expected timeline for AWS Bedrock integration. What I would say is we're currently testing that internally; we actually use a lot of AWS Bedrock internally for our own processes. So I would imagine later this year, in 2025, is our goal.

And we have other providers, again, where I think OpenAI-compatible endpoints, as kind of a general approach, and Copilot Chat are very much the next set of model providers we're looking at. So those are very top of mind for us.

There's a question about, does Positron store or log context somewhere, or is it only sent to the LLM API? I do want to acknowledge that locally, within your laptop or inside of Workbench, that context is stored in logs. Those are, again, your logs. They don't go to Posit or to an external service; they're specific to your environment. That's just a side effect of you asking questions, and there has to be a way to debug it for you locally. You can choose to include portions of those, expose them at a debug or notify level, and expand or decrease the logs, but that's, again, your own data.

Another question: visibility of the plots is really cool, but I can see value in a use case where you don't want the large language model to see anything but code. Can you wall off plots? So, you can turn off specific tool calls; there's some UI elements for that. And I would say long-term, while we're building out Positron Assistant and really excited about it, we know that there are needs for both administrative controls for a group as well as turning certain features on or off. So you can make use of the models in Positron Assistant in a certain way, but maybe not expose everything on certain projects.

How is Positron Assistant different from using the ellmer package? What I would say is that by being natively built into Positron, it doesn't rely on an R runtime and an active R session to execute or send the prompts back and forth. ellmer is really powerful as a low-level package for building into things like Shiny applications with shinychat or querychat or other approaches like that. But ultimately, having it native within the IDE means you just have to worry about your login credentials and then you're good to go. You don't have to worry about building up all the context or managing prompts or all of that; it's batteries included.

What are the benefits of using Positron Assistant over GitHub Copilot in VS Code? So one, I want to mention again, we are integrating Copilot Chat into Positron Assistant; that's some of the work we're exploring right now. And ultimately, Copilot is trying to solve a problem that is general software development. It's remarkable, just like there are other AI tools out there that are really, really good. We're basically trying to expose those same models with additional context, within Positron and the other surfaces for doing data science work. We're not trying to say that Positron is only there for AI capabilities; it's built for a data science user and all the things you need, like your console, your environments pane, your plot pane, the ability to run applications and explore data, while also having a fantastic AI experience that makes use of the best models and the best context it can.

The Shiny chat demo in Positron just sold me on making the switch. That's great, yeah. We're really trying to make this a fantastic experience and really integrate it into the workflows that we think are important. But as I'm calling out here in section two, there's also going to be novel context or novel approaches you might have within your organization, within your company, or even just in your personal approach. So we do want to make this extensible via text files, via extensions, chat participants, or even external context via MCP, the Model Context Protocol.

VS Code has an agentic option for Copilot; Positron also has that. You can switch between modes of ask, edit, or agent. Those have different levels of tools available to them, different things they can do within the IDE, and different power. And you can even define your own custom chat modes, where you might define one that has your own addition to the prompt that you want to add in. So that's also supported.

Positron, let's see, will it be available as an extension for other IDEs, or only available within Positron? Positron Assistant is natively built into Positron. So it really is only functional within Positron and makes use of all the Positron-specific APIs. It doesn't really make sense for us to extract that right now; it's a tool for Positron.

But again, you can layer other tooling on top that you'd want to use. Positron Assistant works alongside your shell and terminal, and I've heard of customers using CLI tooling like Claude Code alongside Positron Assistant within the IDE. So you get the best of data science-specific chat, and you also have really, really strong Claude Code integration for those agentic, build-me-an-entire-project tasks with more of a software development focus.

Will Positron come at an added cost? Positron is free on desktop, so you can just download and try it today. And if you have Posit Workbench, Positron is available through your existing subscription, so there's no additional charge there. Positron Assistant is also free, but you do have to pay somebody to act as your model provider. Importantly, that's not Posit; we're not trying to receive your data. It's whatever provider you want, and today that's Anthropic, but we are working on additional model providers.