Resources

Get the Latest on Posit's Commercial Products | posit::conf(2025)

Get the Latest on Posit's Commercial Products

Speakers: Kelly O'Briant, Tom Mock, Joe Roberts, Kara Woo, Alex Chisholm, Chetan Thapar, Steve Nolen

Abstract: Join us for an overview of the latest developments across Posit's commercial product ecosystem. This session will cover Posit Workbench, Package Manager, Connect, Connect Cloud, and our growing portfolio of managed services, including Snowflake and beyond. Hear directly from the product managers and engineers who are building these tools, and get insights into what's coming next.

Chapters:
0:00 Introduction
3:30 Audited jobs, Positron Pro sessions, and GenAI in Workbench
14:20 Auth and integrations with RStudio Pro sessions in Package Manager
21:00 An intro to Chronicle for Posit Team
25:57 Building container images in Connect
39:50 Organization plans in Connect Cloud
49:00 A Snowflake Native App offering for Connect and Workbench
1:02:00 An intro to Posit Team Dedicated

Subscribe to posit::conf updates: https://posit.co/about/subscription-management/

image: thumbnail.jpg

Transcript#

This transcript was generated automatically and may contain errors.

Today you're going to hear from an illustrious group of colleagues here at Posit that I've collected and they're going to talk about some of the newest and most exciting things that we've been working on around Posit Team and our related offerings. To kick this session off, I want to talk a little bit about why we've built these things in particular, Workbench, Package Manager, and Connect. And I want to ask and answer the question, why do they exist?

Well, one of the reasons they exist is our mission. If you're here at Posit Conf, you likely know about our mission at Posit, which is to create free and open source software. We fund this mission through the sales of our commercial product offerings.

And I personally, and really all of us, hope that this mission resonates with you, but it doesn't answer my question, which is: why do we build what we build? If our goal is to create products that are loved and valued by all our customers, we have to start, I think, by acknowledging the value that you can get from open source at no cost. You don't need to buy anything from us to get this value. You don't even need to be an expert in data science or R or Python to start getting value from open source. So how do we add on top of that? It's a very high bar. How do we build things that can add value on top of something that's already so powerful?

Well, our approach is to go about this by solving problems for organizations, particularly organizations that have questions about how to realize the value of open source at scale. And these questions tend to come to us in two flavors. There are the practical concerns that are coming from our users, and those folks want to know things like: how will they get access to data? How will they make use of AI? How do they distribute their work, automate things, collaborate with others, and integrate across other systems that they need and want to use at their organization? And then we've got compliance concerns. These come from folks like IT and SecOps groups, who want to know: what is this that's happening inside of our organization? How do we secure and support these systems? How do we audit and govern? How do we control usage? How do we control costs? How do we manage the risk that this is going to bring into what we're doing?

So you're going to hear throughout the session today these themes play out in basically every talk, and we've got many talks in this session, because this is the reason why our commercial products exist. They aim to be the solution or a collection of solutions to these concerns. So speaking of our session today, it's going to be jam-packed. There likely will be no time for questions at the end. So please come see us outside this room at the lounge. We've got a bunch of product booths, and we'll all be there today and tomorrow to talk about what you'll see here today, other things that you don't see us talk about today, and whether you're a good fit for our commercial products. So without any further ado, let me bring up Tom Mock to talk about Workbench.

Posit Workbench and Positron

Thanks so much, Kelly. All right. My name's Tom Mock. I'm a product manager for the Authoring org, so I oversee the Posit Workbench product, as well as the individual IDEs, RStudio and Positron, and our GenAI integrations into those IDEs. I'll be talking a little bit about all of the above: Workbench and the different IDEs we build into it.

To Kelly's point, I also want to start a little bit with just the pillars of Workbench. Again, like, why does it exist? Why would you move off your laptop? What kind of things are we building that provide value there? So ultimately, Posit Workbench provides three main pillars. There's scalability, right? My local laptop's just not big enough. My data is too big. I need to parallelize more. I just need more computational resources that are available. Secondly, I might just need, you know, different data access, right? I'm not allowed to access specific data sets because of patient information or privacy rules, or just it's really complicated to try and access certain data sets on my local laptop. So Workbench has what are called managed credentials, or really OAuth integrations, so you don't have to worry about access tokens or making requests for data. You can pass through your data governance rules into Workbench.

And the third, as Kelly mentioned, one of our users is actually an administrative user, right? Not the data scientist doing the work, but the administrator who's providing a Workbench for many data scientists. How do you manage hundreds of computational environments all together without losing your mind? You need observability, central governance, central controls, and to kind of adhere to all of your compliance needs. What I have here on the page is really the Workbench homepage, and we just did a major overhaul of that homepage to make it much more project-oriented. Again, adhering to some of the basic standards that we say are best practices. Jonathan talked about how project-oriented workflows are great and ideal, and we want to support that in Workbench as well, but you can always just create a session that's not even attached to a specific folder if you just want to get started quickly.

Workbench also supports, you know, all these IDEs that you already love, right? So you already have RStudio and JupyterLab, we support VS Code, and we've now moved Positron into a session type within Workbench. As far as new features we've added, we've also expanded the ability for admins to observe and understand how the Workbench service itself is operating. So we had Prometheus metrics in preview before. We've now moved them into general availability, and this allows admins to understand: is the server healthy? How is the user experience there? What are the types of requests coming in? Which IDE sessions are in use? Really just getting a snapshot of real-time observability of the product. This can be ingested really anywhere that can take Prometheus metrics. The dashboard I'm showing here is just through Grafana, but you can use really any of the APM tools that you'd like to.
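As a rough illustration of what consuming these metrics looks like outside of a dashboard tool, here is a minimal Python sketch that parses the Prometheus text exposition format. The metric names in the sample payload are invented for illustration; they are not Workbench's actual metric names, which you would find on your server's metrics endpoint.

```python
# Minimal parser for the Prometheus text exposition format.
# The sample payload below uses invented metric names purely for
# illustration; a real server's /metrics endpoint defines the real ones.

def parse_prometheus_text(payload: str) -> dict:
    """Return {metric_name: [(labels_dict, value), ...]} from exposition text."""
    metrics = {}
    for line in payload.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):  # skip blank and HELP/TYPE lines
            continue
        name_part, _, value = line.rpartition(" ")
        if "{" in name_part:
            name, _, label_blob = name_part.partition("{")
            labels = {}
            for pair in label_blob.rstrip("}").split(","):
                key, _, val = pair.partition("=")
                labels[key] = val.strip('"')
        else:
            name, labels = name_part, {}
        metrics.setdefault(name, []).append((labels, float(value)))
    return metrics

sample = """\
# HELP example_active_sessions Active IDE sessions (invented metric name).
# TYPE example_active_sessions gauge
example_active_sessions{ide="rstudio"} 12
example_active_sessions{ide="positron"} 7
"""
parsed = parse_prometheus_text(sample)
print(parsed["example_active_sessions"])
```

The same parsed structure could then be forwarded to whatever APM tool you prefer; in practice, Prometheus itself or Grafana would scrape the endpoint directly.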

And ultimately, again, this is part of that value we're trying to provide: rather than a bespoke, DIY setup with all these different environments we have to manage, it's really just one service that I'm looking at for all these different users. Sticking within this realm of observability and auditing, we've also exposed some of these features to end users. We've long had the ability to launch what are called Workbench jobs for scalability reasons, saying, hey, I'm running this script and I want to parallelize it further. Workbench can provide access to things like Kubernetes or Slurm or other HPC clusters for scaling purposes and just going larger.

This is a variant of that called audited Workbench jobs that actually captures a lot more additional metadata. For some of our customers and things like regulated industry, they have a legal requirement to be able to say, hey, I can reproduce this analysis, or I at least need to be able to understand how I got these results. It's not enough to just have the source code, I need to understand the environment that was used and capture that in a way where it wasn't modified after the fact. So this gives that power to end users to just do their work, but do so in a way that gives them the ability to capture more of that metadata they need to stay in compliance while staying productive.

One of the most exciting things you'll probably hear at the conf, and hopefully in many different sessions this week, including the keynote that Jonathan gave, is that we've been working on a brand new data science IDE called Positron for over a year now. We've moved Positron into general availability both on desktop, for free, and inside of Posit Workbench as a session type. So as of the 2025.09.0 release that went out at the end of last week, this brings the power of Positron directly into Workbench and all the capabilities there.

Ultimately, Positron provides a home for both R and Python users. It's a data science IDE as opposed to a single-language IDE, so you have the ability to launch R or Python sessions within the Positron IDE, and you could actually use both at once. You can open up a console and run code interactively, line by line, or run entire scripts, and you can also work with Jupyter notebooks inside of Positron if that's a workflow you like. And we've built in specific capabilities, like a brand new data explorer, that allow you to merge some of the UI-driven tools that make you productive with the code-first data science that we're emphasizing there.

Lastly, a lot of our users end up wanting to distribute their knowledge to others, right, their business users within the company. So you can of course build your Shiny apps in R or Python, or your Streamlit, Dash, Flask, and FastAPI apps; all these frameworks work inside of Positron. And there's a quick run app button to take a source file you're working on and immediately preview it in the viewer pane of Positron. We also have a publisher extension allowing you to immediately publish it out, really in seconds to minutes, to Connect once you're ready to distribute it within your company.

And then lastly, moving beyond these core capabilities, a lot of where Positron is able to innovate is by adding in new GenAI integrations. Of course, Positron wants to stand alone as a great data science IDE, but in the modern world, you have to have these AI capabilities to stay productive. So we've taken the core data science principles and exposed them in a way that large language models can actually access some of the same resources that you have. Things like: what is the active R console or Python console? What version is it? What packages are loaded? What was the input and output that was in my console just now? Help me fix that. The model has access to all these little component parts, so it's going beyond just the files on disk to the ephemeral nature of you building up these scripts or doing analyses together.
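To make the idea concrete, here is a small, purely hypothetical Python sketch of what bundling that ephemeral session state into structured context for a model might look like. None of the field or function names here come from Positron's real internals; they only illustrate the kind of state (active console, loaded packages, recent console I/O) described above.

```python
# Hypothetical sketch of assembling IDE session context for an LLM prompt.
# All names here are invented for illustration; they do not reflect
# Positron Assistant's actual APIs.

def build_session_context(language, version, loaded_packages, last_console_io):
    """Bundle ephemeral session state into a structured context block."""
    return {
        "active_console": {"language": language, "version": version},
        "loaded_packages": sorted(loaded_packages),
        "recent_console_io": last_console_io,  # most recent input/output pair
    }

def render_context_for_prompt(ctx: dict) -> str:
    """Flatten the context dict into plain text a model can read."""
    lines = [
        f"Active console: {ctx['active_console']['language']} "
        f"{ctx['active_console']['version']}",
        "Loaded packages: " + ", ".join(ctx["loaded_packages"]),
        f"Recent console I/O: {ctx['recent_console_io']}",
    ]
    return "\n".join(lines)

ctx = build_session_context(
    "R", "4.4.1", ["ggplot2", "dplyr"], "log(-1) -> NaN with warning"
)
print(render_context_for_prompt(ctx))
```

The point of the sketch is simply that the assistant's context is assembled from live session state, not only from files on disk.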

Beyond those core capabilities of the Positron Assistant, ask, edit, answer, code generation, we're also exploring agent workflows with tools like Databot. Here I'm showing a GIF. Databot is a way to speed up your exploratory data analysis with an LLM alongside you. Rather than thinking of it as, oh, it's generating code that I then click run on and then look at the results, this is a human-in-the-loop agent where you're asking questions in natural language and it's generating R or Python code. Very importantly, it shows you all that code, but really the output is what matters as you're looking alongside. You can always go back and inspect the code, or summarize all the code into a separate document at the end of your session. Again, we're trying to see how we can stick true to our code-first data science principles while still empowering people to speed up their workflows.

Positron Assistant is also extensible. I'll leave it on this slide for now, but ultimately, while we're adding in a lot of novel context that's specific to data science, we know from many of our end-user customers that they also want to add in things that are specific to their business. Some people might say, oh, you just fine-tune a model and then it has all this context, but that's really expensive, time consuming, and has to be done very frequently. So how we're approaching a lot of this is allowing you to add in things like MCP (Model Context Protocol) servers to bring structured data back into your assistant and have it understand: here are our standards, our coding principles, the ways that we approach building out this code. You can also extend it with custom chat modes. Maybe, rather than the usual ask-and-answer mode, you want to give it a subset of the tools that are available for it to call and the context it has available, or you want to add to the system prompt to guide its behavior for your specific purpose. So we're building out Positron Assistant. It's still early days, but we're making a lot of progress, and I'd love to talk to you in the lounge or otherwise if you have questions about that.
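As a hedged sketch of the "custom chat mode" idea, restricting the assistant to a subset of tools and extending its system prompt, here is a generic Python illustration. The tool names, prompt text, and data structures are all invented; this is not Positron Assistant's real configuration mechanism or the MCP wire protocol.

```python
# Hypothetical sketch of a "custom chat mode": a named subset of tools
# plus additions to the system prompt. All names here are invented and
# do not reflect Positron Assistant's real configuration.

BASE_SYSTEM_PROMPT = "You are a data science assistant."

AVAILABLE_TOOLS = {
    "run_r_code": lambda code: f"(would run R: {code})",
    "inspect_dataframe": lambda name: f"(would inspect {name})",
    "search_docs": lambda query: f"(would search docs for {query})",
}

def make_chat_mode(name, allowed_tools, prompt_addition):
    """Restrict the assistant to a tool subset and extend its prompt."""
    tools = {t: AVAILABLE_TOOLS[t] for t in allowed_tools}
    return {
        "name": name,
        "tools": tools,
        "system_prompt": BASE_SYSTEM_PROMPT + " " + prompt_addition,
    }

# A mode that can look things up but never executes code.
review_mode = make_chat_mode(
    "code-review",
    ["search_docs"],
    "Follow our internal style guide when commenting on code.",
)
print(sorted(review_mode["tools"]), review_mode["system_prompt"])
```

The design point is that the mode, not the model, decides which capabilities and which standing instructions are in scope for a given workflow.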

Positron AI roadmap

And then closing out, I want to talk a little bit more about the forward-looking view, our roadmap of what we're actually working towards. So, what's coming next with Positron specific to AI? One, we've started working on integrating GitHub Copilot Chat, this is one of the biggest requests we've ever had at the company level for IDEs. So, we'll be integrating GitHub Copilot Chat into Positron Assistant, so you get all that data science specific context, we're able to make use of your existing Copilot Chat licensing, right? So, don't worry about token use, you just kind of make use of your monthly billing. Additionally, we want to expand the general model providers that are available, right? Where Positron Assistant is a client and you're supposed to bring your own model provider backend. For some enterprises, they might have an agreement with Anthropic and they're good to go, right? We support Anthropic today. But long-term, again, we want to expand this to companies and services such as AWS Bedrock or Azure, Databricks, Snowflake, right? All these different cloud providers that have model serving capabilities, we want to integrate those in. And as far as short-term roadmap, Databricks, Snowflake, AWS Bedrock, and OpenAI compatible endpoints, both for self-hosting models or for arbitrary kind of routing services, those are what's on our short-term roadmap for additional providers.

With that, I'll close out my own section and have my colleague, Joe Roberts, come up. He's going to be talking a lot more about building on top of Workbench and Connect with his products.

Package Manager authentication

So, hi. I wanted to talk about one of the newest feature areas that we just released more support for in Package Manager, one that we've been excited about because it's long overdue and has been one of the most popular requests from many of our customers: support for authentication in Package Manager.

So there are several reasons why authentication can be useful in any product, and especially in Package Manager. One, being able to restrict access to repositories that shouldn't be used in a public context, even within your own company, to specific teams or groups. Two, the ability to grant more granular access to individual trusted users, your power users, who may want to publish packages directly to Package Manager, and to delegate other administrative tasks rather than forcing all of that to be done by an IT administrator. And, of course, being able to securely integrate with other tools outside of Package Manager in a way that you really shouldn't do in a more public context.

So we've introduced three types of authentication with our latest release: API tokens, which we started on earlier this year; the newest addition, single sign-on; and identity federation. I'm going to talk briefly through all three of these areas and the value they bring to the product.

Starting with token authentication: this is where an administrator manually creates an authentication token, generates it from Package Manager, and shares it with the user to then use to authenticate. It's really useful for shared or server-level authentication. For example, if you want to authenticate your Workbench or Connect server as a whole to Package Manager, rather than at the per-user level, with long-lived keys, or for other services outside of that. The nice part about token authentication is that it's compatible with pretty much all existing tooling. You can just attach it as the basic credentials used to access Package Manager, so existing R and Python client tooling makes it really easy to connect.
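Since a token attached as basic credentials typically ends up embedded in the repository or index URL, here is a small sketch of constructing such a URL in Python. The hostname, repository path, token, and the `__token__` username are all placeholders (the fixed-username convention is borrowed from PyPI-style tooling); check Package Manager's own documentation for the exact form it expects.

```python
# Sketch: embedding a Package Manager token as basic credentials in an
# index URL, e.g. for pip. The host, repo path, token value, and the
# "__token__" username convention are placeholders, not documented values.
from urllib.parse import quote, urlsplit

def authenticated_index_url(base_url: str, token: str,
                            user: str = "__token__") -> str:
    """Return base_url with user:token basic credentials inserted."""
    parts = urlsplit(base_url)
    creds = f"{quote(user, safe='')}:{quote(token, safe='')}"
    return f"{parts.scheme}://{creds}@{parts.netloc}{parts.path}"

url = authenticated_index_url(
    "https://packagemanager.example.com/pypi/latest/simple",
    "abc123",  # placeholder token, never hard-code real ones
)
print(url)
```

In practice you would keep the token in an environment variable or a credential store rather than writing it into configuration files.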

But there are certainly limitations: having to manage all those keys manually can definitely be a headache. So, with the latest release, we just introduced single sign-on using OpenID Connect, and this allows you to log in with a corporate identity provider that supports OpenID Connect, such as Okta or Microsoft Entra ID, formerly known as Azure Active Directory. These are the two implementations that we've really focused on, but any OpenID Connect compliant identity provider should work fairly well here.

A big advantage of that is you can manage your user and group assignment in your identity provider, rather than configuring your users and groups within Package Manager itself, and then just map those groups to the various permission scopes you want to have in Package Manager. A big advantage on the user side is that it allows for seamless login, whether you're logging in directly through the web UI in Package Manager, through the Package Manager command line interface, or through package installation tools themselves, which can just use standard OAuth-type workflows to log in.

And then finally, we have identity federation, which is a way of configuring Package Manager to accept tokens issued by other OpenID Connect providers. This covers cases like running a non-interactive job in a CI/CD system. Say you're using GitHub Actions to build things: you want to use the credentials that GitHub provides to authenticate with Package Manager, and have Package Manager trust those credentials, rather than having to go through separate workflows to generate and obtain new credentials just to access Package Manager.

And once you have that federation established, you can execute really any API or CLI commands from those external systems. Things like remotely publishing packages built by your CI/CD pipeline directly into Package Manager at the end stage of that build process. Or building automated workflows: say you have a workflow to request approval for new packages to be added to your Package Manager repo, and you want the automated process to go and add those once the workflow finishes. Configuring Git builders, managing package blocking rules. Pretty much everything you can do in Package Manager can be done remotely via these API commands as well.
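As a rough sketch of the CI publishing flow described above, here is Python code that assembles (but does not send) an API request authenticated with a federated bearer token. The endpoint path, request body, and header scheme are assumptions for illustration; they are not Package Manager's documented API.

```python
# Sketch: preparing an authenticated Package Manager API request from a
# CI job using a federated OIDC token. The endpoint path, body shape,
# and bearer-token scheme are assumptions, not the documented API.
import json
from urllib.request import Request

def build_publish_request(server: str, repo: str, oidc_token: str,
                          package_file: str) -> Request:
    """Assemble (but do not send) a hypothetical package-publish request."""
    url = f"{server}/api/repos/{repo}/packages"       # hypothetical endpoint
    body = json.dumps({"filename": package_file}).encode()
    return Request(
        url,
        data=body,
        method="POST",
        headers={
            "Authorization": f"Bearer {oidc_token}",  # CI-issued token
            "Content-Type": "application/json",
        },
    )

req = build_publish_request(
    "https://ppm.example.com", "internal",
    "fake-oidc-token",            # in CI this would come from the runner
    "mypkg-1.0.tar.gz",
)
print(req.full_url, req.get_header("Authorization"))
```

In a real GitHub Actions job, the token would be requested from the runner's OIDC endpoint at run time rather than stored anywhere.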

So the nice part about the way we built this is that you can actually combine these authentication methods; you don't have to just pick one. For example, you can leave some repositories open to public anonymous access, so those continue to work the same way they always have with Package Manager. You can include token access for legacy systems or tools that need to reach authenticated repositories. You can integrate single sign-on to give user-level access to those who need elevated privileges in Package Manager. And then you can add federated identity providers for that seamless integration with external systems.

We're very excited about this. It's all available now in the latest release of Package Manager that just went out last week. Most of these features are in the Package Manager Advanced tier, though simple token-based repository authentication is available at the Enhanced tier. Right now on the client side, browser and remote CLI login are available. Python package tooling works really well with pip and uv if you're using an authenticated Python repository. Coming soon is deeper R client support. Python is first, and R is coming soon, mainly because the R engine itself doesn't have any sort of built-in authentication mechanism, so we're having to build a lot more tooling around that to make it seamless. But we're working on integrations that will make this work very seamlessly with Workbench, Connect, and Positron, as well as on the open source tooling side through integrations with things like the pak package to make that process simpler, too.

Chronicle: usage insights for Posit Team

I'm going to shift gears now to another initiative we've been working on at Posit that we're calling Chronicle. Chronicle is a new tool that we've developed to help all of our Posit Team customers understand their usage of the products and gain insights, to make sure you're actually getting the value you intended.

So why did we build Chronicle? Well, many of our commercial product customers have told us that it's hard to get useful reports on usage. Data is scattered around across lots of different features, but you really want to understand, again, what usage and value you're getting out of the products. So with Chronicle, we're really focusing on exploring that long-term usage. It's not just what's happening today, what's the current state of my system, but really: how has that usage changed, for example, over the past year? Is my usage growing? Are we actually seeing a lot more users? Are we seeing a lot more use of certain applications in Connect? Really just getting a good, holistic sense of our long-term value.

And so what Chronicle does is centralize all of that data across your Posit environments, whether that's a single Connect server or multiple clusters and deployments across dev, test, and production environments, so that you can see one unified view of all of it, and make it easy to view that data and build reports around it.

So as a quick overview, imagine you have your Workbench and Connect environments here. For each of those, all you need to do is install what we call a Chronicle agent, which is just a small telemetry agent that periodically scrapes the products' APIs and metrics endpoints to gather data, and then sends it to a central Chronicle server. The Chronicle server is just a single server that needs network access from all of the servers you want to supply data from in your environment. It takes all that data in, collects it, organizes it, and manages it into a much more tabular format that makes it well-suited for building reports, analyzing, and really digging into the data the way many data scientists would want to.
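The agent pattern described here, periodically scrape a metrics source, normalize the results, and forward them to a central collector, can be sketched in a few lines of Python. The field names and the fetch/send callables below are stand-ins for illustration; they are not Chronicle's real protocol or schema.

```python
# Sketch of the scrape-and-forward agent pattern described above.
# The row schema and the fetch/send functions are stand-ins, not
# Chronicle's real protocol.
import time

def scrape_once(fetch_metrics, send_to_server, hostname):
    """One scrape cycle: fetch raw metrics, tag them, forward them."""
    raw = fetch_metrics()                      # e.g. hit a metrics endpoint
    rows = [
        {"host": hostname, "ts": int(time.time()),
         "metric": name, "value": value}
        for name, value in raw.items()
    ]
    send_to_server(rows)                       # e.g. POST to the central server
    return rows

# Demonstration with in-memory stand-ins for the network calls.
collected = []
scrape_once(
    fetch_metrics=lambda: {"active_sessions": 4, "cpu_pct": 61.5},
    send_to_server=collected.extend,
    hostname="workbench-01",
)
print(len(collected), collected[0]["metric"])
```

A real agent would run this cycle on a timer and handle retries; the key idea is that each row arrives at the central store already tagged with its source host and timestamp, which is what makes the long-term, cross-server reporting possible.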

So that data goes into a data store that ultimately ends up as a collection of Parquet files, which you can use within Workbench, Connect, or any other environment that can read the Apache Parquet format. And then from that, the reports are there. We're working on providing some reporting out of the box, but you can also build your own custom reporting tooling around it, or, as we like to do these days, hook it up to your favorite LLM to crawl the data and surface more interesting insights as well.

So a couple of questions. What happens to my data? We saw the agents in that scary diagram there. No, all your data stays inside your environment. This is not a telemetry agent where Posit is collecting and reading all of your data. It stays in the data store that you decide on, whether that's a local volume or an AWS S3 bucket, and you get to choose who has access to it and what they have access to. Best question: how much does it cost? Absolutely nothing. This is a tool that we are providing to all of our existing Posit Team customers. We want to make sure that you are getting the value out of the products that you buy. So there are no licensing fees and no additional license hoops to jump through. It's available to all of you who have any of those Posit products today.

It's very easy to install. Most of our customers who have tried it out so far have been able to get up and running in an hour, depending, obviously, on how many servers you have to get the agents deployed to. But in general, it's really straightforward; we focused entirely on making this minimal-configuration and minimal-pain to install. And best of all, as I mentioned earlier, you're not limited to the reports we provide. For those who really want to dig into it, the data is there; we scrape a lot of data out of all of the products. It works with Workbench and Connect, and Package Manager support will be coming early next year. And we'll be able to get you access to everything you need to use the data how you want to use it.

So I encourage you to check out our Getting Started Guide; the URL is there, or use the QR code. And with that, I will hand it off to Kara Woo, who's going to talk more about the exciting new features in Connect.

Connect containerization

All right. Hi, everyone. I am Kara Woo. I'm a software engineer on the Connect team. And I'm really, really excited to be here today to talk about some of the new features that we have been working on. Particularly because they are features that I would have loved to have had in a past life when I was a Connect customer.

And the feature set that I'm excited to talk to you about today is the ability of Connect to build container images within Connect itself. So it's 2025, and in 2025, people want to be able to run their software in containers. There are a lot of benefits to doing this. One is stability: because container images are static, standalone artifacts, they can continue to run even long after package versions have gone out of support or things have changed on your system. You can be sure that your container image is doing the exact same thing that it was. They're also important for repeatability, because you can know, again, that what you're running right now is exactly the same as what you ran five years ago. They also provide traceability: as a standalone artifact, you can actually inspect the contents of the container image to see exactly what was run and debug things at a deeper level. And lastly, if you're an organization that is already using containers for other services or products that you build, then you may already have useful tooling around containers, such as security vulnerability scanning or other tools that hook into the now very robust container ecosystem.

But can I get a show of hands of who in this room loves building and maintaining Docker images? Okay. So we have three weirdos that the rest of you can know to watch out for as you go through the conference. No. So building and maintaining Docker images can be a huge pain and very onerous. And so what we're really excited to share is that soon Connect will be able to build container images of your deployed content for you with no Docker image required.

So as you may know, Connect already has the ability to run content in containers with off-host execution. And so I'll explain a little bit about how that works today and how this feature takes it to the next level. If I've got a Shiny application running in Connect like this one, in order to get the code that I have written on my local laptop to Connect, I deploy a bundle of my source code and a manifest that describes my environment to Connect. And Connect keeps track of those bundles so that it can restore my environment, run the content, and also roll back to previous versions if needed.
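To give a feel for what a bundle's manifest records, here is a simplified, illustrative sketch expressed as a Python dict and serialized to JSON. This is not the exact schema that Posit's publishing tools write; the field names and values are condensed for illustration only.

```python
# Simplified, illustrative sketch of the kind of information a deploy
# manifest records about the environment. This is NOT the exact schema
# that Posit's publishing tools write; fields are condensed for clarity.
import json

manifest = {
    "version": 1,
    "metadata": {"appmode": "python-shiny", "entrypoint": "app.py"},
    "python": {"version": "3.12"},
    "packages": {          # pinned dependencies Connect would restore
        "shiny": "1.0.0",
        "pandas": "2.2.2",
    },
    "files": ["app.py", "requirements.txt"],
}

# The manifest travels with the source bundle as a JSON file.
serialized = json.dumps(manifest, indent=2)
restored = json.loads(serialized)
print(restored["metadata"]["appmode"], len(restored["packages"]))
```

The essential point is that the bundle pairs the source code with enough pinned environment information that the server can reconstruct the environment, which is what makes rollbacks and, later, containerization possible.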

So this is the view in Connect of the bundles of this application that I have deployed. When I'm running off host execution, I will specify an execution environment that I want to use, which has a container image that has R and Python installed into it. And I know that if I deploy my bundle of content, it's going to go up to Connect and something's going to happen in between that takes that execution environment image and produces my application running in Connect. But what happens in the middle is a little bit of a black box. And there's a lot of moving pieces to it, but at a high level, the important thing is that the package cache, so my R package dependencies or my Python package dependencies, as well as my content bundle that I have deployed, basically get mounted to a container, meaning that they are made available, but they are not part of the image that is being launched. So we take the execution environment image, we sort of attach all of this stuff, and we get a container that's running in Connect with my content. And this works really well, but it is ephemeral, because all of this stuff has to happen to make it work together, and if I come back a couple months from now, maybe the execution environments that are configured in my Connect instance have changed, and so things may just be slightly different.

So with the new containerization feature that we've been working on, Connect will actually take that execution environment image and build an entirely new image that contains the dependencies and the content bundle within it, and push it to a container registry that I, as a Connect customer, own and maintain. Connect will then be able to run that image more or less as-is to serve the content. What is really nice about this is that the result is very self-contained: because all of the content's dependencies are within the image, execution environments may come and go, but that content is going to stay stable.

So I'm going to now risk my life with a live demo of how all this is going to work. Here's that same application running in Connect, and if I go over to my bundles, I can see I've got several bundles deployed. The user experience for containerizing a bundle is to select the bundle that you want to containerize, click containerize, and then it's going to prompt with a confirmation, like, are you sure you want to do this? That's because this containerization operation cannot be undone; it's a best practice to keep these images immutable and static. So I will accept this and confirm, and that kicks off a container build. It runs through a bunch of logs, installs my package dependencies and my content, builds everything needed for this content image to run, and then pushes it to a container registry. At the end, it launches my Python Shiny application, and you can see in the bundle UI that this bundle now shows up as containerized.

Everything else about this application looks exactly the same, but it is running that containerized bundle now instead of using the execution environment image, and I can prove that by removing all of the execution environments from the system and showing that this content still continues to run. So I have one execution environment here. I'm going to go ahead and delete it. And now I'll get a message saying that uncontainerized content is not going to be able to run on the system, but if I go back to my application, it still runs fine.

Now, as I mentioned, these images are getting pushed to a container registry. For security reasons, we're going to recommend that the registry only be accessible from Connect, because the registry doesn't have the knowledge that Connect has of who is permitted to access what content. But at the end of the day, it is just a container registry, and so it is possible to pull that image and run it locally as you would any other.

So some of you at this point may be looking at all of this and wondering: but wait, where's the Dockerfile? Where is the Dockerfile that defines the image that was built? And the answer is, there isn't one. For those of you who are less familiar with this, a Dockerfile is the file you would normally need to define a container image: it says what sort of base image you're building from, what packages you need to install, what system dependencies you need, how to copy over the source code of your application, and what command launches your application when the container runs. This is a very simplified picture, and these files can get really long and complicated, and we did not feel that a publisher in Connect would want to, or should have to, manage these Dockerfiles. And so there are no Dockerfiles. We have developed this containerization feature using Cloud Native Buildpacks, which generate container images just from the contents of the bundle and a little bit of metadata that Connect adds to it. So no Dockerfile required. Instead, we're using this open source specification that can build container images based off of application source code.
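For readers less familiar with Dockerfiles, here is a simplified sketch of the kind of file the speaker is describing, the file you would otherwise need to hand-maintain. The base image, system packages, and launch command are illustrative placeholders for a Python Shiny app, not anything Connect actually generates:

```dockerfile
# Illustration only -- Connect's buildpack approach means you never write this.

# Base image to build from
FROM ubuntu:22.04

# System dependencies
RUN apt-get update && apt-get install -y python3 python3-pip libcurl4

# Application package dependencies
COPY requirements.txt /app/requirements.txt
RUN pip3 install -r /app/requirements.txt

# Application source code
COPY . /app
WORKDIR /app

# Command that launches the application when the container runs
CMD ["shiny", "run", "--host", "0.0.0.0", "app.py"]
```

Every one of these steps has to be kept in sync with the application by hand, which is exactly the maintenance burden the buildpack approach removes.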

So the way this works is that when you containerize a bundle, we run the content bundle through what's called a detect phase, which determines which buildpacks are applicable to build an image. If you're deploying a Python application, there will be some Python version information in that bundle, and there may be a requirements.txt. If you're doing an R application, there will be R version information, and there may be an renv.lock file. You might have both if you're doing a Shiny app that uses reticulate. So we run through this detect phase to determine which buildpacks are applicable, and then we go through a build phase that runs the appropriate buildpacks for the content being built. If there's Python content, it's going to configure Python and pip install the requirements. If there's R content, it's going to configure R, install renv, and have renv install the packages. So based on the bundle itself, we can build these images without an external file that defines how they need to be built. We've written our own custom buildpacks that can do this for the content types that we're supporting on Connect.
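As a rough illustration of the detect phase described above, here is a minimal sketch in Python. This is not Connect's actual buildpack code; the logic simply checks for the well-known dependency files the speaker mentions (requirements.txt for Python, renv.lock for R):

```python
from pathlib import Path

def detect_buildpacks(bundle_dir):
    """Decide which buildpacks apply to a content bundle by looking
    for well-known dependency files. Simplified sketch only: the real
    detect phase also inspects version metadata in the bundle."""
    bundle = Path(bundle_dir)
    applicable = []
    # Python content typically carries a requirements.txt
    if (bundle / "requirements.txt").exists():
        applicable.append("python")
    # R content typically carries an renv.lock file
    if (bundle / "renv.lock").exists():
        applicable.append("r")
    # A Shiny app using reticulate may match both
    return applicable
```

A bundle containing both a requirements.txt and an renv.lock, like the reticulate case above, would be matched by both buildpacks.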

And all of the system dependencies are going to come from the execution environment that's being used for this content. So if you're able to run the content on Connect with off-host execution, uncontainerized, then it should work the same in the containerized version. We're really excited about the buildpack approach because what you get is a normal OCI-compliant container image with no extra work required from the end user of Connect.

Now, we know that some of you container nerds out there are already building your own images, and you might want to run those on Connect. It's not a feature we're supporting right off the bat, but we do hope to support it someday, and this work lays some of the foundation necessary to make that possible, along with a lot of other really advanced production workflows. For example, you could have a pipeline that uses the Connect API to search for an appropriate execution environment, create one if it doesn't exist, and then push your content and containerize it using that execution environment, all through the API that Connect provides. Another workflow that might prove really beneficial for some organizations is a CI pipeline that pushes content, containerizes it without activating it, and then uses that container to run unit tests or other checks before using the Connect API to activate the containerized bundle.
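The "search for an appropriate execution environment, create one if it doesn't exist" step described above can be sketched as a small helper. This is an illustration only: the record shape (an `image_name` field) and the endpoint path shown in the comments are assumptions, not a documented schema, and the containerization feature itself is not yet released.

```python
def find_environment(environments, image_name):
    """Return the first execution-environment record whose container
    image matches `image_name`, or None if the pipeline should create
    a new environment first. The "image_name" key is an assumed record
    shape for illustration."""
    for env in environments:
        if env.get("image_name") == image_name:
            return env
    return None

# Sketch of how a pipeline might use it (paths and names hypothetical):
# envs = session.get(f"{connect_url}/__api__/v1/environments").json()
# env = find_environment(envs, "registry.example.com/r-py:2025.01")
# if env is None:
#     ...create the environment, then deploy and containerize via the API...
```

Keeping the matching logic as a pure function like this makes the pipeline easy to unit test without a live Connect server.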

So there are a lot of really exciting things that we're hoping to do in the future with this. And this is so new that it is not yet released; it is not yet broadly available in Connect. But we are actively working on it, and we're looking to talk to people who have a use case for this or who are interested in learning more. So if that is you, if you would like to talk more about this and maybe try it out, please get in touch with us. There's a Google survey that you can fill out to get in contact. I will also be at the booth tomorrow afternoon, and I'm around today and tomorrow. We'd love to talk to anyone who's interested. Thank you.

Connect Cloud and hosted publishing

My name is Alex Chisholm, and now my microphone is working, so I can change the volume of that. I'm a product manager on the hosted platform team, so I work on Posit Cloud, Shinyapps.io, and Connect Cloud. I want to shift gears a little bit, because what we've heard so far really revolves around one key point: somewhere, someone needs to be building and maintaining the infrastructure for you to do your data work. And we know that in some cases this is either impossible or, in the best case, not easy.

So about a year and a half ago, we started thinking if we were going to build a next generation online hosted publishing platform, what characteristics would we need to sort of deliver on that ease of use? We wanted to build upon what has happened with Shinyapps.io. So a lot of success there, especially for Shiny applications being put in the cloud relatively quickly. But we also wanted to build out the more robust deployment footprint of something like Connect.

And over this time, we've spoken with a lot of people and organizations, from independent consultants and one-person shops all the way up to large tech companies. And I think you might be surprised, I was a little surprised, that a lot of these conversations have ended with the same theme: we don't want to build or maintain anything. We just want a simple way to take what we have created as data scientists, put it in front of somebody else, and work within the boundaries of our security footprint to see if it's okay to operate in this environment. And this usually came down to three specific reasons. One, we don't have the priority or the time to have somebody build something up or keep an eye on it. Two, we might not have the budget, or the perceived budget, for a dedicated engineer working on these things. And three, in many cases, depending on the organization, they might not have the talent to do it in the first place.

So we wanted to put together a platform that could help, especially for this use case, and keep iterating on features and functionality to make it more enterprise-ready, if you will. We want people to be able to get up and running in minutes, and this is possible today. I'm curious, in the audience, how many of you already have Workbench or Connect or Package Manager? Yeah, so about 60% of the room. And my guess is that, for those who are still evaluating, another question might come to mind: is there an easier way for me to do this? The answer is going to depend on your use case. But right now, beyond just Shiny, we support what we think are the core data science frameworks of the day: Shiny, Quarto, and R Markdown, and on the Python side, Streamlit, Dash, and Bokeh. And we make it incredibly quick to create your account and go out there and deploy something.

If you are part of a workflow that requires public dissemination, we want people to be able to get to your content as well. This is a public profile on the free individual accounts, where I can say, hey, look what I can make, world, or community, or potential employer. And then we wanted to give you some of the same controls that you might be used to if you've been using Posit Connect over the years. This is a slow process; we're going to keep adding features and functionality to get there. But we think this is becoming a more robust tool every month for some people.

The original workflow, for those who have played around with it, was based entirely on GitHub: you had to put your R or your Python code in GitHub with a manifest.json file or a requirements.txt file. Our engineers did an incredible job optimizing this deployment path, and for both R and Python it now takes between 5 and 10 seconds for most deployments, maybe a little quicker. And that's the first deployment. When people come back and look at your content later, you might be used to waiting a while on Shinyapps.io for the app to come back up; on Connect Cloud, we think it's a matter of seconds now. But we also know that not everybody is on GitHub. Some people use different version control tools; others are not quite there yet in their software-engineering-slash-data-scientist mix. So we've added on