
Reproducible data science with webR and Shinylive | George Stagg | Posit
A fundamental principle of the scientific method is peer review and independent verification of results. Good science depends on transparency and reproducibility. However, in a recent study a substantial 74% of research code failed to run without errors, often caused by diverse computing environments. This talk will discuss the principles of numerical reproducibility in research and show how software can be pinned to specific versions and self-contained as a universal binary package using WebAssembly. This ensures seamless reproducibility on any machine equipped with a modern web browser and, using tools such as Shinylive, could provide a new way for researchers to share results with the community.

webR demo website: https://webr.r-wasm.org/v0.3.2/

Shinylive examples:
https://shinylive.io/r/
https://shinylive.io/py/

Documentation:
https://docs.r-wasm.org/webr/v0.3.2/
https://github.com/posit-dev/shinylive
https://github.com/quarto-ext/shinylive
Transcript
This transcript was generated automatically and may contain errors.
Hi, I'm George, I'm a software engineer at Posit, and today I'm going to talk to you about reproducible data science with webR and Shinylive. Before I actually get into the webR and the Shinylive stuff, I do want to talk a little bit about reproducible data science, so that's where I'll start.
One of the really important principles of the scientific method in general is the idea of peer review and verification, the idea that when you do some kind of analysis or scientific work, you can take that work, give it to someone else, and they can redo what you did based on your data and come to the same conclusions. With this, we can verify each other's work, and that makes really good science.
And because of this reason, researchers are increasingly publishing their data, as well as the infrastructure and their source code for software that they use to build their analyses. In fact, in some places, this is actually becoming an institutional or regulatory requirement. In some universities, for example, you may be given funding and told, okay, well, any outputs you produce from this funding must be open access, so anyone can download those results.
Or if you're in a regulatory environment like healthcare or pharma, governments may say to you, okay, well, you have to make your outputs accessible to everyone. And I think that makes sense when you have something like a government, that their outputs should be viewable by the people who are under that government.
So this idea of open science is, there's a whole bunch of sections that's shown in this little diagram here, and you can read more about it at the links on the screen. But really what I'm going to talk about is the idea of this open research data and open software.
And the people who care about this kind of thing, they're taking inspiration and lessons from the world of free and open source software, which has been around for maybe 40 or 50 years now. All those ideas of free and open source software can be applied to the idea of open scientific knowledge.
And this takes us to reproducibility, the idea that if you have some kind of data analysis, it should be created in such a way that other researchers can come to the same conclusions as you.
The reproducibility problem
Now that's nice in principle, but in practice things don't go so smoothly. There was a paper published in 2022 that took a bunch of research outputs, ones that involved R code, took a look at the data and the R code, and tried to rerun those analyses.
And unfortunately, what they found was that 74% of R files included with open research failed to run on the first try. They would error or some other problem would occur.
And I remember a colleague of mine, Winston, showed me this paper, and I remember being really quite disappointed when I saw it. I'm a former academic myself, and I remember thinking that surely we can do better than that. Who are we to say that we find truth when 74% of R code fails to run? It was really quite upsetting.
This is really important, this idea that when we write code, it should be runnable anywhere. And some research projects may be going on for years, with trials running the whole time. So any code that we write should be able to run in the same way for years to come.
So what do I mean when I say the word reproducible? Now, there's no real hard definition of when a piece of software or a piece of analysis is reproducible; there are just many guidelines. One definition that I like is that something's reproducible if anyone can easily go and rerun and verify the result of some computational procedure.
As a follow-on to that, you also get the ability to modify or extend that procedure. Perhaps you go and collect more data, or perhaps you write or reuse software that extends that piece of work and gains new insights. Now, you'll notice that there's a level of subjectivity in that definition: there's the word easily in there. And what's easy for some people is not going to be easy for other people. I'm a software engineer and I might find quite technical things very easy, while a working researcher may not have that knowledge and experience of working deeply with technical software or procedures. And they might find certain reproducibility methods more difficult to handle.
And because of that, there are levels of reproducibility: some things are quite easy to do and will make your software run in more places, while there are deeply technical ways to ensure that you have a reproducible environment everywhere that are much harder to do.
Levels of reproducibility
There are resources to help you with this. Building Reproducible Analytical Pipelines with R is a good starter for anyone who wants to build these reproducible processes in R itself. And there's also The Carpentries. These are really great people: they'll go into institutions like universities and teach good software engineering, and teach how to make software that's reproducible and high quality.
What I'm going to do is I'm going to start from a very light touch, first level of reproducibility to tell you what that looks like. And then I'll sort of get more into the weeds about where these things can kind of go wrong and sort of more technical ways to ensure that your software will run for years to come.
And then once I've talked about that, I'll talk about one of the methods of reproducibility with Shiny, and that's Shinylive and WebAssembly. That'll be at the end. So the first step, if you really want to make sure that your code is reproducible, is to realize that there's no guarantee that any script you write will run successfully anywhere else.
What you're trying to avoid is the all-too-common "it works on my machine". That mindset is the first hump you have to get over. You'll find that the things you have to do to get started here aren't too difficult. It's stuff that we probably all, as R or Python programmers, should be doing anyway. It's basically just paying attention to where your working directory is, where your files are stored, and what data sets are required to run your script and where they live on disk. Bearing in mind, if you have multiple scripts, what order you should run those scripts in. And just avoiding programming errors: making sure that if you run your script from top to bottom, it runs without any errors. And like I say, anyone can do these things. They're not technically difficult, they're just a matter of paying attention to what you're doing.
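One light-touch way to pin down the run order is a single entry-point script that runs everything from top to bottom. A minimal sketch, with hypothetical script names:

```r
# run-all.R: reproduce the whole analysis from a fresh R session
# started in the project directory. Script names here are hypothetical.
scripts <- c("01-clean-data.R", "02-fit-model.R", "03-make-figures.R")
for (f in scripts) {
  if (!file.exists(f)) stop("Missing script: ", f)
  message("Running ", f)
  source(f)
}
```

If someone can clone the project and run this one file without errors, the first level of reproducibility is already in place.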
And even this small effort makes a big difference. In the study I talked about earlier, they found that when they applied these very simple checks to the R code, it went from 74% failing to just 56% failing. And that is a big jump for stuff that we should all be doing anyway.
The next level, once you've written a good solid piece of code, you're checking for errors, and you know where your scripts live, is what I call the language level. Now we're starting to talk about good software engineering: keeping your code in something like Git for source code management; programming defensively, so if something could fail, checking for that failure condition and handling it; organizing your software into modules or packages; and avoiding things like deprecated functionality if you're using packages. And most importantly, at least in my opinion, documentation and tests: really making sure that your software is doing what you think it's doing.
Now this stuff's a little bit harder, but there are packages to help you do it. There's a bunch of packages in R, shown on the screen there, that can help with this language-level software engineering. And I'm sure there are similar packages in Python. But really there's no magic bullet; it's just a matter of making sure that somebody on your team who is writing software has those good software engineering skills.
And again, there are societies and groups working to make this better. The Society of Research Software Engineering has been up and coming in UK universities over the last few years, and there is a US counterpart as well. They'll go into places like universities and advocate for this kind of work, this good software engineering in research. And they're pushing for the idea that research software engineer should be a valid career path for someone who is doing research and wants to make sure that the research software is open and high quality.
Managing software environments
Okay, so even if you've done all of that, at some point you'll probably take your code and you'll want to run it somewhere else. The example I use in sort of the academic setting is if you have some kind of data analysis and you want to send it to someone for peer review. This kind of thing happens in science all of the time. Now, unfortunately, if you do have software, everyone's computational environment is going to be slightly different. They're going to have different versions of software or packages installed, for example.
Here on the left, I've shown some examples of real situations I've seen where different versions of certain packages have caused problems with somebody's code running on someone else's machine, whether it's a version of R or Python. I also saw a case once where the version of Node, a JavaScript runtime, caused a problem. But even if you have the same version of R installed on a machine, you might have different packages. And again, there are tools to help with this. rig and pyenv are R and Python tools, respectively, for managing the versions of R and Python on your machine. And renv and venv are tools that manage the versions of packages for your projects, again for R and Python. They can take a snapshot of the versions of packages installed on your machine and reproduce that environment on somebody else's machine, so that you know they've got the same versions of the packages. So when they run your code, it's going to give the same answer.
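In R, that snapshot-and-restore workflow looks something like this (a minimal sketch using the renv package, run from inside the project directory):

```r
# In the project you want to make reproducible:
install.packages("renv")
renv::init()      # set up a project-local package library
renv::snapshot()  # record the exact package versions in renv.lock

# On a collaborator's machine, in a copy of the same project:
renv::restore()   # reinstall the versions recorded in renv.lock
```

The renv.lock file is plain text, so it can be committed to Git alongside the analysis code.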
In addition to that, I've also added p3m here, which is the short name for Posit Public Package Manager. And I do want to shout this out specifically, because it's a great tool for building reproducible R and Python scripts. What it is, is a version of the CRAN and PyPI repositories where, if you go into the setup and run through the options, you can pick a specific date in time. And if you set that up as your package repository in R or Python, the versions of the packages served from the repository will be frozen at that point in time. So you know that if you come back to a project in a year's time, if you use this repository and select the date that project was written, the packages installed will be exactly the same versions. That's really useful if you're trying to freeze a point in time for your particular project.
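Setting a date-pinned repository in R looks something like this (a sketch; the snapshot date here is just an example):

```r
# Point R at a Posit Package Manager snapshot of CRAN,
# frozen on a particular date (example date shown):
options(repos = c(
  CRAN = "https://packagemanager.posit.co/cran/2024-06-01"
))

# Installs now resolve to the package versions that were
# current on that date:
install.packages("dplyr")
```

Putting those two lines in a project's .Rprofile means every install for that project stays pinned to the same date.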
Saying that, even if you have the same versions of packages installed, there's another layer: system libraries. A lot of packages, and software like R itself, will call out to system software, and you can have different versions of system software installed on different machines. You could have different implementations of BLAS and LAPACK, for example, depending on whether you're running a Linux machine or a Windows machine. And in those cases, you can get slight differences in results because of that.
What you'll notice is as I'm going sort of deeper into the layers, you'll find that you get closer and closer to full reproducibility. We're reaching the point now with system libraries where you might find small variations in results, but generally the overall result should be pretty much the same by this point.
However, if you find that it really is important that a certain system library is the same for your results to be consistent, there are ways to do that. There are tools like virtual machines and Docker, which are popular in the cloud space for reproducing system environments. And there's also Nix, which is again built to reproduce a system environment. Nix is really interesting because you build your environment using code, and then, because of the way Nix works, it can use cryptographic hashes to confirm that you really do have the same version of a certain piece of source code as somebody else running Nix. It's a very exact way of making sure that your source code and environments match.
But there are downsides. Anyone who's built the Tidyverse from scratch will know that building tools from source, if there are many of them, can take a very long time. The more you have to build from source and the more you have to reproduce in your environment, the longer this will take, especially without things like binary caching. The other downside is that this can be quite difficult: very advanced tools like Nix can be hard to use if you haven't seen them before, and there are steep learning curves.
Floating point determinism
Even with a fully reproducible environment built from source code, there is one more level that I want to talk about. This one will affect you only very rarely. When you take a piece of source code and compile it for a computer, it's compiled into a machine-specific language. So a Windows desktop workstation, a Mac laptop, and something like a phone will each run slightly different versions of machine language. And that means that even with exactly the same source code, if you compile it for these different types of machines, you can get different answers. 99% of the time this doesn't really affect you, but it can happen.
And here's an example of such a thing. Here's an R console, and what I've done here is essentially invert a matrix. You'll see that if you look at the numbers in this example, they're reasonable numbers. They're not too big, they're not tiny, they're just reasonably sized numbers. But in fact, those numbers have been specially chosen to demonstrate this problem. In this case, there are two different operating systems running those consoles, and you'll see that on the left the result ends in 3015, while on the right it ends in 3046. You do get slightly different answers with this example.
The reason for this, by the way, is something called floating point determinism, which you can demonstrate more easily with this example. Basically, the problem comes down to the fact that when you're doing floating point computations on a computer, the order of operations matters. So when your source code is compiled into machine language, the exact order in which the computer chooses to do things makes a very small difference to the answers. Here, I'm adding 0.1 plus 0.2, and then 0.3, and I get an answer of 0.6 and a bit. And here, I'm adding 0.2 plus 0.3, and then 0.1, and I get a slightly different answer. If you want to try this yourself in R, do make sure to turn on the option that tells R to print more digits when it's showing results.
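You can reproduce that grouping effect in any R session; the exact digits below come from IEEE double precision arithmetic, which is what R uses for numeric values:

```r
# Floating point addition is not associative: the grouping
# of the operations changes the least significant digits.
options(digits = 22)    # tell R to print more digits

a <- (0.1 + 0.2) + 0.3
b <- (0.2 + 0.3) + 0.1

print(a)   # 0.600000000000000088817...
print(b)   # 0.599999999999999977795...
a == b     # FALSE
```

Neither value is exactly 0.6; both are the nearest representable doubles to the intermediate sums, and the rounding happens at different points depending on the grouping.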
So the important thing to note here is that really neither of these are perfectly exact. And no computer is perfectly exact when it's calculating floating point or non-integer calculations. And the exact answer that you get can depend on the system you're running on, even if everything else is the same.
There's some more information about that in my slides, if you do want to know more. I won't go into any more detail, other than to say that this effect exists. But I do want to point out that even though these kinds of things are rare, they do happen. Here's an example in the NumPy GitHub repo, where someone asks: I'm calculating the hyperbolic tan of some number, and I get two different answers on Linux and Windows, and in this case Linux is just that little bit more accurate. Why is this? Is there anything that can be done? And one of the developers answers: in general, NumPy doesn't attempt to provide exactly the same results on different platforms. It's essentially impossible for floating point code.
WebAssembly as a solution
So if you really want to make your data analysis reproducible anywhere, down to the exact number, this is a problem. We think: OK, is there a way we can compile our software so that it uses the same machine language anywhere we take it? The answer is yes, and that's what I'm going to talk about now: WebAssembly. WebAssembly is a portable binary code format. Binary code format here means the machine language I talked about, and portable means that it's the same machine language on any machine. WebAssembly has been designed so that if you compile something into it, it will run anywhere with a modern web browser. So phones, tablets, Chromebooks, laptops, desktop computers: they're all running the exact same machine language. And that gives you very strong guarantees about the reproducibility of the results.
WebAssembly enables high performance applications on web pages. It's not the first implementation of this idea of a portable binary code format; Java bytecode is probably the most popular older example. But where WebAssembly really shines is that it's designed to work in modern web browsers through modern JavaScript APIs. Because of that, it's designed to be secure by default. There are very strong limits on what a WebAssembly application can actually do on your machine: it's strongly containerized and sandboxed inside your web browser. And web browsers are designed to be very secure systems, because obviously you don't want a random website to be able to see your bank details in another tab, for example. So WebAssembly gains all of these security benefits essentially for free through the browser.
So I'm the lead developer of a project called webR, which is a version of the R interpreter built for this WebAssembly system. It allows you to execute R code directly inside your web browser and, importantly, without a supporting R server, which is traditionally how things like Shiny apps run. You can run webR server-side too. I know there are some people who are running webR in a Node process so that they can run R code on servers they wouldn't otherwise have access to.
But today I'm just going to be talking about R in the web browser. In addition to the interpreter, webR includes a JavaScript and TypeScript library so that you can integrate with the R session from JavaScript.
So here's an example of something doing just that. It's a WebR demo application. And if you go to the URL shown on the screen in my slides, you'll see this webpage come up. And what this is, it's a version of the R interpreter running in your web browser. So you can actually go and use R on machines that don't have R installed or any packages installed. You can just go to this website and use R. If you've seen something like RStudio before, this will be very familiar. It's sort of this four-pane data science look where you have an editor, a console, and plots.
And what I really like about these slides in particular: they're Quarto slides, they're web-based, and because things on the web plug together very easily, what you'll find is that this isn't actually an image. It's a real session I can interact with. So I can interact with an R session directly inside my web browser through these slides. In the editor, I could type some R code and do some analysis. Let's plot a histogram: we'll do rnorm(1000). I can produce images, I can save those to my local disk, and I can even upload data files into this file browser view, analyze them using an R script, and produce outputs.
What's hopefully clear here is that the entire cycle of basic data science is possible inside a web browser, reproducibly, without having to install any software on your machine. That's the real benefit of these WebAssembly systems, I think.
One of the things we found very popular with WebR is using it in teaching situations, too. So you can imagine here, if I was some kind of educator and I had some R code I was showing, one of the really nice things about having R code directly in the slides like this is that if someone asked me, oh, what if we change something? Like if I just change this code to get a slightly different output, I can rerun that code and the output updates live. This is really great for educational uses, not just because you don't have to go to another window, open RStudio, reproduce your image, and then come back. But also, this is kind of reproducible by default. You kind of get it for free in these slides because the fact that you're seeing that image at all means that your entire R script had to run and it had to produce that image. It's not a picture like a traditional slide with a graph image would be, but it's a real live execution of R code shown on the screen. So it has to be reproducible for this to actually show anything at all.
Shinylive and WebAssembly
And that takes me on to Shinylive. What this is, is a system for Shiny for R and Shiny for Python that allows you to run these reproducible Shiny applications inside your web browser, without the need for a computational server. I'll spend the rest of the talk showing you how Shinylive works, and then I have some nice examples.
So a traditional Shiny app looks like this. You will have some kind of server and then the people who visit your app are the clients and they'll open the app in their web browsers. The important thing to take from this slide is that Shiny runs on the server and in particular that means that your server has to be able to run R or Python code. Now that puts limits on the kind of servers that you can use to host your Shiny app. So if you're something like a researcher who's produced some kind of scientific output and you want to share that output with the world or with a reviewer, you're going to have to set up a Shiny server.
And there are a few ways to do this. The simplest is that when you run a Shiny app in something like RStudio, it runs a Shiny server in the background, and in that case the client machine and the server machine are the same. But obviously that's not great once you actually want to distribute the app so that other people can view it. At that point, you're going to have to think about managing some kind of Shiny server infrastructure. There's an open source Shiny Server, and there are also paid, enterprise ways to do this. So you might upload your Shiny app to shinyapps.io, or, if your institution has something like a Posit enterprise license, there are Posit tools and cloud tools that make it easy to share a Shiny app.
However, if you don't have access to those enterprise tools and you have something like three or four projects on the go and you've produced three or four Shiny apps that you want to put somewhere on the internet to share, and you don't want to have to manage these over time, then none of these solutions are really a hundred percent great for you. I mean, shinyapps.io is pretty close. You can host Shiny apps for free, but there are computational limits and there are limits on the numbers of apps. So at some point, costs are going to be involved.
One way around this is to bundle your Shiny app and its source code and transfer that bundle to another machine. You could send your data science output to someone, and they run the Shiny app on their machine to view your output. Now that's useful for things like long-term archival, or even something like peer review, where you're sending your app to one or two other people to review. But the problem is that it involves all of that reproducible workflow I talked about in the first half of the talk. All of those tools and methods to make sure that your output is reproducible need to be taken care of by the person who's reviewing your Shiny app. And that person may not be a Shiny expert; they may never have used R or Python before.
So if something goes wrong because of a lack of reproducibility, they're not necessarily going to know how to fix it. The idea of Shinylive is: wouldn't it be great if we could run these Shiny apps locally, without a server, but also without having to install any extra software or manage this stack of reproducibility tooling?
The way it works is that with a Shinylive app, the web server no longer has to run R or Python code. It becomes a static web server: it just serves files rather than running code. Then, for anyone who visits your app, Shiny itself loads locally inside their web browser and, in Shinylive's case, runs that Shiny app locally through WebAssembly.
One of the nice things about this too, is that serving static files is very efficient. Serving static files on the web has been around a long time and servers are very good at it. And that means that because that R or Python code is not running on that static web server, it's running on the client, the load associated with having many users is shared over each of those users' machines. So there's no chance of overloading something like a Shiny server if you have lots of users, because they all are running a local version of that computation to produce your data science output.
There are lots of static web servers available. The one that people are probably most familiar with is GitHub Pages. You can serve static files for free on GitHub Pages, but there are others too, both paid and enterprise software is available to serve static files. And like I say, this is a very efficient thing to do. So that's why companies can provide static web services for free or for very cheap.
When you load Shinylive (there's a URL on the screen there if you want to try this yourself), this is the Shinylive editor, and it's actually running a live Shiny app in my web browser right now. I can play with the slider and see reactive results. But one thing that's different from a traditional Shiny deployment is that I can edit the code. We've found this really useful for teaching. In particular, if someone hasn't seen Shiny before, they don't have to install R or Python, and they don't have to download the Shiny package. They can just go to this website, get an editor, change the source code, and see the results immediately. And we have done this: we've run introduction-to-Shiny sessions where users learning Shiny for the first time used Shinylive in the web browser, and they didn't feel like they needed to install the software. They were perfectly happy to learn the basics directly inside the web browser.
If you do use Shinylive, one of the things you might want to do is embed a Shiny application inside a wider document. As I said, these slides are Quarto slides, and Quarto can also produce long-form documents. So you could write a data analysis describing the trial that you ran and what the data shows, and right in the middle of that, you can add a Shiny application directly into the source code of the document. In the case of slides, it gives you a slide with your Shiny app; in the case of a website output from a Quarto document, it will drop that Shinylive app directly inside your long-form document, which is really nice for interactive demonstrations of outputs.
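With the Shinylive Quarto extension (the quarto-ext/shinylive project linked above), embedding an app is a matter of adding a special code chunk to the document. A minimal sketch, assuming the extension has been installed with `quarto add quarto-ext/shinylive`:

````markdown
---
title: "Analysis write-up"
format: html
filters:
  - shinylive
---

Some narrative text describing the analysis…

```{shinylive-r}
#| standalone: true
library(shiny)

ui <- fluidPage(
  sliderInput("n", "Observations:", 10, 1000, 100),
  plotOutput("hist")
)

server <- function(input, output) {
  output$hist <- renderPlot(hist(rnorm(input$n)))
}

shinyApp(ui, server)
```
````

Rendering the document produces a static site with the app running client-side wherever that chunk appears.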
You don't have to use Quarto, though. If you have a Shiny app that already exists, you can install the shinylive R package, and with one line of code it will take your Shiny app and convert it into a Shinylive app. What this gives you is a bundle of files that's ready to transfer to another machine or to host on a static web service like GitHub Pages. If you send someone this bundle of files, in theory all they need to do to run that Shiny app is to run a static web server. And there are various ways to do this. If you have R installed, there's a one-liner that will give you a static server. But even if you're using Shiny for R, the person at the other end doesn't need R installed if they have other ways of starting a server: there's a one-liner in Python, and also in Node. Any way you can start a static web server will work; it doesn't have to be within R.
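That export-and-serve step looks something like this (a sketch using the shinylive R package; the directory names are hypothetical):

```r
# install.packages("shinylive")

# Convert an existing Shiny app directory into a Shinylive bundle
# of static files (the app's code plus the WebAssembly runtime):
shinylive::export("myapp/", "site/")

# Serve the bundle locally with a static file server:
httpuv::runStaticServer("site/")
```

Without R, something like `python3 -m http.server --directory site/` would serve the same bundle: any static file server works.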
Packages and reproducibility in Shinylive
One of the questions I get asked is: what about packages? R has a very rich package ecosystem, and one of the things we do for webR is to compile as many of these packages as we can into WebAssembly. We host a WebAssembly binary package repository at the URL there. Not all packages can be compiled for WebAssembly; there are a lot of restrictions on what you can do, particularly security restrictions, but we are compiling as many as we can. So far, we have about 60% of CRAN packages available for webR. So there's a good chance that if you use a bunch of packages in your data analysis, you'll either be able to load them directly from this repository, or you'll be able to find some other way to produce your result in WebAssembly.
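Inside a running webR session, installing one of those pre-built WebAssembly packages looks much like normal R (a sketch; `webr::install()` is only available inside webR itself, such as on the demo site linked above):

```r
# In a webR session, fetch a WebAssembly build of a package
# from the webR binary repository:
webr::install("dplyr")

library(dplyr)
# The package then works as usual, e.g. on its bundled dataset:
starwars |> filter(species == "Human")
```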
One of the things about R packages is that they update and change often. How often have you gone away for a year, come back to a project, updated all your packages, and all of a sudden it doesn't run anymore? It does happen, despite our best efforts. So one of the changes we've made in the next version of Shinylive is that WebAssembly R packages will be frozen, downloaded, and bundled with your app automatically. In the current version, when you load a Shiny app, it will go and download the latest versions of packages from the repository. In new versions, those packages will be frozen in time, and that means your apps will continue to work indefinitely. As long as web browsers support WebAssembly, they'll be able to load those packages, because the packages will not change: they'll be fixed and frozen. This is obviously really good for reproducibility, because the less things change, the more reproducible your result is going to be.
So here's an example of exactly that. If I play this video, here is a Shiny app that loads the dplyr package. What I want to show here is that when you export this to WebAssembly, each of the packages that dplyr depends on is downloaded. The dplyr packages and the Shiny packages are downloaded at the time you produce your Shinylive app, and like I say, that means they'll stay the same; they won't change. Here's the output of that exact app, with my filter based on dplyr. If, for whatever reason, the code that produces that filter were to change, it wouldn't matter. This app would continue to work, because that version of the package is now fixed and part of the bundle for that Shinylive app.
The R Consortium Submissions Working Group is an interesting application of all of these ideas. They're running a pilot at the moment around producing Shiny apps in WebAssembly for clinical trial submissions to the FDA. They're looking at WebAssembly technology and container technology, things like Docker and Podman, as ways of producing Shiny applications that are reproducible, so that if you're working in, say, the pharma space and you have some clinical trial output, you can send that output to the FDA in the form of a Shiny app they can interact with. They can open it on their machine without having to install R, and they can review that work. There's a good post on the Pharmaverse blog about the work they've been doing in this direction, so if you're interested in this, do go and read that. They're doing some really great work.
Do bear in mind that not all R packages will work under WebAssembly. We are trying to compile as many as we can within the restrictions of the WebAssembly security sandbox, but work is ongoing here. If you see a package that you use that isn't available in the webR binary repository, do feel free to open a GitHub issue on the webR repo, and we'll take a look when we can. Also, if you have your own R packages, custom R packages can be compiled into WebAssembly, and we're currently working on building GitHub Actions to make this easier. For your own package, you'll be able to attach a GitHub Action so that when you release your package, it's built into WebAssembly. The same kind of tools are also available on R-universe, if you use R-universe. This is still experimental and we need to update some documentation and clarify some things, but as time goes on this should hopefully become much easier, and you'll be able to use your own R packages as part of a Shinylive app.
Also, bear in mind that the idea here is not that Shinylive will replace a traditional Shiny deployment. There will always be cases where traditional Shiny servers are needed. One example someone told me, which I thought was good, was that they use Shiny as a front end to submit jobs to high-performance computing clusters. In that case, something like a laptop or a phone is not going to have the same kind of computational power as those Shiny servers, so a traditional Shiny deployment is always going to be required. Another reason you might need a traditional Shiny deployment is browser security restrictions. Things like connecting to a database, for example, can't be done in Shinylive at the moment, because the web browser simply does not give you that level of access to the network. So if you have Shiny apps that connect to a database and do some work on that data, in those cases you'll want a more traditional Shiny app.
And the biggest thing to bear in mind when you're building a Shinylive app is that there are no secrets. Any R code or data in your Shinylive app is available to the client. Even if, in a normal Shiny deployment, that data would not be sent to the client, with a Shinylive deployment everything is sent to the client. So no secrets or tokens should be in your source code when you write a Shinylive app. That's really, really important.
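Since everything in a Shinylive bundle is shipped to the browser, it can be worth scanning your sources for anything secret-looking before you export. Here's a rough, hypothetical pre-export check in Python; the patterns are illustrative only, and real secret scanners are far more thorough:

```python
import re

# Illustrative patterns only: hard-coded credential assignments (R's `<-` or `=`)
# and a GitHub personal-access-token prefix.
SECRET_PATTERNS = [
    re.compile(r"(?i)(api[_-]?key|token|password|secret)\s*(=|<-)\s*['\"][^'\"]+['\"]"),
    re.compile(r"ghp_[A-Za-z0-9]{20,}"),
]

def find_secrets(source):
    """Return (line number, line) pairs that look like hard-coded credentials."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            hits.append((lineno, line.strip()))
    return hits

# A toy app.R source with a hard-coded key that should never reach a bundle.
app_r = '''
library(shiny)
api_key <- "abc123supersecret"
ui <- fluidPage()
'''
print(find_secrets(app_r))
```

A check like this only catches the obvious cases; the safe rule remains the one above: treat every byte of a Shinylive bundle as public, and keep credentials out of it entirely.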
That's it; that's everything I wanted to talk about today. There are some links on the screen there if you want to learn more about webR or Shinylive. Do check them out on GitHub, and if you see any issues at all, just let us know through the issues tab. Thanks very much.

