
Gordon Shotwell & Tracy Teal - Build Simple and Scalable Apps with Shiny | PyData NYC 2023
www.pydata.org https://gshotwell.github.io/shiny-algorithm This talk explores the intuitive algorithm behind Shiny for Python and shows how it allows you to scale your apps from prototype to product. Shiny infers a reactive computation graph from your application and uses this graph to efficiently re-render components. This eliminates the need for data caching, state management, or callback functions, which lets you build scalable applications quickly. PyData is an educational program of NumFOCUS, a 501(c)3 non-profit organization in the United States. PyData provides a forum for the international community of users and developers of data analysis tools to share ideas and learn from each other. The global PyData network promotes discussion of best practices, new approaches, and emerging technologies for data management, processing, analytics, and visualization. PyData communities approach data science using many languages, including (but not limited to) Python, Julia, and R. PyData conferences aim to be accessible and community-driven, with novice to advanced level presentations. PyData tutorials and talks bring attendees the latest project features along with cutting-edge use cases.
Transcript#
This transcript was generated automatically and may contain errors.
Thank you all so much for coming to this talk today. I'm going to be talking about Shiny, which is a Python web application framework. And where I wanted to start this talk is with how data scientists actually deliver value.
So my background is as a data scientist. I'd been working as a data scientist for about a year or so until I moved over to Posit to do software engineering. And what I always noticed is that data science has a last mile problem. You do a ton of work to gather data, to get access to that data, to find a place to analyze it, do the analysis, build the model. And then at the end of the day, you have to deliver it. And that point of delivery is where your company starts getting value from your work.
But that point of delivery is really difficult. Especially in the Python world, you often have to wait for a quote-unquote real web developer to come in and build a product that is going to deliver your analysis to stakeholders, the people who actually pay your bills. This is really expensive. As you saw in PyBio, which had a great case study, it's really expensive to hire somebody from outside your company, or to find web developers in your current company and get it on their roadmap. And it takes forever to do this type of work, by the nature of the frameworks that those people use.
And most importantly, it's often incorrect. In order to deliver a really good data science product, you need to have intuition about your data. And a generic web developer often doesn't have that. So they might present the data in such a way that it's confusing or inaccurate. And then you have to go through this whole process again.
Why data scientists can't just do it themselves
So why can't we just do it ourselves? I think the main reason why we can't do it ourselves is that whenever you're delivering one of these products, it's kind of like getting a puppy. You put it in front of people, they like it, they want more from it, and it starts to grow. So you need to have a framework which can grow with that problem. And most of the available application frameworks don't.
Just to give you an example of this: when I was at Socure, the data science team was putting together a feature management API. The idea behind this was that it was a central source where everyone in the company could get information about our machine learning features: where they came from, what they were called, how they fit together. It was developed by the data science team, and it was important; it was something that was going to be used by more or less the whole company. But it needed a front end.
So when we looked at all the tools that we had available to us, none of them really fit. They were either too hard for the data science team to use, or they were too limited for that product to grow where we thought it would grow. So we needed something which could grow. Streamlit looked pretty fragile in terms of re-executing everything all the time. When we looked at Dash, the restriction to stateless components, so you couldn't really share data between graphs and plots, seemed pretty awkward. And none of us knew JavaScript, so we were kind of stuck. We ended up needing to wait, I think, nine months before we got the hire approved, hired the person, onboarded that person, and built that application.
Shiny's design philosophy
So Shiny is designed for that problem. It's designed to grow. And this makes it a little bit difficult to talk about in a demo, because I can't show you a big application that's going to grow in 48 minutes. And that's really where this framework shines. It's very simple to get started, but it has all the tools you need to build full products. It's all in Python; there's also the historical R version of Shiny, and Shiny was rewritten in Python last year. It's easy enough to build a prototype, but it has the tools that you need to build a product.
But showing that growth is complicated. So what I'm going to do instead is I'm going to talk about Shiny's algorithm. And try to develop a little bit of an intuition about how Shiny actually does this work. What's special about it, and how it works.
A basic example app
So let's take a look at an example. So this is a three-page app. I'm just going to tell the story for each page. So it just shows a little bit of how these grow. So this is a kind of example of a basic training dashboard. You know, maybe I fit a model. I want to just expose it to stakeholders so that they can check their accounts that are going to be hitting this model. Make sure the model performs as well for that particular account as it does in general. So I have two little plots. It shows a model score and a little ROC curve. And the user can change what plot they want.
So what's interesting about Shiny is that when I update this metric, the only plot that actually fires or changes is this one. Nothing happens over here. But if I change the account, since both of them depend on that account slider, both change. So it's minimally re-rendering things. It's only re-rendering the parts of my application that need to change in response to a particular user action. But how does it do that?
So this is the full code that generates those plots. And when you look at this code, if you have some background in web development, it's a little weird. There are no callbacks here. There's no state management. So how is it that Shiny is figuring out how to render things in this really efficient way? Another way of putting this is: we told Shiny what to do. We said, here's how you make these plots. But we never told it when to do it. But somehow Shiny knows when to do it. And it more or less always gets that right.
How other frameworks handle updates
How do other frameworks accomplish this? The bluntest approach, which is great for simple applications but pretty blunt for larger ones, is to just re-render everything. Just start from the beginning and re-render everything. So I don't need to figure out how things should update; I just re-render everything. And you can get more into the weeds of state and caching later, but in the beginning, that's how it works.
And most other frameworks use event handling. What event handling is, is you, the programmer, define when things should fire. When your renderers should fire. So you say, when this button changes, on click, on change, do this action. And it has some problems. I think the first one is you have to do it. So as a data scientist, that's a little much, I think. Doing it and doing it correctly is hard. It's pretty easy to get it wrong. If you fire callbacks in the wrong order, you update state in one part of your application and not another. You can get these kind of pernicious bugs where you show the user the wrong value. And importantly, it's really hard to tell when you get it wrong. Sometimes you need to kind of get into a particular state in testing to trigger that sort of value. So usually when event handling goes wrong, you don't get a failure. You get an incorrect value somewhere deep in your application.
My favorite example of this is Slack. So about every three weeks, I get an undismissible Slack notification that says: You have activity. Go deal with your activity. And I click, and I don't have any activity. And I just know, deep in my heart, somewhere in the Slack event handling artifice, somebody has forgotten to update a state variable, or forgotten to retrieve a state variable. And I'm the one that has to suffer for it.
Shiny's reactive graph algorithm
So what's a better way? A better way is to think about DAGs. Directed acyclic graphs are great because you know, for any particular upstream change, which nodes need to change downstream. And that's wonderful. This is something you use in Makefiles, Airflow, all over the place. It's a very good way of figuring out that problem of what needs to re-execute. But the problem is you still have to write this. If you write this wrong, your application is going to be wrong. So is there a way we can avoid writing this at all?
So here's Shiny's strategy. Shiny's strategy is to infer all the relationships between the components of your application, use that inference to build a computation graph, and use that computation graph to minimally re-execute your application.
So telling you that I'm doing this kind of magic inference, I would just suggest that that should create a little bit of suspicion, right? Because oftentimes you get these inferences and they're like 85%, right? And that's much worse than just not doing it at all, right? I don't want to have an inference if it's not rock-solid, right? So this has to work all the time. 100% of the time. Because if it misses one, then you're not going to be able to trust it. And you're going to have to do the event handling anyway. And if you can't trust that inference, it's not useful to you.
So one question then is, okay, how would we do this? Let's just start from scratch. How would you do this? And one answer you might think of is, well, we could analyze the source code. So I have these inputs here; those inputs are likely user inputs. So maybe I could read all the source code, parse it, find all those inputs, and use those to draw the graph. And you could think, okay, that might work. It would be hard, but it might work. But it won't work. And the reason it won't work is because Shiny allows dynamic user interfaces.
So this is a to-do list. If I, say, add buy groceries, I can add a bunch of these tasks. I can complete these tasks and clear them, and things go away. So at any given point in this application, the application's state depends on all the things that the user has done up until that point. And the point is that this particular application state is not in the source code. It's very difficult, therefore, to use static code analysis to get at that application state. So it's not going to work.
Instead, what Shiny does is runtime tracing. So at runtime, while your application is running, it just watches what all of the different components ask for. And it writes that down. It says, okay, you asked for this thing last time. I'm going to know that when that changes, I'm going to ask you to re-render. Because I remember that you asked it. I don't really care about what's happening in the code. I just know that you made a request for this thing. And that thing is a reactive thing. So I'm going to, like, just keep track of that. But what happens inside is not really my business.
So let me just walk you through a really basic example of how this works. The user asks for this output. This is an output on the screen called text; that's its ID. Shiny then goes and finds the rendering function that matches that ID and fires it. And when it fires the rendering function, you see there's this input call. That's a call out to an input from the UI. So Shiny goes and gets the value of that input. It records all of those steps, and in a simplified way, you get this little graph: one input to one output. Most of the rest of this talk is just going to be about these beautiful little mermaid graphs.
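As a toy illustration of that trace, here is a plain-Python sketch I'm inventing for this writeup (not Shiny's actual code): inputs record whichever output is currently rendering, and re-render it when they change. No source-code analysis is involved anywhere.

```python
# Toy runtime tracing: while an output's render function runs, every
# input it reads records that output as a dependent.

_current_output = None  # the output whose render function is running


class Input:
    def __init__(self, value):
        self._value = value
        self._dependents = set()        # outputs that read this input

    def __call__(self):
        if _current_output is not None:
            self._dependents.add(_current_output)   # record the read
        return self._value

    def set(self, value):
        self._value = value
        for out in list(self._dependents):          # invalidate readers
            self._dependents.discard(out)           # forget the old edge
            out.render()                            # re-render rebuilds it


class Output:
    def __init__(self, func):
        self.func = func
        self.render()                   # first render traces dependencies

    def render(self):
        global _current_output
        _current_output = self
        try:
            self.value = self.func()    # inputs read here record this output
        finally:
            _current_output = None


# One input feeding one output, as in the walk-through above.
name = Input("world")
text = Output(lambda: f"Hello, {name()}!")
```

Calling `name.set("PyData")` re-renders `text` automatically, because the first render recorded the edge from `name` to `text`.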
So let's just refresh our memory about what this looks like. So I have two inputs and two outputs, like the count input, the metric input, and then the model metrics plot and the distribution here. And when Shiny starts out, this is all it knows. It just knows that there are four things. Two of them are inputs and two of them are outputs. It has no idea about any of the relationships between them. And so it's like, okay, I don't have any idea what's going to happen. So let's just try to get one of these outputs. I'm going to pick the metric plot.
So it calculates the metric plot, and the metric plot goes out and requests the metric selector and the account selector. Those are the two things it needs to generate its plot. Shiny does that and draws these two lines. And the distribution plot fires, and that only needs the account selector; the model metric doesn't matter. And there we have our graph. No static code analysis required.
So when the account changes, Shiny does something called invalidation. What that means is it tells the downstream dependencies: you're not good anymore, you need to recalculate. So it invalidates those two outputs. And when things are invalidated, it just forgets everything that it knows about those relationships. Because, again, this is all happening at runtime, the relationships that held before might not be the relationships now when I recalculate. So it recalculates, and in this case it gets the same graph: it gets these two inputs, tries to calculate the distribution, gets the account selector, and we're back.
If the metric changes, though, something different happens, right? Because the metric only has one child node. So that's the only one it invalidates, but the distribution plot doesn't need to invalidate, right? It didn't depend on it, right? So it's still good, it doesn't need to recalculate. We still forget the dependencies from the invalidated nodes, and then recalculate it. So go get the two values, and we're done.
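The selective invalidation described above can be sketched the same way. In this invented toy model (again, not Shiny internals), a `runs` counter makes it visible that changing `metric` re-renders only the plot that read it, while changing `account` re-renders both.

```python
# Toy model of selective invalidation with run counters.

_current = None  # the output currently rendering


class Input:
    def __init__(self, value):
        self._value = value
        self._deps = set()              # outputs that read this input

    def __call__(self):
        if _current is not None:
            self._deps.add(_current)    # runtime-traced edge
        return self._value

    def set(self, value):
        self._value = value
        invalid, self._deps = self._deps, set()  # forget, then re-render
        for out in invalid:
            out.render()


class Output:
    def __init__(self, func):
        self.func, self.runs = func, 0
        self.render()

    def render(self):
        global _current
        _current, self.runs = self, self.runs + 1
        try:
            self.value = self.func()
        finally:
            _current = None


account = Input(1)
metric = Input("roc")
metric_plot = Output(lambda: (metric(), account()))  # depends on both inputs
dist_plot = Output(lambda: account())                # depends on account only
```

After `metric.set("f1")`, only `metric_plot.runs` has increased; after `account.set(2)`, both counters increase, mirroring the behavior of the dashboard demo.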
So this is the kind of secret of how Shiny is able to let you build very performant applications with very low workload as a developer. I don't need to, as a developer, keep track of this graph, really, in any way. I need to tell Shiny, I need to declare to Shiny, this is how these things should be drawn. But then Shiny is able to infer from that, as that application runs, how those things should be re-rendered. So I'm stepping one level up in abstraction. I'm not doing the cooking, I'm setting the menu. And this frees you from a lot of work as a web developer.
Dynamic reactive graphs
So in the graph we showed, everything came to the same conclusion each time, so forgetting everything might seem like a lot of wasted work. But Shiny can actually handle graphs that do change, and this is pretty common. So here I have a really simple example where I have two sliders, and a button which tells Shiny which slider it should listen to. When I change slider two, nothing changes, but if I change slider one, the text changes. And if I switch the button to slider two, now slider one is ignored, and slider two responds.
And here's the code that's doing that rendering. Basically it's just a simple conditional: if the button says slider one, I return slider one's value, and otherwise I return slider two's. So again, Shiny knows nothing about this application; it just knows that there are three inputs and one output. It tries to calculate the text output. The first thing it needs for that conditional is which slider it should look at, so it goes and gets the button's value. And then it knows to go get slider one's value, and we're finished. It never got to input slider two.
So then when slider two changes, nothing happens, right? And I just want to pause for a second and explain why that's correct. So when we ran that first function, it said it came to its conclusion without ever asking for the value of slider two. What that means is that for all possible values of slider two, this output is the same. And if that's true, that means that if slider two changes, we can just ignore it, right? For all possible values of that, the output ended up being the same.
So if slider one changes, it's the same process: the text output invalidates, it goes and gets the button, the button still says slider one, so it goes to slider one. But if the button changes, when it invalidates the text output and goes and gets the new value of the button, which says go look at slider two, this time it's going to get slider two, and we have a different graph. So that's why we have this process of forgetting things: because the reactive graph is going to change over the course of the user's session. And again, for those dynamic UI elements, where you're adding and removing things, where things are changing based on other values, this is really important, because you might actually add or remove nodes from this graph based on what has happened previously in that session.
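That "forget everything on invalidation" rule is exactly what makes dynamic graphs work. Here is a minimal sketch of the two-slider example (hypothetical classes invented for illustration, not Shiny's implementation): on invalidation the output drops every edge it had, and the next render rebuilds them from whatever it actually reads this time.

```python
# Toy dynamic reactive graph: the output reads slider2 only when the
# button says so, so changes to an unread slider are correctly ignored.

_current = None


class Input:
    def __init__(self, value):
        self._value = value
        self._deps = set()              # outputs that read me last render

    def __call__(self):
        if _current is not None:
            self._deps.add(_current)
            _current._reads.add(self)   # the reader remembers me too
        return self._value

    def set(self, value):
        self._value = value
        for out in list(self._deps):
            out.invalidate()


class Output:
    def __init__(self, func):
        self.func, self.runs, self._reads = func, 0, set()
        self.invalidate()

    def invalidate(self):
        for inp in self._reads:         # forget all previous dependencies
            inp._deps.discard(self)
        self._reads = set()
        global _current
        _current, self.runs = self, self.runs + 1
        try:
            self.value = self.func()    # reads during this call re-add edges
        finally:
            _current = None


which = Input("one")
slider1 = Input(10)
slider2 = Input(99)

# A conditional renderer: which slider it depends on changes at runtime.
text = Output(lambda: slider1() if which() == "one" else slider2())
```

While the button says "one", setting `slider2` does nothing; after `which.set("two")`, the graph flips and it's `slider1` that gets ignored.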
Scaling with reactive calculations
So what's great about this pattern is that it scales. Every single Shiny app uses this exact pattern. There's no separate way of developing complex Shiny apps versus simple Shiny apps; they all fundamentally use this type of transparent reactivity. And you can imagine how this scales. DAGs are very efficient at rendering, so if I have lots and lots of components, and Shiny can use this same process to put them into a DAG, it's going to be able to render those very complicated applications. This works for dynamic UI. And lastly, it's lazy. Because it's starting out at the bottom of that graph, where the user is asking for something, if something is hidden from the user, it never gets calculated until the user actually goes and asks for it.
So I'll give you an example of this. Here I have three tabs with stuff happening on them, but the training dashboard is the only one that has actually been calculated, because the user hasn't asked for anything on these background tabs. So while Shiny has attached them to the DOM, they're not calculated.
So far we've been working with shallow graphs: all the inputs are being directly consumed by an output. And that's great; in these simple cases, it works really well. But it's pretty limited. In particular, each of those rendering functions is doing all of the work it needs on its own, and they're not really able to share common calculations. So there might be repetition between them, and that limits efficiency.
So we have a concept called a reactive calculation. And what a reactive calculation does is it's basically a way of creating a function in this reactive paradigm. So it creates a calculation whose results can be used by other functions, either other reactive calculations or other renderers. It basically adds depth to a graph.
So let's take a look at this model monitoring page. This is an example of the app growing. Everyone's happy with my model, I've deployed it, and now I need to keep track of it. And this is in production; there's lots of data coming in. So I need some way of taking a sample and investigating it. The idea here is that I'm able to specify a date range and a sample size, and this is going to query a database to bring in my sample. But I can still look through these accounts in memory. So this will pull in the data for all the accounts I have, and that account filtering will be really fast, even if pulling the sample is a little slow.
So, to spell out what I want here:
1. I want to query that database to get my sample.
2. Later on, I want to filter that sample in memory; I don't want to have to go back to the database.
3. I want to send that same data to the two different functions without doing the filter again and without doing the sampling again.
4. I want to cache the results of all of those queries.
5. I want to invalidate the cache whenever its upstream dependencies change.
6. And I don't want to do any thinking or work. I don't want to have to worry about this at all. I'm a data scientist; I'm not good at that stuff. So I just want to have it handled for me.
And so this is, I think, one of the core things about Shiny, is that there are other Python application frameworks that can do some of these things, but none of them can do all of them. In particular, I think none of them can do 4, 5, and 6 together. So how does Shiny do 4, 5, and 6 together?
So we have the reactive calc decorator. What that does is it caches its value: the first time it runs, it caches its value, so you can call it repeatedly. And it sticks that reactive calculation in the same reactive graph. It discards the cache whenever it's invalidated by its upstream dependencies, and it tells its descendants that it's been invalidated, just like the inputs do.
So this is how you might code that example. You see we have two reactives here. One is the sample data, which handles that query step, querying the database. And the second one is this filter data, which does the account filtering. And the way you call these is you basically just call them as functions. So filter data calls sample data, and then the plot score renderer calls filter data.
So I have these little hexagons to indicate reactive calculations. But again, when Shiny starts, it doesn't know how these things are related. It just knows that there are three inputs, two reactive calculations, and two plots. And it starts from the bottom and just tries to calculate the model score. Model score goes and says, I need that filter data. Filter data requires the account and the sample reactive calculation. That sample reactive calculation then runs, and it says, oh, I need the dates and the sample size in order to get a sample. It does that, passes the data down to model score, and the plot is produced. But when the API response renders, it just needs the filter data. It needs the exact same thing as model score. So nothing is calculated: no filtering is done, no sampling is done. It just goes and retrieves the cache.
So that's how the caching works. As for invalidation: if the account changes, it does the same thing. It tells its dependents, you're invalid. So filter is invalid; it needs to recalculate. Filter goes and tells its children, hey, you're invalid too. And they forget everything that they knew about their previous dependencies. And the same process goes: model score runs, gets the filtered data. This time it doesn't take a fresh sample, because the upstream of sample hasn't changed. And that's important, because if you're doing these sampling operations, you want to take your sample once and interrogate that same sample in many places.
If you're not doing this type of caching with sampling, you might end up in a situation where, whenever I change the account, I'm taking a fresh sample and getting an inaccurate intuition, because one sample shows one value and another sample shows another, and I really want to investigate that same sample. The API response does the same thing as always. When something further upstream changes, the whole graph gets invalidated. So when the sample size changes, we do need to go get a new sample from the database: sample invalidates, filter invalidates, plots invalidate, and we're back to our initial state, not knowing anything about this application, and we go out and get our same inputs and outputs. And the API response, again, just gets its value from the cache.
So this lets you build up pretty arbitrarily complex applications because you can stack these reactive calculations as deep as you need to. And they're always going to do effectively the minimum amount of work that they need to in order to generate your application. So whenever you're developing Shiny apps, it's pretty important to stick to a pretty strict do-not-repeat-yourself rule, both because it keeps your code in one place, but also because by avoiding repetition with reactive calculations, you actually make your whole application much more efficient.
Q&A: caching and invalidation
So what criteria are you using to determine if data has been cached or not? Are you looking at the values that were retrieved by the input function? No, we're not. We're just using the graph. And this is also important, because we're not caching to disk or caching the whole big data frame in some separate cache; we're just holding the object in memory. The way we're determining whether it needs to change or not is basically: did my parent nodes change?
Okay, so if I ask for account A, then B, then A, the whole thing triggers again? Yes. Because again, Shiny isn't checking values. There are ways of doing that type of caching in Shiny too, where you say, I want to check the value. But the problem is that that check isn't free. If I'm doing that hashing, I have to do a hash and a hash comparison, which, if it's a giant dataset, might actually be more expensive than doing the filter twice. So you can do that, but it's not automatic. The automatic thing is that we're just going to hold the result of that filter calculation and determine, based on the graph, whether it's valid. Great question.
Event-driven patterns in Shiny
Okay, so reactivity is a great default: this idea of letting Shiny infer how things should work, and manage state, caching, and invalidation for you. But it's not the only thing; not everything fits this pattern perfectly. Sometimes you actually do want to specify: I want this to happen when that happens. You do want to do a bit of event handling. Examples are batching inputs or triggering a side effect. Event-driven programming is not all bad; the problem is that when you use it for everything, you run into those mistakes.
So an example of this is, again, my application growing. Here maybe I've determined that my model isn't doing very well, so I need to do some data updating. And I have these little buttons where I'm just saying, I'm reviewing something, and that looks like electronics. This is obviously not the thing that people dream of in data science school, but it turns out it's super important. I won't bore you with the details, but every time I click this, it's basically writing an annotation to a CSV file. And that doesn't fit very well with the idea of a reactive graph. I don't really want a value; I want to trigger some sort of action outside of my application: update a database, deploy a model, something like that.
So we don't want this to react automatically; we want to specify what happens when that button is clicked. So we have a decorator called reactive effect. There are two that often go together: reactive effect and reactive event. Reactive effect indicates that this is a side effect, so I shouldn't return a value from that function; I just want the side effect to occur. And reactive event says what the trigger for that event is. This can be inputs; it can also be reactive calculations. And the one key point here is that I'm referring to this input as an object. This is a callable object, so when you want the value, you call it. But if you just want to say, pay attention to this thing, and whenever this thing changes, fire this function, you pass the raw object.
And then we have the body of the function that defines what that side effect is. So this is how you would do something like a side effect: an API call, a POST to an external API, something like that.
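The effect/event split can be sketched in the same toy style as before (hypothetical plain-Python classes, not Shiny's decorators): the side effect is subscribed only to the button, so reading other inputs inside it does not make them triggers.

```python
# Toy event-gated side effect: only the declared trigger fires the effect;
# other values are read inside without creating a dependency.

class Input:
    def __init__(self, value):
        self._value = value
        self._listeners = []            # effects triggered by this input

    def __call__(self):
        return self._value

    def set(self, value):
        self._value = value
        for fn in self._listeners:
            fn()


log = []                                # stand-in for a CSV audit file
name = Input("draft-1")
save = Input(0)                         # a button: its value is a click count


def save_effect():
    # Side effect: append an annotation (stand-in for writing a CSV row).
    # It reads `name` but is not triggered by it.
    log.append(f"saved {name()}")


# Like "reactive event on an effect": only the button is a trigger.
save._listeners.append(save_effect)

name.set("draft-2")                     # no save happens
save.set(1)                             # click: effect fires once
```

After these two calls the log contains exactly one entry, `saved draft-2`: changing the name alone wrote nothing, and the click saved the current name.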
Reactive event can also be paired with renderers or reactive calcs. If, for whatever reason, Shiny's inference isn't doing what you want, you can override it. An example is: you have five inputs in a form, and you want them all to take effect together only when a button is clicked. That would be a case where you would want a reactive event on top of your reactive calc. It says: ignore all the other dependencies and just use this one. This is the only dependency you should use.
And the way I think about this is that it's kind of like adding event-driven chocolate chips to the broader reactive cookie. You don't want to do this all the time. Most of the work is transparent reactivity, but you sometimes do want these little event-driven patterns in there. And one thing I have found, especially with people who are transitioning from an event-driven front-end framework, especially JavaScript, is that they're kind of uncomfortable with how little work they have to do in Shiny, so they start using reactive events everywhere. And I would say: if you notice yourself doing that, just relax. You don't actually need to do that much work. It is normal to feel that way.
So yes, you want to rely on reactivity by default, but if you need to override it for whatever reason, that's the way to go. And the last thing, on reactive effects versus reactive calculations: the intuition is that reactive effects are for side effects, and reactive calculations are for values. Side effects are things that happen outside of the world of your application, like updating a database, writing to a CSV file, or deploying a model. And reactive calculations are for values, as is right there in the name: fetching data from a database or filtering a data frame. You usually want the value of that data, not the act of fetching it.
Summary and closing
So in summary, Shiny creates very performant apps with very little work, and this is its killer feature. As your application grows, that's more and more valuable. It's valuable at the beginning, but especially when you're managing larger, complicated applications, it's a lifesaver. This algorithm is very elegant, I think, but it's not magic. There's nothing here that's difficult to understand, and nothing that you can't develop an intuition about.
And then overall, I would say it's important that whatever framework you pick, your users are guiding how your application looks and feels, not the limitations of the framework. You want a framework that gives you the flexibility you need to really accomplish the things that your users want you to do. And choosing a framework that can grow with you is super important. So thank you.
I have a couple of minutes to take some questions. I think I'm about six minutes early, but I'll also be around the booth today and tomorrow, so if you want to stop by the booth and chat with me or the rest of the team, I'd be happy to do that.
Just to make sure I follow you: so that graph, does that basically get rebuilt every time one of the inputs changes? Yes. Anytime something happens with the graph, parts of it get rebuilt: the parts that are invalidated. The parts that are not invalidated, you know those values are all good, so you don't need to recompute them.
So the question is: Shiny for R and Shiny for Python, do they have identical feature sets? There are two parts to that. One is the Shiny core library, and the second is the broader Shiny community. I would say right now the core libraries for Shiny for R and Shiny for Python are pretty close. The main missing features are things like bookmarkable state, but in terms of the main features, they're very similar. As the newer package, the Python community is much smaller, so we don't quite have the same rich feature extensions. We have a plan of continuing to bring the core libraries closer together, but most of the stuff that's missing now are things that new users don't use all that much, so our focus is more on improving the early experience of Shiny. There will be some divergence, I think, between the two libraries, because they are just different languages. But yeah, they're pretty close. And Shiny for Python also has most of the bslib components as well.
How customizable is the front end? Like if you wanted to do styling? Yeah, fully. All of our inputs basically have a CSS class; you can change those. Most of them take CSS files directly, and you can include arbitrary CSS or JavaScript on top of it. We also have some new documentation coming out on extending Shiny with new JavaScript components as part of your application, so that's also possible: you can basically write your own component and have that component react to all the other components as well.
Can you talk a little bit about how you define the layout? That wasn't the main point of this talk, but basically the layout is defined a little like Dash: we have UI functions that you nest inside of each other. So if you want a sidebar, there's a function like ui.page_sidebar; the ui module has all of our user interface components, and you basically just nest those together. That can be created however you like, so you can write it out, but a lot of the time people want to write functions that render UI things, like chunks of UI code or
