Posit Meetup | Ryan Derickson, VA | Collecting & Acting on Employee Engagement with Shiny
Led by Ryan Derickson, Veterans Affairs | Shiny Executive Dashboards in VA

1:46 - Start of presentation
3:34 - The data used for the application
5:18 - Executives need to know how to help and what to watch out for (AES Exec Dashboard briefs ~2,000 people)
9:08 - Shiny app (Exec Dashboard) demo
12:20 - Storytelling within the application
19:22 - Q&A: Is the URL available? A: This is an internal app behind VA's firewall, so external users aren't able to access it.
19:33 - Q&A: My employer would like their data source refreshed daily; is Shiny the recommended method? A: We have large single efforts (an annual survey) rather than continuously updated data, but I've seen really effective use cases of that. It depends a lot on how the data are stored and whether you can access them through an API or database.
20:24 - Q&A: Is there any concern from IT regarding open-source tools? A: There is inertia to overcome. You need executive sponsorship to be able to say why the ultimate outcome matters. IT cares about security and maintainability, not the details of the app. A leader who has sway with IT leadership can help them see the value. In VA, there is a really detailed process to get on our federal Azure space, which is slow, but you do get there eventually.
21:51 - Q&A: What has been your experience with R and VA?
23:22 - Q&A: Are there limitations on the packages used?
23:44 - Employee engagement & burnout communicated through the app
29:01 - Q&A: How many people are using the app at one time?
30:15 - Q&A: Is the UI done with bslib, or with JavaScript and CSS?
30:49 - Q&A: Where can we learn more about the click and drill-down interface function?
41:20 - Q&A: How long did it take to get the app up and running?
43:22 - Q&A: What was the process for designing the UI?
45:38 - Q&A: What is the return on investment for the VA?
47:00 - Q&A: What would you recommend for learning Shiny?
50:27 - Q&A: For other public sector employees making the case for open source, is there other advice you have?

Abstract: Data analysis and visualization are becoming increasingly accessible, yet translating data into action remains difficult. VA has adopted an integrated approach to collecting and acting on employee engagement data, with a robust Shiny app as the centerpiece of the executive-focused portion of that approach. This talk will cover that approach, demonstrate the app, and discuss organizational and service improvements that have followed.

Speaker Bio: Ryan Derickson leads VA's quantitative approach to improving employee engagement, primarily through feedback and related outcome data.
Transcript
This transcript was generated automatically and may contain errors.
Welcome to the RStudio Enterprise Community Meetup. I think it was a little delayed here, so let me start over again. Hi friends, thank you so much for joining us today. Welcome to the RStudio Enterprise Community Meetup. I'm Rachel calling in from a rainy Boston today.
If you've just joined now, feel free to say hi through the chat window and maybe where you're calling in from. If this is your first time joining one of these sessions, we do have them every Tuesday at noon Eastern time. And so I'll share a calendar of all the upcoming events in the chat in just a second here, where you can add whole calendar or individual events to your own calendar too.
This is a friendly and open meetup environment for teams to share different use cases with each other and teach lessons learned. Together, we're all dedicated to making this an inclusive and open environment for everyone, no matter your experience, your industry or background.
During the event, you can also ask questions on the platform that you are watching from. You're also able to ask anonymous questions as well through rstd.io slash meetup questions, which I'll put on the screen here in just a second too.
But with all that, welcome. Thank you so much for joining us. I would love to pull Ryan up here on stage and let me handle my little side panel here. Hi, Ryan. Ryan, thanks so much for joining us. Ryan leads Veterans Affairs quantitative approach to improving employee engagement, primarily through feedback and related outcome data. Thank you, Ryan, for sharing your experience.
Introduction and VA context
Absolutely. Thank you, Rachel. And thank you all for the time. So I'll talk about how we use Shiny and RStudio Connect at the Department of Veterans Affairs for our executive dashboard project.
So as you probably know (and of course I'm here to talk about our work specifically, not VA more broadly), VA's mission is to provide healthcare, benefits, and burial services for U.S. veterans. And the mission of my office, NCOD, is to improve employee engagement in VA so that those services can be provided most effectively.
We do that in the way that's most relevant here through enterprise survey and data science services. So some context for that work is that VA is quite large, as you may know, 400,000 plus employees from Manila to San Juan to Maine, every kind of occupation you can think of, hundreds of hospitals and clinics and benefits offices and cemeteries.
So in contrast to that, my office is quite small. Our headcount is about 50. So on one hand, that's really exciting because we can have a very outsized impact and work in a very highly leveraged way. But it also means we have to punch way above our weight class to have the kind of impact that we need to have. So some of the tools and processes I can talk about today are what help us do that.
The data: All-Employee Survey
So just a little bit of info about the data that go into the application I'll show you, and then we'll, of course, jump into the actual app. The primary data source from surveys that we use is something called the All-Employee Survey. And every year, VA administers the AES to every employee, very comprehensive. It covers everything from engagement to burnout to psychological safety, patient safety, specific modules for specific occupations. So there's between 70 and 120 questions depending on what you do.
And it's our primary data source for organizational health. And it's also one of the most important, maybe in some ways the most important, feedback mechanism that VA has for hearing about how we can provide better services to veterans. It gets a great response rate. So we have representative high-quality data, about 70% response rate this year. People feel like they can be honest because it's anonymous. And the data go lots of places into executives' performance plans and get reported in important ways.
But from NCOD's perspective, the most important purpose of the data is for workgroup-level improvement. So workgroup is just a group of five or 10 or 15 people who work together on a day-to-day basis.
Executives need to know how to help
So for that improvement to happen, first of all, of course, workgroups need to have their data and be able to discuss it and see what it's telling them and strategize about how to improve their day-to-day work life. But for purposes today at the executive level, of course, executives need to know what's going on. Basically, they need to know where there are hotspots, things to watch out for, any trends that may be concerning or noteworthy.
And so the app and the briefing process that we use helps get executives that level of data they need in a quick way and also in a way that's very durable and that they can use as a reference. So we brief every executive team in VA, which is about 300 teams, 2,000, probably closer to 2,500 individuals. So for example, at a hospital, the executive team would be the director, a few service chiefs, and maybe a few other people too. So the top four, five, six, seven people at every site.
And we brief them directly from the dashboard that I'll show you in just a second. So we used to use giant PowerPoints and big Excel files and those were great as references, but as you can imagine, not terribly compelling when you have the kind of data we have at the scale that we have it. So the dashboard's been a huge improvement for us.
Just a few final points about the app itself. Flat CSV files are our data source, so no fancy APIs or databases. And the displays are also pretty simple as you'll see. We mostly use bar charts, line graphs, and tables. And I note this here because if you're like me, you see some of the really sophisticated and cool apps that other people make that do bring in lots of other data sources and have lots of cool plugins and so on. And it's easy, at least for me, to think that you have to do that kind of stuff for it to be useful. So for what it's worth, we've gotten a lot of mileage out of pretty simple backends.
The other thing you'll see that we put a lot of work into is for the interface to look relatively coherent, hopefully, and for it to be predictable. Because with the audience we have, it's just not tenable to expect 2,500 executives to learn some idiosyncratic interface or deal with fiddly UI issues. So it needs to be very predictable, very polished, and also pretty responsive because these briefings last for 60 minutes and that's a lot of time, but also it's not. So even if we had a few minutes of cumulative lag waiting for things to update or to calculate, that's a decent percentage of our total time. So those are some considerations that we've had to deal with while building this application.
Shiny app demo
I'll try to pause for questions periodically, but feel free to interrupt with any questions or points of elaboration you'd like me to make. So with that, I will change screens, hopefully.
I was just gonna say, I'll pop in here when there are some questions too, but just wanted to remind everyone that you can ask either on LinkedIn, if you're watching there, or on YouTube Live. Also, there's the Slido link for anonymous questions too.
And so one of the considerations that if you do this at scale, you'll run into is bandwidth. So we have a relatively large backend server, but when you have hundreds of people coming at the same time to it, it's gonna sometimes be slow. So hopefully we get a pretty good response now because actually the briefings are going on right now. So I came from one this morning, I've got one this afternoon. So there's a lot of traffic on this platform right now.
So the first thing you see when you go to the link that we give to all executives, and I'll just make it a full screen, VA's numbers come up by default. So one of the good things about doing it in this way is that it's really easy to be very transparent with all the data that we have because anybody in VA can pick data for any group in VA. So there's no kind of compartmentalizing the information. Everything is fully available for anyone to see.
The other thing that that makes it very easy to do is to go into deep dives on specific groups that may be of interest during a particular briefing. So if we're briefing one of the undersecretaries, for example, who are over lots of different facilities and they have a question about a specific facility, we can go right to that facility and this input and everything else updates with that facility's data. So it's a great way to be responsive in real time versus having to take questions and then come back with additional reports.
So I'll go to our demo page, which has fictitious data. So it's representative of what a facility would see, but the numbers are not real. So as I go through this, I'll show you what we have and I'll try to focus more on the app itself than the data points, which is kind of backwards from how the briefings would go, of course, but I'm happy to talk about any of it. So feel free to break in with questions on the app itself or the data.
So the first thing you may notice is that it doesn't look like the standard Shiny app that you get by default. It's a Bootstrap page running Bootstrap 5 with the bslib package. So that gives us a good bit of flexibility because then we can use Bootstrap's native row column layout to get exactly what we want.
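As a sketch of the setup he's describing, this is roughly what a Shiny page on Bootstrap 5 looks like with bslib; the inputs and plot here are illustrative placeholders, not the VA app's actual code:

```r
library(shiny)
library(bslib)

# bs_theme(version = 5) swaps Shiny's bundled Bootstrap for Bootstrap 5,
# so fluidRow()/column() emit Bootstrap's native grid classes.
ui <- fluidPage(
  theme = bs_theme(version = 5),
  fluidRow(
    column(4, selectInput("facility", "Facility", c("VA overall", "Boston"))),
    column(8, plotOutput("scores"))
  )
)

server <- function(input, output, session) {
  output$scores <- renderPlot(
    barplot(c(Engaged = 60, Mixed = 25, Disengaged = 15), main = input$facility)
  )
}

# shinyApp(ui, server)  # launch in an interactive session
```

Because the theme is just an argument to `fluidPage()`, the rest of the UI code stays standard Shiny.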
Storytelling within the application
You'll also notice the picture, for example, in the top right. For every facility, we find a story, published by that facility itself, about some service that's been provided or some efforts that have been undertaken that reflect what the data we have try to promote. So in the case here, this is the picture from the Chicago VA. And a lot of VAs do this, and it's really cool. They partner with local community service organizations, veterans organizations, and so on. And they'll do things like drive-through baby showers. So if you're a veteran and your family just had a baby and you need some help, you can show up and drive through basically a big tunnel where they load you up with everything that you need in your car.
And if you haven't seen the kind of impact that that kind of service and work can have, it's really, really effective. So we try to tie in what's actually going on on the ground to the data we have just to build a connection. There's a story that's linked behind the picture and I'll not click on it now just to save the back and forth, but the actual stories are linked here as well.
There's also an embedded YouTube video in an iframe, and it just sets the tone for why we do the survey in the first place. It's not really about tracking a bunch of numbers. It's about changing how people experience their day-to-day work at VA.
And then throughout, you'll also see a red, yellow, green color heuristic by default to indicate when a score is good or okay, or not so great. Of course, some people can't see that, so it's easy to change to an alternate palette or to pick a custom palette that anybody can see.
The trick with something like this that I learned the hard way is you wanna bring in the color scheme as late in the reactive chain as you can so that if someone changes the color scheme, you're not refiring all the reactives that would do data manipulations that don't need to be redone. You wanna just re-skin the graphs, basically. And it's tedious to do that after you do it the other way, so you wanna make sure to do that early on.
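That pattern can be sketched like this; `get_survey_data()` and the palette values are hypothetical stand-ins. The key point is that the palette input is read only inside the render, so switching palettes re-skins the plot without re-firing the upstream data reactives:

```r
library(shiny)

# Illustrative palettes; the second is a color-blind-friendly alternative.
palettes <- list(
  default    = c(bad = "#d9534f", ok = "#f0ad4e", good = "#5cb85c"),
  accessible = c(bad = "#7b3294", ok = "#c2a5cf", good = "#008837")
)

server <- function(input, output, session) {
  # Heavy data prep lives here and does NOT read input$palette,
  # so changing the color scheme never re-fires it.
  scored <- reactive({
    df <- get_survey_data(input$facility)   # hypothetical data-loading helper
    df$band <- cut(df$score, breaks = c(-Inf, 40, 70, Inf),
                   labels = c("bad", "ok", "good"))
    df
  })

  # Only the render reads the palette: a scheme change just redraws the plot.
  output$scores <- renderPlot({
    df <- scored()
    barplot(df$score, names.arg = df$group,
            col = palettes[[input$palette]][as.character(df$band)])
  })
}
```

Retrofitting this separation after the palette is threaded through the data reactives is the tedious rework he's warning about.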
So the first thing we show people are some key numbers that correspond to important things that VA leaders care about. The response rate, obviously, is important because you need to know how representative your data are. Data sharing and data use refer to the extent that people at that organization say they've seen their data and that it's been used to improve their day-to-day work life.
So if you do some back-of-the-envelope math, the VA score on this is 42%. So we're talking about 200,000 people who say that this specific survey has made improvements and has led to improvements in how they are able to provide their service to veterans. Everything from getting pharmacy medications turned around quicker from prescription to administration, lots of employee wellness activities, lots of governance to do with nursing groups. So the places that groups go once they start talking about how they feel about their work and what can be done better are usually really, really impactful. And that's why we try to push the data down as low in the organization as we can.
Best places to work is an overall job satisfaction measure. So each of these tiles has a color that corresponds to the valence of that score. And they also have a click event that shows a drill down in a few different ways. And this is one of the things we try to do that's consistent across the different data displays we have. There's a quick takeaway that you can get just at a glance, and then there's more that you can get one level down, sometimes two, if you really want to. So it's great for a briefing because you can kind of read the room and see where people want to spend their time.
So in this case, we have scores for the facility, its region, the entire healthcare sector of VA, and then up to VA overall, just for some context, for this year and for last year. So everything we show has some comparison data, and everything we show also has the change score since last year.
Peers shows how the site stacks up against other sites in its region. So these are other VA facilities in the same region as Boston. So people often want to know what rank they are, how they stack up. This gets those questions out of the way early on, so they can focus on the within-facility data that's usually more impactful.
Direct reports is one level down in the org chart. And I should say, too, all these tables and graphs are sorted by default. So there's a useful sort based on the scale, so you don't have to kind of orient yourself to where the high scores are, or the low scores. They sort automatically. And that's a small thing, but when you have so much data, the usability gain you get from that is really important.
The colors correspond to statistical testing of groups. So green is better than a comparison group or improved since last year. Red, of course, is the reverse. And here we can look back to 2018 to get the trajectory.
So each of these has that same ability. I'll spare you the details of going through each one, but that's the UI piece here. Each of these also has a breakout that corresponds to the org chart. So response rate's chosen by default, but I'll select best places to work. So every organization, as I mentioned, is mapped all the way down to the very small team level, five, 10, 15 people. So this is a very intuitive display for people, especially for executives, because it matches the schema of the organization that they already have.
The color, of course, corresponds to the value of the score. There's a scale up here with VA for reference. Again, these numbers, just to reiterate, these numbers are fictitious, but representative of the kind of variance that you see. So they stop at two levels down by default, just because hospitals are big places. But if you want to see everything, then you can uncheck that quick map toggle. And now there's a lot, because again, it's a big organization, but you can get a quick visual triage of the entire organization.
So you can imagine as a director, as a service chief, you can really start to see just quickly where some of the hotspots and bright spots are within your organization. The additional layer of detail here is accessed by clicking on the dot. So if I wanted to click on, I'm just picking a group at random here, I can see a drill down into their specific data and also a scrambled supervisor email here as mapped locally. Easy to follow up with people this way.
Q&A
I did see a question. I'm trying to watch the chat, but there are lots of questions. I saw a question about the URL of the site. This is internal to VA, so nobody outside of VA could access it. That's why there's no URL available.
I can help ask some of the questions too, Ryan. Yeah, please. Okay, awesome. Yeah, I was just popping in there. I didn't want to interrupt the flow though. One of the questions that people asked on Slido was: I'm new to R and RStudio, and my employer would like their data source refreshed daily and used in their scripts. Is Shiny the recommended method?
So we don't do a lot of that, honestly, because we have sort of large single efforts versus kind of continuous incoming data. I've seen really effective use cases for that. I think it depends a lot on the way the data are stored and to the extent that you can access it through some sort of API or database.
Another question that came up earlier was: is there any concern from IT regarding the use of R and Shiny, as the government traditionally uses SAS? So there is concern. There is inertia to overcome. That's been a process. You need, at least in my experience, executive sponsorship to be able to say why the ultimate outcome matters. Because a lot of IT departments aren't going to care about the details of this. They care about security. They care about maintainability. So if you can get somebody at a level that has sway with other IT leaders to make them care and see the value, then you can get more movement that way.
At least in VA, there's a really detailed process to go through to get approved to be an app and to be on, in our case, Azure. There's a federal Azure space that we're on. So we have to go through all that review, which is slow, but you do get there eventually.
As a follow-up to that, I love Hugh's comment in the chat. Hugh said, this is amazing. I work for VHA and this app is so much better than how we used to get AES results. You might need to help me on the acronyms here. But what has been your experience with R and VA?
Thank you for the feedback. It's been good, especially for local work. So using it on your desktop is not an issue. It takes an admin to install RStudio, but that's typically not a problem. To get on Azure and get connected and then get stood up that way is, I mentioned a pretty long process. It took a couple of years. There's a lot of contracting and IT and related functions that have to be on the same page. But I think it's doable. We certainly did it. It just did take a long time.
Can I ask you a follow-up to that? If somebody is listening now and starting that process off, what is something that you wish you knew about then or something that was really helpful to you through that process?
Something that is helpful that I wish I had done earlier is do what I suggest that people do and have an executive sponsor at a high level that comes in to clear some of the obstacles. It's like any bureaucracy, I think. There are a lot more people who can say no than can say yes. So you need to find that person who can really motivate people to work on this project on your behalf.
There are a lot more people who can say no than can say yes. So you need to find that person who can really motivate people to work on this project on your behalf.
Thank you. I'll save some of the other questions for a little bit later, so I'll let you get back to the talk. Thank you. I did see one that's easy to take: are there limitations on packages in VHA's Azure space, CRAN only or GitHub? We haven't run into any limitations yet. We try to stay with CRAN just for reliability reasons, but no issues there.
Engagement, burnout, and DE&I data
So next, we talk about employee engagement and burnout, and we do that in the same way and with the same kind of interface for both of those constructs, just, again, for kind of predictability reasons. So we talk about engagement in terms of the percent of the organization that's engaged versus mixed or kind of on the fence, you know, maybe engaged sometimes but not always versus disengaged. So the percents are easy to see here as a bar graph, and then there's always a change score toggle that's a plus minus.
The VA's dots are for reference again. So if you're the director of Boston or whatever facility, you can easily see where you fall against VA itself. And then for every bar graph, the one layer down here is you can click on the bar and the little cursor kind of turns into a hand to kind of cue you into that. You can click on the bar and see a drill down in a few different ways on that specific construct. So for disengagement here, for example, we see it by supervisory level from most disengaged to least.
By service, a lot of variance here. Again, fake data, the little banner at the top gives that away, but you do see, you know, 40, 50, sometimes more points of variance at the service level. Work group, again, those small teams of 5, 10, 15 people. And you see the most variance here because people who are in the same occupation don't all have the same supervisor. And so that's why there's usually more variance at the work group level. There's a supervisor effect, basically. And then info will just have background on that specific construct.
So this is nice because we've got, you know, maybe 15, 20 people doing these briefings, and some people are more or less familiar with some of the details of the measures. So it's a great reference for us, but also for our executive audience because we leave this app up all year, and there's a lot of traffic throughout the year. People come back to it. And so you can easily remind yourself what the details of that measure are.
For engagement and for burnout, we also have hotspot and bright spot maps. So these are like those maps you saw a minute ago, but now we pull specific groups out that have either a concerning score, in the case of a hotspot, or a good score in the case of a bright spot. So these are groups where disengagement is at least 50%. So you get that quick kind of triage, and again, capitalizing on the schema of the org chart that people are already familiar with. Same thing for bright spots, engagement of 75%. So in this case, for example, it's useful to see the org chart display because if you're the chief of social work, you can see you've got three groups that meet this threshold.
So burnout is next, and it's the same kind of display for predictability reasons. Here we talk about the percent of the organization that has zero, one, two, or three out of three symptoms of burnout at least once a week. So the more symptoms you have, of course, the more burned out you feel. So the bar graphs show the scores, the dots for reference for VA overall, the drill downs by clicking on the bar.
We have free text themes here as well. So in the survey that this data comes from, people can also leave free text comments, and we theme those and report them out verbatim as well. We bring in the themes here just as a kind of an idea of what's top of mind for people when they're prompted to say something about their workplace. So as you can probably imagine in healthcare, especially now, burnout is top of mind for a lot of folks. So you see that quantitatively and also qualitatively in the comments.
And then we have two questions, a lot of bright spots here in this fake set. We have two comments about COVID, or two questions about COVID rather. Extra professional stress because of COVID and extra personal stress because of COVID. And the scale is from none to extreme, so you can see the distribution there. And then I'll spare you the detail again, but you can click on the bar and see the same drill downs as you've seen so far.
Any questions before I jump to the next section? Yes, there are a few other questions coming in from YouTube and LinkedIn. One was: can you touch upon performance, given multiple users using the app simultaneously? So we have a 16-core, 64-gig server, and we can handle about 100 people at the same time. That's across a few applications though, and some are more RAM-heavy than this is, although this is still pretty heavy. There are huge datasets that go into this, and we try to cut them down and do as much pre-work as we can, but there's no way around some of it. So we do okay with 100 at that size server.
You can play with the settings in Connect for how eager versus reluctant it is to spin up a new session for incoming users. So you can have more people piled onto the same process, or you can have more processes. It takes some experimentation. There's also shinycannon. I think that's the search term you want to use to do some testing, but that's at least our experience.
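For that kind of load testing, the usual workflow is to record a typical session with the shinyloadtest R package and replay it concurrently with the shinycannon command-line tool; the URL, file names, and worker count below are placeholders:

```shell
# First, from an R console, record a representative user session:
#   shinyloadtest::record_session("https://connect.example.gov/exec-dashboard/")
# That writes recording.log. Then replay it with simulated concurrent users:
shinycannon recording.log https://connect.example.gov/exec-dashboard/ \
  --workers 25 --loaded-duration-minutes 5 --output-dir run1
```

The `run1` output directory can then be turned into an HTML latency report back in R with shinyloadtest.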
Another question that was over on Slido that I see has been upvoted a few times was, was the UI all done with bslib or did you also incorporate JavaScript and CSS? It's, the base is bslib and there's a lot of custom CSS. Yeah. Not so much JavaScript. I think the only script I can think of offhand is what toggles it to be full screen to remove the browser noise at the top. So not a lot of JavaScript, but a lot of CSS.
And this question I think was shared in a few different places, but it was: can you either share code with us or give general guidance on where to learn the click and drill-down interface function? Yeah, let me go back up here; I'll start at the top. These maps are using collapsibleTree; that's the package name. There's a Shiny binding that gives you the input, so you can use that to generate a modal that has the information for the node that was clicked on. So that's how we do it here.
For bar graphs, these are all done with Plotly, and similarly there's a click binding for Shiny that you can use to reference, for example, the row number of the dataset that corresponds to the bar that was clicked on. So it's a little finicky to set up, but once you get it, it's pretty reliable. That's how we do it.
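A rough sketch of both bindings he mentions (the data frames, column names, and IDs here are hypothetical): collapsibleTree exposes the clicked node via its `inputId` argument, and plotly exposes bar clicks via `event_data()`:

```r
library(shiny)
library(plotly)
library(collapsibleTree)

server <- function(input, output, session) {
  # collapsibleTree: inputId = "node" makes the clicked node's path
  # available as input$node, which can drive a drill-down modal.
  output$orgmap <- renderCollapsibleTree(
    collapsibleTree(org_df, hierarchy = c("service", "workgroup"),
                    inputId = "node")
  )
  observeEvent(input$node, {
    showModal(modalDialog(title = paste(unlist(input$node), collapse = " > ")))
  })

  # plotly: tag the plot with a source, register the click event,
  # then read the clicked bar's index from event_data().
  output$bars <- renderPlotly(
    plot_ly(score_df, x = ~group, y = ~score, type = "bar",
            source = "bars") |> event_register("plotly_click")
  )
  observeEvent(event_data("plotly_click", source = "bars"), {
    clicked <- event_data("plotly_click", source = "bars")
    row <- clicked$pointNumber + 1   # plotly indices are zero-based
    showModal(modalDialog(paste("Drill-down for", score_df$group[row])))
  })
}
```

The finicky part he alludes to is usually matching the `source` string between the plot and the `event_data()` call.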
I see somebody had asked if the session will be available later. And yes, right after the session, the recording goes up at the same YouTube live link.
How could I integrate an R Markdown file which could export more than one interactive plot at once in a shiny application? I don't know. Haven't done it. Best of luck though. We can, we could try and find some other resources too, or if anybody has ideas, feel free to share them in the chat.
And then one other, sorry, one other question. Eugene asked to clarify, how do you serve the dashboard? Is it on RStudio Connect or? Yeah, we, so we have a Connect instance that's on Azure. And so we just get a link for this page and send it out to everybody.
I see Matthew had a question too about, it's anonymous, but it can be attributed to the manager. Yeah. So the intent of this, and this takes a lot of work and sort of institutional familiarity to actually make it stick, but we've been doing this for, you know, 15 years or so now. So we've got some of that. The purpose of this is purely developmental. The only thing that's in performance plans has to do with using the data, not about getting a specific score or anything like that. So yeah, by mapping it on the org chart, there is like, you can see who supervises what group, but virtually no one uses it for that purpose because the data wouldn't be useful then if we did that. Like as soon as you create a standard that's going to be mandated or you create a bar that has to be cleared, then people find ways to clear it, even if that may not be how they actually feel.
Priorities, turnover, and just culture
So the next section is a different kind of display, just a table format here for DE&I data. The cool thing here, though, is the comparisons we do, not really the display itself. So we've got four DE&I questions, and those are linked above, but we use those and also three overall experience indicators: best places to work, engagement, and burnout. And for every demographic, we test the minority against the majority and then shade where the minority is lower than the majority. So if you see a lot of red, it means that in this hypothetical organization, the groups with red cells are reporting worse experiences on those topics than the majority category for their demographic.
So for disability status, for example, people who said they do have a disability, 21% discrimination versus 11% for people who say they don't have a disability. And there are two pages because we have so many demographics that would make too big of a table to show it at once, but as you can imagine, this is very interesting for executives to see. And there's also change scores, again, available here. And for those, the comparison is every group to itself last year rather than to its majority.
The next piece is about intersectionality because, of course, one demographic doesn't describe anybody very well. So here you can pick patterns of demographics and see how people feel about those same measures, but who meet multiple demographic conditions. Sorted so that it's high to low discrimination. And, of course, the coloring is a basic heat map scheme.
And finally, we have hotspots again here for those four questions. So for each of those questions, we can pull, again, those groups out of the org chart and see where some concerning scores may be. We don't do bright spots here because there's no way to know whether a group is being very equitable and inclusive and so on or whether it may just not be a diverse group in the first place.
Second to last data piece we have here, priorities and turnover intent. So priorities show what people in the group say they most want to work on over the next year. So we can see that prioritization, same click events as before, same thing for turnover intent, which, as you can imagine, if you're an executive, you want to know who's thinking of leaving and where you may have some staffing shortages, and then what's pushing people away. So the reasons people who said they want to leave gave for wanting to leave in the first place.
The last data piece here has to do with just culture. So if you're not familiar with that term, the idea is just that people need to feel, especially in safety-critical applications like healthcare, need to feel like they can speak up and say when something's not right, but also that there are standards and they're transparent and equitable and so on. So you want to be up here in this top right area, and you don't want to be in this off-diagonal. So this is a plotly scatterplot. The background colors, of course, just kind of cue you into where the desirable range of scores for these issues are. And you can change the questions to get different views. You can search and see, for example, all nursing groups at once if you wanted to. And then you can also toggle between work groups, which is the default, and between occupations, because, again, those are similar but specific.
And finally, there's a table below that has these same scores. But if you wanted to see the lowest groups all at once, you can drag a lasso, and the table updates to only show those groups, and then you can download this table easily to Excel. The tables are using the DT package, so you get the sortability and the downloading and so forth by default, or at least very easily through that interface.
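That lasso-to-table wiring can be sketched as follows (data and column names are hypothetical): plotly's `"plotly_selected"` event carries the selected point indices, and DT's Buttons extension supplies the Excel download:

```r
library(shiny)
library(plotly)
library(DT)

server <- function(input, output, session) {
  output$scatter <- renderPlotly(
    plot_ly(groups_df, x = ~speaking_up, y = ~standards,
            type = "scatter", mode = "markers", text = ~group,
            source = "jc") |>
      layout(dragmode = "lasso") |>
      event_register("plotly_selected")
  )

  # Table shows only lasso-selected points (or everything before a selection).
  output$tbl <- renderDT({
    sel <- event_data("plotly_selected", source = "jc")
    df <- if (is.null(sel)) groups_df else groups_df[sel$pointNumber + 1, ]
    datatable(df, extensions = "Buttons",
              options = list(dom = "Bfrtip", buttons = "excel"))
  })
}
```

The sorting and search box come with `datatable()` by default; only the export button needs the Buttons extension.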
Executives usually want to recognize groups that do well, so we make it easy to get the additional resources available here, or to see a table that's sorted automatically by default. And then finally, we have just a bunch of extra resources that people have gotten used to seeing in VA: extra reports, the data stores, another app that's on Connect that we made that makes it really easy to pull specific data in dataset form versus in report form, other dashboards and tools and videos and so on.
The bookmark button is great if you're using Connect (or shinyapps.io, I think, supports it also). This will give you a custom URL that brings the app back up in its current state.
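Shiny's built-in URL bookmarking is one standard way to get a link like that; a minimal sketch, with illustrative inputs:

```r
library(shiny)

# With bookmarking enabled, the UI must be a function of the request.
ui <- function(request) {
  fluidPage(
    selectInput("facility", "Facility", c("VA overall", "Boston")),
    bookmarkButton()   # produces a URL encoding the current input values
  )
}

server <- function(input, output, session) {}

# "url" mode stores state in the query string; no server-side storage needed.
# shinyApp(ui, server, enableBookmarking = "url")
```

Opening the generated URL restores the inputs, so a briefing view can be shared as a plain link.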