Resources

Integrating Shiny with Epic EHR | Matt Maloney | Data Science Hangout

ADD THE DATA SCIENCE HANGOUT TO YOUR CALENDAR HERE: https://pos.it/dsh - All are welcome! We'd love to see you!

We were recently joined by Matt Maloney, Director of Applied AI and Data Science at City of Hope, to chat about applying data science to cancer care operations, integrating open source data science tools like Shiny with Electronic Health Records (like Epic), and the evolving governance of generative AI in healthcare.

In this Hangout, we explore the technical and operational strategies behind integrating custom data science applications directly into clinical workflows. Matt discusses how his team moves beyond standalone tools by embedding Shiny apps and other solutions into Epic, allowing medical coders and providers to access predictions and summaries without leaving their primary software environment of choice. He also discusses the "build vs. buy" decision-making process as vendors release their own AI solutions, emphasizing the importance of validating external models against their specific patient population.

Resources mentioned in the video and Zoom chat:
City of Hope → https://www.cityofhope.org
Unity Health Toronto Customer Story → https://posit.co/about/customer-stories/unity-health-toronto/
pointblank (data validation package) → https://rstudio.github.io/pointblank/

If you didn't join live, one great discussion you missed from the Zoom chat was about *where* data science teams sit within community members' organizations, and whether they like it or not: specifically, the pros and cons of being housed within IT versus embedded inside business units. Participants debated access to infrastructure versus proximity to business stakeholders, with several sharing their own experiences of shifting between these departments (or between companies with different structures). Let us know below if you'd like to hear more about this topic!
► Subscribe to Our Channel Here: https://bit.ly/2TzgcOu

Follow Us Here:
Website: https://www.posit.co
Hangout: https://pos.it/dsh
The Lab: https://pos.it/dslab
LinkedIn: https://www.linkedin.com/company/posit-software
Bluesky: https://bsky.app/profile/posit.co

Thanks for hanging out with us!

Timestamps:
00:00 Introduction
03:37 "What does the data science function at City of Hope help with?"
08:52 "Tell us a little bit more about how you're integrating Posit with Epic"
16:08 "How do you handle the needs of privacy with the push to adopt AI?"
18:40 "How do you manage to stay abreast of technical advancements?"
22:45 "At what point do you hand off your data work to the software engineering team?"
27:23 "How much has development that involves LLMs and generative AI taken hold?"
30:38 "Does your team evaluate a lot of the things that Epic might be throwing your way?"
34:41 "How does Epic pass an encounter number or a patient ID to Posit Connect?"
35:57 "How does your team handle these nuanced pieces of clinical information?"
40:29 "Do the administrators appreciate the time that it takes to do things?"
44:22 "What happens in the academic division?"
46:10 "Do you have a piece of career advice for us?"

Jan 15, 2026
54 min

image: thumbnail.jpg

Transcript

This transcript was generated automatically and may contain errors.

Hey there, welcome to the Posit Data Science Hangout. I'm Libby Heeren, and this is a recording of our weekly community call that happens every Thursday at 12 p.m. U.S. Eastern Time. If you are not joining us live, you miss out on the amazing chat that's going on. So find the link in the description where you can add our call to your calendar and come hang out with the most supportive, friendly, and funny data community you'll ever experience.

Can't wait to see you there. I am so excited to introduce our featured leader today, Matt Maloney. He's the Director of Applied AI and Data Science at City of Hope. Matt, it's so nice to have you today. How are you doing?

Doing well. Glad to be here today.

Wonderful. It would be great if you could introduce yourself and tell us a little bit about what you do and something you like to do for fun.

Sure. So what do I do? I'm a Director, as Libby said, of Data Science at City of Hope. City of Hope is a large cancer treatment and research organization and has been recognized as one of the nation's top hospitals for cancer care. We have hospitals in a few locations: Los Angeles, Orange County, Phoenix, Atlanta, and Chicago. And I've been working as a data scientist, and now in a leadership position, there for about five years. My team specifically focuses on building data solutions that are more operations focused as opposed to research focused. There's a separate group we do work closely with that supports research. But I'd be focused on things like, for example, helping predict claim denials, or helping our philanthropy team find a solution for something they're managing.

So that's a little bit about what I do professionally. Like I said, I've been in health care now for about five years at City of Hope and before that University of Utah Health System, but have a background in finance and econ before that. Something I do for fun. I live in Fort Collins, Colorado with my wife and two small children. And whenever I can, I like to get outside. So climbing, hiking, mountain biking, snowboarding, you know, I really find that stuff helps me recharge. And I love being just outside whenever I can be. It's a beautiful day here with a lot of snow on the ground, actually.

I was just gonna ask, did you get snow? Because my friends in Illinois and Wisconsin certainly did.

Yeah, we got, I don't know, maybe six inches yesterday and it's stuck around because it's been cold. Yeah, well, you live in a great place to get outside. Fort Collins is fun and it's a college town, so it's nice and energetic. Well, I already saw somebody ask in the chat that a little bit more about City of Hope would be great. So if you could talk a little bit more about what City of Hope does, and what the data science function helps with, the kind of problems that you solve, if you could go in a little bit more depth there, that might give us some context.

City of Hope and the data science function

Sure, yeah. So the function of, well, starting with City of Hope, City of Hope is first and foremost cancer research and treatment. So we're not doing any clinical care that is not related to cancer. And then on the research side, cancer is the focus as well. And so what does that cover? That covers fundamental research on the research side. It's all kinds of cancer care for every type of patient and disease type. It's also clinical trials, right? Making sure that we're matching patients with the right clinical trials and getting them enrolled in the best care possible. And I think one of the core missions of the organization is not just providing world-class cancer care, but it's making it as accessible as possible to as many people, right? So it's not just getting the highest quality treatment, but also making it broadly accessible.

So what is the data science role at City of Hope? Data science is a broad term, and I know the role varies by organization, even across healthcare organizations; you'll have different data science teams doing different things. So we have a kind of centralized group of data scientists in this larger applied AI and data science department. And we're split into two teams. The first team is focused on research applications, so they tend to work very closely with the academic research teams at City of Hope. And then, sorry, I was distracted by the chat for a second. And then my team is really focused on development and solutions for our internal operational needs. So that could be on the clinical side. You can think of something like, you know, we have a proprietary model for predicting or assessing sepsis risk for patients who have received bone marrow transplants. So that would be more like what we would call clinical operations: we're trying to support an operational workflow with a product we developed and manage. And then more on the business side, you know, my team is also supporting requests around revenue cycle. So this may be, can we have a model that will help us optimize our billing? One of the earliest projects I worked on and still support is a forecasting system for our finance team.

And, you know, trying to think of new applications and other areas. Another example is, recently, we've been tasked with supporting our philanthropy team with using some LLM technology to help better understand and match gifts with research projects. So those are a few examples, I think, that can kind of create a vision of what our team is doing. Now, you might be hearing in this that the team I'm in is focused a lot on sort of production solutions that tend to come in the form of, like, software or, you know, something we would put into production that would run either as sort of an endpoint for inference from a model, or it might be something that we would set up as a daily cron job to deliver predictions to some people in a workflow somewhere. Or, in the case of the clinical models, you know, signal an alert in one of our EHR systems, right, when, for example, a model flags that a patient is at risk for sepsis.

Integrating Shiny with Epic EHR

I just wanted to say something about the EHR. And I think that's electronic health record. But I remember our team saying something about how you are integrating Posit with Epic, your EHR. And I thought that would be so cool to be able to share with other healthcare people on the call as well. Would you be able to tell us a little bit more about how you're doing that?

Yes. So Epic is, I'm sure many of you may be familiar with it, but it's the largest vendor for electronic medical record software, right? So when you go to visit your physician or whatever, and you see them working on their computer, there's a good chance that that's the Epic system they're working in, right? So a lot of the data my team works with is data that originates in the Epic system. You know, that's where information about patients gets put in. And that's also, importantly, where a lot of the workflows are, not just for providers, you know, care providers, physicians, nurses, etc., but also, more and more, some of our operational staff, because Epic doesn't just handle, you know, the medical record side of things for us, but also handles a lot of the billing related work and processes. So we also have what we call our revenue cycle staff working in that environment and with that information.

And so one of the requests that was coming up is, you know, we would have standalone applications, a lot of them living in our Posit Connect instance. And so, you know, for example, this is a good motivating example: we have a tool that helps our coding team. So this is the team that, you know, a visit happens, and then, basically, it's recorded what treatments and procedures and diagnoses the patients have. And then that moves into not only the billing process, but also, you know, just making sure that we have a rigorous and compliant medical record for the patient. So there's a team that reviews this documentation. And we have a tool that assists them with that documentation. And it was living in Posit Connect. And, you know, one of their requests was: we do all our workflows in Epic, this would be a lot lower friction for us if we could embed it.

And Epic does have the capability of integrating software in a few different ways, right? If you're, you know, a software vendor that's trying to, say, develop something that you would want to be available in the marketplace, right, as a really premier software solution for other healthcare systems, you would use the SMART on FHIR framework, and you would register as an app developer. But for a lot of our cases, we were hoping there might be a middle ground short of becoming full blown software engineers. And so we've been working with the Posit team on nesting our applications in Epic. And, you know, passing a few parameters by setting it up as what they call an FDI record, if anyone cares. I don't actually even know what that acronym means. Maybe somebody else here knows. But you set up this record in their system of where you want this application to live. And then we've been working on some of the details of making sure that the integration goes smoothly. And we're still, you know, we still haven't completed that process, but we're getting pretty close. And it's very promising, because for us, it's a really rapid way to take something we've developed as data scientists, and move it into their preferred environment for their workflows, without necessarily having to engage with our software engineering team, or, you know, get outside of the scope of where we really feel comfortable.

it's a really rapid way to take something we've developed as data scientists, and move it into their preferred environment for their workflows, without necessarily having to engage with our software engineering team

So yeah, that's what we've been doing, as far as integrating Posit applications hosted on Posit Connect with our EHR (electronic health record) system.
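Mechanically, this kind of embedding typically comes down to Epic opening the Connect-hosted app at a URL with identifiers appended as query parameters, which the app then parses on launch. Here is a minimal sketch in Python; the parameter names PAT_ID and CSN are hypothetical stand-ins for whatever the FDI record is actually configured to pass, and an R Shiny app would do the equivalent with parseQueryString(session$clientData$url_search).

```python
from urllib.parse import parse_qs, urlparse

def launch_context(url: str) -> dict:
    """Pull patient/encounter context out of the embedded app's launch URL.

    PAT_ID and CSN are hypothetical parameter names; the real names are
    whatever the Epic FDI record is configured to append.
    """
    params = parse_qs(urlparse(url).query)
    return {
        "patient_id": params.get("PAT_ID", [None])[0],
        "encounter_id": params.get("CSN", [None])[0],
    }

# Example launch URL, as Epic might construct it for the embedded app
ctx = launch_context("https://connect.example.org/coding-app/?PAT_ID=Z123&CSN=456789")
print(ctx)  # {'patient_id': 'Z123', 'encounter_id': '456789'}
```

Reading the context this way is what lets the coder skip retyping identifiers: the app opens already scoped to the encounter they were viewing in Epic.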

That is awesome. So what I'm hearing is, and we're gonna hop to the Slido questions in just one second, what I'm hearing is you have end users who are medical coders, who are getting to stay in their system, but the information is getting passed to the applications that your data science team is building, so that they don't have to, like, use a Shiny app if they're not familiar with it; they can use the applications that they're used to using.

Yeah, so, well, the Shiny app is nested there, but they don't have to, for example, open a separate window, or, you know, type in whatever arguments are needed manually, right? Some of that information can be passed, which, for some workflows, is more critical than others. But in some of our other projects, our customers are providers, where their workflows and time are very, very valuable.

And providers being like doctors and clinicians.

Yeah, yes, maybe a little bit of jargon. Yeah, we're talking about physicians, physician assistants, nurse practitioners. And the goal here is to eventually take some of the work we're doing that's more oriented towards providers. And we've already integrated some of our provider facing work through a different EHR integration method, which they call Best Practice Advisory alerts. So we do have other ways of integrating some of our work into the EHR system. But this would be another option for us if we can, you know, get it to where we want it to be for provider workflows.

AI governance and generative AI use cases

But we have so many questions in Slido. Let's get to the Slido questions. We might hop around in topics, but I would love to start with Noor's question, because your job title has AI in it. And we're in an AI era, and Noor has an AI question.

Sure. Hello, I apologize for my camera being off. Hi, Matt. Nice to meet you. So, just fair warning, I am more on the, I think they call it, bearish side of AI, also known as fancy ML. But with the aggressive push to adopt what people call AI into their workflows, how do you handle the needs of privacy, like PHI, PII, and so forth? I know you're more on the, I think you mentioned, administrative side as opposed to research side, but there are still, like, privacy concerns in billing and so forth. How exactly are you handling that? Or how are you handling that? Are you just like, no, we're not dealing with AI right now because we have a nice steady workflow and so forth?

Sure, that's a great question. So there are a couple of things that are really important. The first is we have set up at City of Hope a really strong AI governance framework and committee, so that every use of AI goes through a rigorous review process, where a lot of different concerns, from privacy and HIPAA compliance to various ethics considerations, are reviewed. The second thing is we've worked really closely with Microsoft. We're almost exclusively using OpenAI at this point, because of their partnership with Microsoft and because of our existing partnership with Microsoft. So they've been able to ensure that none of the information being passed to any outside endpoints can be misused or fall out of compliance with our regulatory requirements.

Very, very cool. Thank you, Noor, for the great question. I have another question here from Arsenis that would be great to ask. Arsenis, are you available to ask that one live?

Hey, Matt, thank you so much for chatting with us today. My question is a little bit about your role, and you kind of described AI at City of Hope, or not AI, but data science at City of Hope. You all seem to have a pretty broad scope in terms of the types of things you all do, from accounting and finance to research to clinical things. And so I'm wondering, how is it that you manage to kind of try to stay abreast of the technical elements of all of those different things, because they are quite different? How do you handle all that in discussions with your various teams?

Yeah, that's a great question. I think that's one of the most, if not the most, challenging things about the job right now, because it seems like the scope of our role is growing very quickly, right? At larger organizations, I think, a lot of specialization has already happened, right? You have your data scientist, you have your AI engineer, you have your MLOps person, right? And so one of the biggest challenges for me as a leader at this sort of midsize organization is making sure that we're able to not get overwhelmed as the scope of what might be under the data science umbrella, or the scope of our department, grows, right?

Because it used to be, you know, know some stats and a little bit of ML, and then it grew. And now you've got to know DevOps. And now you've got to know LLM frameworks. And the answer is, you know, I don't really have an answer, because this is one of the most challenging things about my job. So I'll talk to a couple of things we're doing to try to address this. One is, we are moving towards a little greater specialization. Well, you know, we haven't done a round of hiring in a few years, but we're moving into one right now where we're going to be identifying more specialized ML engineering and AI engineering roles. So I can have, you know, a data scientist on my team who has expertise in a certain area of our data resources, maybe has a really strong science background or statistics background, but doesn't have an AI engineering background. Now we can have somebody to complement this person's skill set, rather than, when I get an intake request, I've got to just, you know, use who I have. So that's one thing we're doing.

The second thing we're doing is, we're taking on a little bit more of a mature product management framework, so that we can make sure, whenever possible, we are able to partner with other teams for data engineering, in some cases UI/UX development, and DevOps. So we're not getting bogged down in something like, you know, deploying our cloud infrastructure for a project where it's not an efficient use of our team's time. So those are really the two things as we are moving towards greater role specialization. Also, I should say, carving out time for learning. You know, everyone on my team is a great lifelong learner, extremely capable; almost all of them have PhDs. But I've got to give them the time, right, to take the course or study to bring up their skills, because the skills and tools are constantly changing.

Handing off to software engineering

And there's a question that is a little bit of a follow up on this. And it's: at what point do you hand off your work to the software engineering team? I think you touched on that a little bit. There's a lot that AI slash data science folks can do, especially with things like Posit Connect. But at some point, those tools get handed off to the production teams, right?

Well, yes. And, you know, when do we hand it off? This is a good question. This is another thing that's still something I'm figuring out; I can't sit here and say I've figured it all out. Some data science teams don't hand it off at all. They're like, yeah, we don't give it to IT or anybody, we handle it. But a lot of them do. We are in IT. Yeah, that's another thing that's a little different about our organization: we sit in IT, and I don't actually know what the norm might be across the industry, you know, being in IT versus outside of IT for these roles. I think it varies, but we're within IT.

And I think really where we are right now is, sort of, we hand it off when, like, a product manager or the customer says that whatever we deploy in Posit Connect, for example, is not good enough. So I draw the boundary on, like, you know, for a lot of the things we're doing, what we're able to deploy in, say, Posit Connect is sufficient, or delivering a report, you know, on a cron schedule is sufficient. But really, the answer to that is guided by either our customers in the organization or the product team, when they're like, look, we need a better UI/UX, we need better performance as far as latency. Something like that is where we would be like, OK, we're going to pass this on to the software engineering team. And sometimes it's sort of a collaborative hybrid approach. Like, you know, we have a project now where we've got some services deployed as FastAPIs on Posit Connect, and we were handling the UI/UX, and the customers were kind of like, you know, we kind of want a little better UI than I think what you can offer. And instead of, like, doubling down and letting my team work on that more, I said, you know what, this is time to hand it off. So they're going to start working on building the UI, and they may end up migrating those APIs as well. I'm not sure, but it kind of just depends on when we decide we've met the customer's needs.
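For a sense of what the hybrid handoff leaves on the data science side, here is a rough, framework-agnostic sketch of the core logic behind one of those inference services. Everything in it is invented for illustration: the feature names, the weights, and the 0.5 flag threshold. A real service would load a trained, internally validated model, and the handler would be wrapped in a FastAPI route before deploying to Posit Connect.

```python
import json
import math

def sepsis_risk_score(features: dict) -> float:
    """Toy logistic-style score; stands in for a trained model artifact."""
    z = 0.04 * features["heart_rate"] + 0.8 * features["lactate"] - 6.0
    return 1.0 / (1.0 + math.exp(-z))

def predict_handler(body: str) -> str:
    """Core endpoint logic: JSON in, JSON out. In production this would
    sit behind a FastAPI route on Posit Connect; downstream, Epic could
    surface the flag, for example as a Best Practice Advisory alert."""
    features = json.loads(body)
    score = sepsis_risk_score(features)
    return json.dumps({"risk": round(score, 3), "flag": score > 0.5})

print(predict_handler('{"heart_rate": 110, "lactate": 4.0}'))
```

Keeping the scoring logic in a plain function like this also makes the later migration he describes cheaper: the software engineering team can re-wrap it without touching the model code.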

All right, cool. Well, we're getting lots and lots and lots of feedback in the chat about being inside or outside of IT. And I wanted to call out Eric Nantz's hot take here, which is: if you follow software engineering best practices within data science teams, you may not need to hand things off. It's a common misconception to this day that something prototyped in Shiny, or artifacts in general on Posit Connect, can't stand the test of scaling and optimal UX, but they absolutely can. Hear, hear. I think that that's great. I don't always have the bandwidth in my brain to become a pseudo software engineer and develop all those software engineering best practices with all of the, like, other stuff, stats and ML and stuff, that I already have in my brain. But if you can do it, or just work really closely with them to understand what they need as best practices, it can be great.

Generative AI applications at City of Hope

Yeah, great, great question. So, really, in the last two years, the percentage of our work that has focused on applications of generative AI has really grown, right? It's a huge percentage of what we do now. And I think a lot of that is just driven by demand, but also, you know, finding the right opportunities. And to give you a few examples of where we've had success: clinical documentation review and analysis is huge. So sometimes that's to support coders, sometimes that's to support clinicians.

A big one we've worked on at City of Hope is summarization of documentation for new patients that come in. So being able to take all the documents we get from external sources, right, and in some cases doing the OCR, converting them, and then generating summaries for the providers. This is one of what we've branded as our Hope LLM products: new patient summarization. So that's another example. Clinical trial matching and feasibility assessment has been a big one, right? Where we're able to take, say, a document that outlines the inclusion and exclusion criteria for a clinical trial, and sometimes those criteria are very complex and sitting in a text document, and then, in an automated way, determine for an individual patient or a patient population whether these inclusion and exclusion criteria are met, given all of the clinical documentation we have for them.
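The structured end of that matching pipeline can be pictured as criteria compiled down to predicates over a patient record; extracting something like this from the free-text protocol is the part the LLM helps with. The criteria, field names, and patient below are all invented for illustration.

```python
# Illustrative criteria for a hypothetical trial; real protocol criteria
# are free text, far more numerous, and far more nuanced.
INCLUSION = {
    "age 18 or older": lambda p: p["age"] >= 18,
    "measurable disease": lambda p: p["measurable_disease"],
}
EXCLUSION = {
    "prior bone marrow transplant": lambda p: p["prior_bmt"],
}

def screen(patient: dict) -> dict:
    """Report which criteria fail, so a coordinator can review each one."""
    failed_inclusion = [name for name, met in INCLUSION.items() if not met(patient)]
    met_exclusion = [name for name, hit in EXCLUSION.items() if hit(patient)]
    return {
        "eligible": not failed_inclusion and not met_exclusion,
        "failed_inclusion": failed_inclusion,
        "met_exclusion": met_exclusion,
    }

patient = {"age": 62, "measurable_disease": True, "prior_bmt": False}
print(screen(patient)["eligible"])  # True
```

Surfacing the failed criteria by name, rather than a bare yes/no, matters here: feasibility assessment is a human review step, not an automated enrollment decision.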

So those are a few examples. And something we're moving into a little bit now is, you know, maybe looking at payer documentation, so, like, payer contracts, things like that. That's something we haven't done a lot of yet, but we're looking towards doing in the next maybe three months or so.

Build vs. buy: evaluating Epic's own AI solutions

I do have one quick follow up around some of these, like a lot of these are things that vendors actually also provide. So do you all use or does your team evaluate a lot of the things that Epic might be throwing your way for, for example, document summarization or their predictive models too?

Yes, yes. So that's a great question. And that's something where there's always this sort of difficult build versus buy decision that is complicated by how quickly the landscape is changing. So a lot of the things we've built, you know, like some of the analysis of clinical documentation: when we started that work and rolled out our first version, it wasn't available from Epic yet. Some of the Epic generative AI solutions are just being rolled out now. And so moving forward, we're going to apply the same governance framework we have had for predictive models in the past, which is: we don't want to build something ourselves that Epic has already solved, for example. So what we'll do, as part of the AI governance process I mentioned before, is our first role is to sort of evaluate if that solution is sufficient for our needs. You know, does the Epic solution have adequate performance to meet our internal requirements for whatever this use case is?

And so one of the roles of my team, especially those who are focused on the clinical side, is to do some validation work on those models. And we've done this before; for example, I mentioned the sepsis model. Step one in that project was to validate whatever the Epic model was at the time and see if it met our requirements. If it doesn't, maybe we have a case for developing our own solution, and we revisit in a few years if they release a new version. And I think moving forward, as vendors like Epic roll out more and more of their own solutions, this validation work is going to become a lot bigger part of our job, you know, to be frank. Because I don't think we're in a place where we're comfortable, even if it's coming from Epic, just rolling it out without some kind of internal validation. So as they roll out more and more solutions, which is happening more rapidly, I think it's going to become a bigger part of our responsibility, and a bigger part of our role is going to be doing the validation work. And then, of course, developing our own solutions where whatever they have doesn't meet the requirements for our specific organization.
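At its core, that validation step is confusion-matrix arithmetic over locally labeled outcomes, checked against internal acceptance thresholds. A minimal sketch; the 0.80/0.70 thresholds and the toy data are illustrative, not City of Hope's actual requirements.

```python
def validate_flags(y_true, y_pred, min_sensitivity=0.80, min_specificity=0.70):
    """Compare a vendor model's binary flags against locally labeled
    outcomes; thresholds are illustrative acceptance criteria."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(1 for t, p in pairs if t and p)
    fn = sum(1 for t, p in pairs if t and not p)
    tn = sum(1 for t, p in pairs if not t and not p)
    fp = sum(1 for t, p in pairs if not t and p)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return {
        "sensitivity": sensitivity,
        "specificity": specificity,
        "passes": sensitivity >= min_sensitivity and specificity >= min_specificity,
    }

# Toy chart review: 4 true cases and 6 non-cases, with the vendor's flags
report = validate_flags(y_true=[1, 1, 1, 1, 0, 0, 0, 0, 0, 0],
                        y_pred=[1, 1, 1, 0, 0, 0, 0, 0, 0, 1])
print(report)  # sensitivity 0.75 misses the 0.80 bar, so "passes" is False
```

The same report backs either outcome he describes: sign off on the vendor model, or open a build case and revisit when the vendor ships a new version.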

And, you know, an area where this has come up in the past is just that we have a very specialized patient population, so sometimes our needs are a little different. So, for example, I don't see us using a vendor solution for some of the work we're doing around clinical trials, because we have a lot of proprietary data and we have a specialized patient population. But, you know, for stuff like drafting denial response letters for billing, that's something where I think, you know, we could probably validate it really quickly and be like, yeah, the vendor solution is totally sufficient for our needs.

Data quality and clinical subject matter experts

Yeah, that's a great question. Let me sit on that for a second. I think that the short answer is, you know, we use a lot of different tools and we spend a lot of time on data engineering and cleaning up the data. You were bringing up blood pressure information. And I had one of our Ph.D. data scientists working for six months just cleaning up some of the data formatting, right? Sometimes you get different units of measurement if you're talking about dosages and things like that. And we still spend a ton of time with just old school getting in the data, cleaning it up, you know, working with data engineers. Because, you know, I think at the essence of your question is a little bit of like garbage in, garbage out, right?

And one of the things I'm very, I don't know, adamant about is we have a whole suite of tools we work with. And almost all of the solutions we build, it's not just, you know, a generative AI model you just throw on top of your data. It's basically a combination of a lot of different efforts and tools. And a lot of the time that's, you know, doing a lot of work on cleaning up the data. And not only just cleaning it up, but we work with a couple of physicians who give us feedback along the way. So one thing that I think is often overlooked is how much of a time commitment is required from some kind of subject matter expert like a physician or a clinical documentation specialist to help us through that process of cleaning up the data because we don't have the clinical expertise.

one thing that I think is often overlooked is how much of a time commitment is required from some kind of subject matter expert like a physician or a clinical documentation specialist to help us through that process of cleaning up the data because we don't have the clinical expertise.

So one of the things I try to do early on in a project is say: okay, you all need a solution for your workflow, so this is going to look like a partnership where we're going to have to work together, and we're going to need some of your time to validate, say, hey, have we cleaned up the labs data in a way that's consistent with your understanding of what's clinically relevant, right? And then, you know, sometimes that's a much bigger part of the project than writing the code to make the predictions on top of those data sources.
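Much of that cleanup time goes to mundane normalization of exactly the kind he mentions: dosages recorded in mixed units, mapped to one canonical unit, with anything unreviewed refused rather than guessed at, so it gets routed to a subject matter expert. The conversion table below is an illustrative subset.

```python
# Conversion factors to a canonical unit (milligrams). Illustrative
# subset only; real dosage data has many more units and spellings.
TO_MG = {"mg": 1.0, "g": 1000.0, "mcg": 0.001, "ug": 0.001}

def normalize_dose(value: float, unit: str) -> float:
    """Normalize a dosage to milligrams. Unknown units raise, so that
    unreviewed data is flagged for an SME instead of silently guessed."""
    unit = unit.strip().lower()
    if unit not in TO_MG:
        raise ValueError(f"unreviewed unit: {unit!r}")
    return value * TO_MG[unit]

print(normalize_dose(0.5, "g"))  # 500.0
```

The refuse-don't-guess behavior is the point: each ValueError becomes a question for the physician or documentation specialist, which is where the SME time commitment he describes actually goes.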

So, you know, I don't know if I fully answered your question, but I think that we take very, very seriously that, whether we're talking about a traditional predictive model or a generative AI solution, this is one tool that's usually only a limited part of a complete solution. And one of the things is making sure that the data that goes in is of high quality. And unfortunately, there's no magic for that. That's still a time consuming process that requires time from us and requires time from the subject matter experts. And that's just the way it is. I think that, you know, we could sit here and say that AI is going to solve that problem. But no, there's still a ton of work to be done.

Do the administrators appreciate the time that it takes? Like, do the people who run the hospital or the facility at City of Hope include that in their budgeting? Because there's another constraint that we run against, which is the healthcare administrator role and whether or not they recognize that as value added.

Yeah, that's another one I don't have a full answer to, because I think it's a universal challenge in healthcare. How I'll speak to it is that we've gotten more mature, as I said, about stating at the beginning what's going to be required. Let me see if this aligns with what you're saying: basically, we have a situation where the organization wants an improvement in, say, how the physicians are documenting. But that's going to require physicians to take time to work with us, to validate the results and potentially help us understand the data, and they're not given the time or any incentive structure to actually carve out that time. Is that consistent with the problem you're talking about?

OK, so I love thinking about this stuff, because my background is economics; that's what I did in grad school, so I'm trained as an economist. I'm always asking whether the incentives of the people who need to help us, or who use these tools, are aligned with the broader incentives of the organization, and if not, how we align them. And I'm now very assertive about having that conversation early. This isn't about clinicians, but here's a recent example: I found out that some of the clinical documentation specialists who were supposed to be our SMEs for validating our work said, we have this system that tracks our time, and we don't have any code to log our time against this validation work. I immediately said, well, that's a problem we have to solve before we do any more development work here. The overall organization wants this problem solved, but you have users saying they can't log their time against it. We've got to resolve that before we do any more development, because otherwise the incentives aren't aligned.

Career advice and closing

Well, we'd like to ask everybody if they have a piece of career advice that they have really loved or has been really impactful for them. I don't know if there's something that you have on the top of your head or that you think about.

I think there's a lot of value in finding an industry that you're interested in and getting a foot in the door, even if it's not in the ideal role. Because as a data scientist, one thing that becomes really valuable is domain knowledge of a specific industry, or even knowledge of the specific data that your organization has. Once you get your foot in the door and become an expert, there's a lot of room to then define your own role within the organization. At least that's been my experience.

I don't want to say this applies generally across the board, because maybe some organizations have more rigorous role definitions than ours does. But I've seen people come in, find a problem they're interested in, really excel at it, and all of a sudden, especially because it's a growing area, they've carved out an entirely different job definition and career path for themselves. And that hasn't just been my experience at City of Hope. I remember my first job out of undergrad, going way back to 2010 when the job market was still a little weak. I got a job at a small financial firm where I thought I was going to be doing a lot of administrative work. In the first couple of months they said they had some problems with their spreadsheets, I learned VBA, and all of a sudden I had a totally different job.

I don't exactly want to call it a backdoor, but I've found that roles in organizations aren't as rigorously defined as they might look from the outside when you see a job posting. So if you get in the door and you have something you're really passionate about, there's often a lot of flexibility to shape your role.

All right, everybody, we are two minutes from the top of the hour, so I think we will have to wrap it up and say goodbye to Matt. Thank you so much, Matt, for this amazing conversation. I hope you had a good time.

Yeah, this was fun. Thank you for organizing and moderating. And anyone in here, feel free to reach out to me on LinkedIn or whatever method makes sense, because I'd love to get to know more people who are interested in similar things or encountering similar challenges in this industry.

Let's do it. Thank you, everybody, for hanging out. If you'd like to save the chat, there are three dots at the top of the chat box that you can click to save it. Also join the Hangout channel on our Discord server at pos.it/discord. The link is only open for the next few hours, so if you're watching this recording later, that link probably won't work, but don't worry about it. Join us and have a really good time following up. I've tagged some people in our Data Science Hangout channel who I know were having a similar conversation, so we can get in there and follow up. Thank you so much for joining us, everybody, and we would love to see you next week. Next week we have Isabel Zimmerman and Davis Vaughn, two software engineers from Posit who work on all kinds of fun things. Isabel works on Positron, and Davis does a lot of work on tidyverse packages, which is super fun to ask questions about. So we look forward to seeing you there.