
Keynote Speaker - Isabel Zimmerman - PyData Boston 2025
Isabel is a Senior Software Engineer at Posit, PBC
image: thumbnail.jpg
Transcript
This transcript was generated automatically and may contain errors.
So yeah, we'll have our first keynote. So Isabel Zimmerman is a senior software engineer at Posit and a major contributor to open source and champion of open source. We're really lucky and excited to have her. So Isabel, let's have a round of applause.
I did data science for a while. I transitioned into doing data science tool making because I just really love tools. I was actually the first full-time Python open source hire at Posit. And this was at a time when this company was still called RStudio. We actually already had people who were building Python stuff, but I was the first person who was dedicated full-time to solely novel open source Python work.
I started out building MLOps packages, but nowadays I'm spending most of my time building the Python experience in different IDEs. And I have to say, my love of tools is not really constrained just to computers. If you haven't noticed, there's a giant magical goose on this slide. I am a huge fantasy reader. I'm actually flying to Boston from Salt Lake City, Utah for a fantasy conference.
And one of my favorite fantasy books, there's this magical earring, no spoilers, but there's this magical earring that's passed down from generation to generation. And it turns out that this small little thing can beat the big evil bad guy at the end of the book. And I think there's something really special about having something in hand, even if it doesn't feel like the craziest, most magical tool, that can help solve a problem.
And so I assume you're here not to hear about my favorite fantasy books, but probably because you are either a Python user or someone who uses and loves data or some Venn diagram overlap of these two. And maybe you're thinking, why do I even care about the tools I use? This is a whole talk about them. Why dedicate all of this time and energy?
And so just off the bat, I consider a tool anything that you can use to carry out a particular function. This talk is about tool users and tool builders. And maybe you identify with one of these identities more than the other. And they're not mutually exclusive. But if you think about caring about using tools, maybe you feel more strongly about this identity. You know, I'm someone who consumes these things.
If we think of this as like some sort of fantasy story, the villain in our tool user persona would be the inertia of only using the tools that you already know, because, you know, it's easier. You already know them. And so I'd like to consider that using tools and picking tools thoughtfully is very important. And choosing the right one will make your life way easier.
So, probably like most of you, when I got out into this data science world, I started using something called Pandas. This is a data frame library; it's what I was taught in school. It's one of the most popular libraries out there for Python. And I thought it was a great place to start. And it served me very well for a long time. Then the data I was working with started to get bigger. And Pandas started to feel a little bit slower, a little bit clunkier. So I ended up changing over to something called Polars. The syntax is very similar. But it's built on Rust. And it's a little bit faster. And this was a simple switch. You know, a lot of the API is the same. But it made my workflows feel a little bit faster.
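To give a flavor of how simple the switch described above can be, here is a minimal sketch of the same aggregation in both libraries. The data is made up for illustration; the Polars lines are commented out so the snippet runs with pandas alone, but the shape of the code is nearly identical.

```python
# A small pandas aggregation (hypothetical book data).
import pandas as pd

df = pd.DataFrame({"genre": ["fantasy", "fantasy", "sci-fi"],
                   "pages": [350, 420, 280]})
mean_pages = df.groupby("genre")["pages"].mean().reset_index()

# Roughly the same thing in Polars (uncomment if polars is installed):
# import polars as pl
# df_pl = pl.DataFrame({"genre": ["fantasy", "fantasy", "sci-fi"],
#                       "pages": [350, 420, 280]})
# mean_pages_pl = df_pl.group_by("genre").agg(pl.col("pages").mean())

print(mean_pages)
```

The column names and method calls line up closely enough that for many workflows the migration really is a find-and-replace plus a speed boost.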
Tool builders and tool users
And everyone sits somewhere on the spectrum of building and using tools. And maybe you don't identify immediately as somebody who is building tools. But I'd like to convince you otherwise. One, I think it's important for everyone to think of themselves as at least a little bit of a tool builder. By building a tool for yourself, you can speed up your own work. And speed up not only your work today, but your work for future you. The barrier to building a tool has never been lower with the help of AI. You know, your Claude, your Copilot, your ChatGPT is able to help you build Python packages if you don't know where to start. And this tool doesn't have to be something that's widely shared into the world. It can be something that's just for yourself.
As part of my own personal book nerdiness, I actually rebind books as well. And I have to do specific measurements for the cover, the back, the spine, and all of these little hinges in between. And I like a very specific 7 millimeter gutter. And all of the tools out there are all 5 millimeter. So I made this silly little literal piece of plastic to help me make my own books. It's something only I will ever use. But it helps me every day. Well, maybe not every day, but every day that I'm doing this hobby, it makes my books better.
So your tools don't have to be anything fancy. And also you might already be building them. Maybe you're someone like me who is fat-fingering git push all the time, and now has an alias on their computer so that when you write gti push, it actually runs git instead. You know, if you're writing code, you're building something. A dashboard or a report, that's a tool. It's a tool used to make business decisions. Maybe you have a function that you've been Slacking back and forth with your co-workers. That's a tool that's been helping streamline your data workflows. Or maybe something else entirely. So these are all tools you might already be making.
And when I transitioned from like a data scientist persona to a tool builder persona, I thought that I knew what mattered. I had already filled my bag with lots of tools. You know, I could write clean code. I could design APIs. I knew how to optimize performance. And I thought, you know, if my code works, people will use it.
And we actually do a fair amount of user testing at Posit, where we will watch someone use tools. And I have watched truly like the most brilliant people that I look up to so much get confused about a tool because a button is in the wrong spot. Or, you know, you press refresh and it only refreshes like the first layer of your namespace and not the whole namespace. Or just something. And people get stuck. That's a part of life. Tools are imperfect because they're built by imperfect people and you are an imperfect user. And I think that is like the most beautiful thing to think about. Because if we are both imperfect on kind of both sides of this coin, that means that there are skills there that we can build and that we can learn.
Tools are imperfect because they're built by imperfect people and you are an imperfect user. And I think that is like the most beautiful thing to think about.
Hard skills and soft skills of tools
So, if you're writing your resume, we often think about things on a scale where there's hard skills and then there's soft skills. You know, hard skills are things like, I know how to use scikit-learn, I write C++ code. And soft skills are like, I can lead a team, or I'm really good at writing documentation. And there is a lot of baggage around these terms that I want to address. You know, LinkedIn, it feels like, swaps every year. Half the time they say, all you need are hard skills, we don't care about soft skills anymore. And then it's, all you need are soft skills, we don't care about hard skills anymore. And I'm not interested today in really convincing you of one or the other, but more in thinking about this dichotomy in the ways that we're working.
And when I was thinking about this talk and like how to talk about tools, I realized that tools have these same hard and soft skills. So your hard skills and tools are these computer elements. Things like modularity, reproducibility, flexibility. Whereas the soft skills are more of these human elements of your tools. Things like knowing your user, understanding their needs, delighting in the small things, and explaining your work. And here's the thing. Your tools, to be really delightful tools that people love to use, have to have both these hard skills and the soft ones.
Modularity
All right, so we'll start with what a tool's hard skills are. With modularity. So when you're building a tool, a lot of times the first thing you have to think of is like, where are we starting from? And one of the first Python packages I was building is called Vetiver. And it's an MLOps framework. And there's this element of building APIs. And I had to decide, where are these APIs coming from? I could, of course, go spend all this time, research, build out my own REST API generation. But I realized there's so many fantastic community tools out there that I can leverage what's already been built in the open source community and be able to have this relentless focus on the MLOps side of the work.
So for tool builders, finding tools that work with what you already know is super important. And as a user, it's also really important to maybe understand what your tools are being built off of, to have these clear extension points. Tomorrow morning, one of my colleagues, Jules, is going to be giving a talk about how to build an extension package for different Python packages. I would love for 2026 to be like the year of Python extension packages. I want to see everyone making specialized tools for what they know. Because it makes your life a lot easier when something works exactly for you.
The other good thing about modularity is, if we think about LEGOs, you can connect them to themselves. So if you're building on a modular surface, you're able to have a tool that grows with your user. Like in the Vetiver case, you can do all these cool things with models with what I've added, but you're also able to make really complex APIs because of the basics of FastAPI.
OK, so this is a talk about tools. And it's only fair for me to eventually introduce the tool that I spend most of my day job on. And that's an IDE called Positron. So Positron is an IDE built specifically for data science. And if you're a VS Code user, it might look a little bit familiar. It is a VS Code fork. And it has different elements, like this Variables pane, this Plots pane, including a film strip on the side, and this built-in console, along with many other goodies that we've added to help emphasize this data science experience.
But we built off of the basics of Code OSS, which is this open source VS Code. And it's useful for us because we are able to, again, have this relentless focus on the data science experience. But also, it's helpful for users because they can plug into this huge ecosystem of OpenVSX and VS Code extensions. Something that we've gotten for free because of this is the idea of growing and outgrowing your laptop through Remote SSH. And another one of my colleagues tomorrow is going to be giving a talk on Positron and Remote SSH.
Reproducibility
So alongside modularity, we also want reproducibility. And data science folks know all about reproducibility. We think about it and how it is the foundational thing for trustworthy science. There's a strong pull for exploring data in reproducible ways. But there's also this tension because there's a lot of really cool ways to explore data with a GUI, where I'm going to load the CSV, and I'm going to click around. And it's a little bit of a mismatch.
We can think about, when we're doing data science or when we're doing data work, where are we doing steps that are not reproducible? Maybe you're loading your data from a CSV in Pandas. You're doing some exploration. And then you're exporting it to Excel and making a table there. I would encourage you to think about this workflow, the places where, when you tell your colleagues about it, it's, OK, download this, and then click, and then click, and then click. Because those are things that are going to be less reproducible over time. There's an open source package called Great Tables that allows you to move directly from your data exploration with Pandas or Polars right into a polished table, and you can do something similar with plots.
So thinking really about completing your whole cycle in Python or in some sort of language where you can pass this off as a reproducible artifact, rather than saying, OK, here are all of the steps to click all of the right buttons to get your data in the right place.
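The cycle above can be sketched in plain pandas. Great Tables, mentioned earlier, would produce a much richer final table; this minimal version uses only pandas and the standard library, and the data and column names are made up. The point is that load, filter, and publish all live in one rerunnable script instead of a sequence of clicks.

```python
# Keep the whole load -> explore -> publish cycle in code, so a colleague
# can rerun one script instead of following click-by-click instructions.
import io
import pandas as pd

# Stand-in for a real CSV file or URL (hypothetical prize data).
raw_csv = io.StringIO(
    "prize,year,winner\nBooker,2023,A\nBooker,2022,B\nHugo,2023,C\n"
)

df = pd.read_csv(raw_csv)                 # load
booker = df[df["prize"] == "Booker"]      # explore / filter
html_table = booker.to_html(index=False)  # publish as a reproducible artifact

print(html_table[:60])
```

Swapping the last line for a Great Tables call (or a plot) keeps the same property: the artifact is regenerated identically every time the script runs.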
In Positron, there's something called the Data Explorer. And when we first built it, there was a little bit of this trap of, look at how beautiful this is. It can handle millions of rows, millions of columns. You can add filters. Here, I'm looking at different literary prizes. I want to look at Booker Prize winners. I'm able to sort my data in ascending or descending order. It's a great place to do quick data exploration. But if you notice, it's a lot of, if I was telling a colleague, I would say, add this filter, add this filter. So we added a button where you can convert to code. Because all of these filters are built in a very modular way, there's a one-to-one mapping: I can click this button and generate code, not using AI, just walking a tree, code that works and runs and produces the same data exploration every single time. So I think this is a really interesting way to bridge the gap between GUI exploration and having a reproducible output.
Flexibility
And flexibility, when I think of flexibility in tools, I think of a Swiss army knife, perhaps. So it's convenient. With a Swiss army knife, you can have scissors and a bottle opener and a knife or whatever else you need for little tasks like opening boxes. And I'm pro-Swiss army knife, but you're not going to use a Swiss army knife to build a house. So you have to think about what kind of flexibility works well in the projects that you're building.
And if you're not aware, there's something called the Zen of Python. If you type import this, it will print it out. And it has these core values of how to use Python as a language. And one of the ones that's really interesting to me is that simple is better than complex, and complex is better than complicated. And I know that this slide says flexibility, and I really do want to encourage that. But I also want people to think really thoughtfully about the kinds of inputs their functions are ingesting when they're writing them.
Łukasz Langa gave a PyCon keynote in 2022, and I highly recommend it if you ever write any types in your code. He talks about how a function with insane types is probably a code smell telling you that you need to break it down and make your code slightly less flexible. And I did fall into this trap. When I was starting to build tools, I would write a function, and I would type it, and I would see these crazy unions where I'm supporting Polars data frames, and Pandas data frames, and lists, and dictionaries, and tuples, and all of it, when I really should have focused on building modular things for each function, and then combining them in a more clever way. Because if you have something that accepts really complex inputs, it's going to be way harder to also support very complex tasks.
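A sketch of the refactor being described: instead of one function whose signature is a sprawling union, write small helpers with narrow types and a thin entry point that dispatches between them. The function names here are illustrative, not from any real package.

```python
# Narrowly-typed helpers, each handling exactly one input shape.
from typing import Union

def summarize_list(xs: list[float]) -> float:
    """Mean of a list of floats."""
    return sum(xs) / len(xs)

def summarize_dict(d: dict[str, float]) -> float:
    """Mean of a dict's values."""
    return sum(d.values()) / len(d)

def summarize(data: Union[list[float], dict[str, float]]) -> float:
    # The dispatcher stays tiny; the real logic lives in the helpers above.
    if isinstance(data, dict):
        return summarize_dict(data)
    return summarize_list(data)

print(summarize([1.0, 2.0, 3.0]))
print(summarize({"a": 2.0, "b": 4.0}))
```

The union still exists, but only at the boundary; each piece of actual logic sees one simple type, which is much easier to test and extend.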
I think the rise of AI tools recently has highlighted the importance of starting with a reasonable amount of context, and then letting users configure more or less information. Something that we've noticed is that AI often does not have all of the right context for data science work. If it's only doing static analysis, it only knows that there is some sort of data frame called data, and that it's getting this information from this URL. So in Positron, there's this thing called Positron Assistant, and it gives the LLMs a little bit of your data. It gives them images from your plots and whatever else you want, in order to get the most out of AI for your data experience. This kind of has a happy path, or we think of it as a pit of success. So if you just open this up, it will have the context that's important for your model to produce a better output. But also, we care about privacy, so we need a really clear way to add and remove context.
AI and the soft skills of tools
These are what I think of as the hard skills of a tool. And there is one thing that is front of mind that I've touched on very briefly, and that is the fact that 2025 is the era of AI everything. If you're writing code these days and it's out in the world, it will probably be read by AI assistants, perhaps by Amazon Bedrock, or by your local models or whatever. And AI is really good at these hard skills. You can almost use them as keywords to get better output from your LLM: make this function more flexible, make this function more modular. But what AI really struggles with, even with the right context, is what users actually need, or these soft skills.
So great tools require technical excellence, but also user empathy. And I was looking at the Stack Overflow survey. There were 25,000 responses. And these are tasks that people say they do not ever plan to use AI for. And it looks like developers are showing the most resistance on these high-responsibility, systemic tasks, like deployment and monitoring. But also really interesting is project planning. And part of project planning, in my mind, my hypothesis, is that project planning is a lot of the soft skills. Project planning is realizing, my stakeholders need this output. My stakeholders think this is important, or I think this is important. Or that product knowledge that you carry with yourself and with your team, kind of these soft skills that AI is not able to take over for us.
So great tools require technical excellence, but also user empathy.
Knowing your users
So when we think about the soft skills, I really want people to think about the human side of a tool. And the first thing is to know your users. At Posit, a lot of us were data scientists. So we have a lot of data science knowledge, and we really intimately know the struggles we were running into. And I think something interesting here is the difference between a data scientist and a software engineer. Data work inherently looks a little bit different than what we would consider software engineering work. Data practitioners are more about exploring uncharted territories. They do a lot of iteration and a lot of experimentation. And then software engineers, we can think of them as building castles in known lands: things that involve a lot of structure, and things that have a very specific outcome. So you're not going to give your cartographer a mason's tools. So knowing your users is really important. And if you don't know, it's OK to ask. I think that's a very thoughtful way to approach this problem.
For us, we realized that a tool that's built for software engineers might not be the best fit for someone who needs to do a lot of experimentation. And one way we really wanted to show data scientists that we understand their struggles, and something that I use almost daily, is having a permanent console available in Positron. So if you're not familiar with having some sort of IPython console, you can think of it as sort of a lightweight, built-in Jupyter notebook that's always on and available in your IDE.
So in this little GIF, I'm just mashing Command-Enter all the way down. If I was on Windows, it would be Control-Enter. And you can see I can run this line by line. This plot is slowly getting built up as I'm executing different pieces of it. And it's really built for this quick experimentation. I'm also able to go down there and type in this console. And it's fully syntax highlighted. You're able to get code completions there. And we've actually made changes to the LSP, or Language Server Protocol, inside the IDE so that it's optimized for data science use. These are all things that we realized: if data scientists are using a data frame and type either a period or brackets, they probably want a column. So we focused on making sure column completions come to the front. And it's these little things that really show that we know how data scientists work, because a lot of us are or were data scientists.
Discoverability
And discoverability is a lot of this work. So when you're thinking about sharing a report or thinking about building a tool, you want to really be this little goose that's shining your lantern on the path that should be traveled the most. This is really kind of a subset of knowing your user. And all of these soft skills really come from a place of empathy for the people who are using the tools that you're building, or thinking about how you want your tool to be used. How you want your tool to support you as you're using it.
So for discoverability, we ought to think about, what will users care about? And how can we help them find things that are not immediately obvious? Discoverability really is an eternal quest. Your hidden features are like secret passages. There may be treasure at the end of the tunnel, but if nobody knows it's there, nobody's going to find it.
So discoverability is really hard in documentation, in the actual API, in the UI of an app that you might be deploying. Documentation is really helpful, but it's also equally important to have things that are self-documenting. So when you're using a tool, we want to think about, are things easy for me to remember so I'm not looking at the docs every single time? Information can be hard to find and retain, so bringing to the front what actually gets used is super important.
So we want to think about, what are things that are high impact but hard to find, or things that are constantly used? There's a project called Quarto that is an open source publishing and authoring platform. It's actually how I've built all of my slides. It's how I've built my website. You can build dashboards with it. You can integrate it with different applications, like Shiny. And it's a really beautiful tool that we wanted people to have a great authoring experience with in Positron.
So here you can see that it looks like just a Markdown file that you're also able to run code in. Honestly, I could give a whole other talk about how much I love Quarto. It is one of my favorite things to use. But there's a lot of settings, and there's a lot of ways to use Quarto. And when I'm building out something like these slides, a lot of times I'm making changes, and then I'm pressing Save. And I'll make changes, and I'm pressing Save. And when you're rendering these slides locally, it's kind of annoying to bring down my whole little localhost and restart it. So we have this editor action bar where you can see there's this preview, this render on save. You can toggle between source and visual, insert a code cell. We wanted to think about what is important for users. If you're using something like this, you're probably wanting to run code in Markdown. So we're going to have a button to insert a code cell. We also wanted to elevate the setting of render on save. It's not the biggest feature. People aren't coming to Positron saying, wow, I can render on save now. But we elevated it because we think it's important for users' own workflows.
Little things with big impact
Next, we'll talk about letting little things have big impact. So sometimes a tool can kind of feel like death by 1,000 paper cuts. Each tiny drop of improvement is a sort of healing salve. I know when I was learning Pandas, every single time I tried to do a join, I would have to go to the documentation, because it just didn't quite feel ergonomic to me, coming from a more SQL-y or R background. And I was just like, oh, I wish I could remember this a little bit better. And I think this is a really cool way to think about, how can we find these little things that can make a big impact in our projects?
So we have to do the painful things, and we have to do the painful things more often. And it's funny because there's two big things that happen when you do these painful things. One, you either get really good at doing hard things, or you discover a new way to make the hard thing a little bit easier.
So one thing that was small but really annoying was, if you've ever used an IDE and you're running just a regular script, you click this little Play button, and it'll run it in the terminal, or in Positron you can also run it in the console. And 99% of the time, that's super great. All I want is this Play button to run my Python file in my terminal. But sometimes I'm building a Streamlit app, or a Dash app, or a Shiny app, or a FastAPI, or Flask, or whatever. And when you click that button, it just runs the Python file. It doesn't actually start the application. I know some people have changed their IDEs because of this, because it's kind of annoying. But something that we built in that we thought was interesting was that, when you press the button, it actually runs streamlit run on your file in the terminal, rather than just running the Python file itself. So this is a small, small change. But if you think about people who have used this package, or people who are reading your reports, you can think about, what are the things people say, oh, I have to click a couple times for? And maybe it's not even a complaint. It's just an observation. Being able to minimize those small paper cuts really makes a big impact over time.
Explaining your work
Our final way of having a soft skill is to explain your work. So the idea of a rubber duck is very popular in programming. You have this little companion that you can sit there and talk to. And oftentimes, when you're talking through your problems, your solutions just kind of fall out on their own.
So here's the thing that happens when you explain things. Also from our Zen of Python, if the implementation is hard to explain, it's a bad idea. So of course, being able to write documentation and things like that is helpful for the handoff of work when you're collaborating with others. But selfishly, oftentimes, when you're explaining your work, you kind of are able to figure out what is a good idea and what isn't. Having clarity on the code that you're writing is really able to help you supercharge your own ability to comprehend the problems that you're solving and then build whatever you would like later.
This is something that I saw on Bluesky. I don't know if you call it a tweet or not. But I think about this almost every single day: if your writing helps even one person, it's worth doing, especially if that one person is you. And of course, the original context of this is literary people wanting to become authors. But I think of this all the time when I'm looking for a new package to use. I want to see, is there not only API documentation, that reference page, but is there a getting started guide? Are there examples or some sort of a gallery?
If your writing helps even one person, it's worth doing, especially if that one person is you.
I also think about this when I'm writing functions for myself or for others. Things like docstrings and types are not mandatory in Python. You can run wild, make the craziest Wild West of functions you desire. But when you add types and docstrings to your functions, you can realize, oh, this doesn't make too much sense. This argument is just like the other one. Or maybe you're like me, and sometimes you add types and realize your scope is way too broad. It helps yourself, but it also helps your users later to pick up this information and say, oh, I should be passing a Pandas data frame here. So writing down your intent is super important, even if it's just for yourself.
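As a sketch of what writing down intent can look like, here is a small typed function with a NumPy-style docstring. The function and its column names are made up for illustration; the point is that the signature and docstring together tell a future reader (or your future self) exactly what to pass in.

```python
# A typed, documented function: the hints and docstring record intent.
import pandas as pd

def booker_winners(df: pd.DataFrame, year: int) -> pd.DataFrame:
    """Return Booker Prize rows for a given year.

    Parameters
    ----------
    df : pd.DataFrame
        Must contain 'prize' and 'year' columns.
    year : int
        The award year to filter on.
    """
    return df[(df["prize"] == "Booker") & (df["year"] == year)]

sample = pd.DataFrame({"prize": ["Booker", "Hugo"], "year": [2023, 2023]})
print(booker_winners(sample, 2023))
```

Writing the docstring is often where you notice the design problems: if the parameters are hard to describe, the function is probably doing too much.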
And the hard part can be surfacing this information later. One more thing that I want to focus on is the Help pane in Positron. So if you just write something in the console and put a little question mark next to it, it will pull up this pane, and it will render all of your docstrings. And it has, not really syntax highlighting, but rich formatting for your NumPy-, Sphinx-, and Epytext-formatted docstrings. If you're linking to other functions, all of your documentation is interlinked. So it's helpful because you don't have to go all the way over to something like a doc site if you're just looking for something really quickly. You can also do this in Jupyter and IPython with a question mark, or double question marks to see the source as well.
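The `?` syntax described above is specific to IPython-style consoles like Positron's; in plain Python the same docstring is reachable programmatically, which is what those tools render. A tiny sketch (the function is hypothetical):

```python
# The docstring that `tidy?` would render is just the __doc__ attribute.
import textwrap

def tidy(name: str) -> str:
    """Strip and title-case a column name."""
    return name.strip().title()

doc = tidy.__doc__          # what help(tidy) or `tidy?` surfaces
print(textwrap.shorten(doc, 60))

# In an IPython or Jupyter console you could instead type:
#   tidy?    -> formatted docstring
#   tidy??   -> docstring plus source
```

This is the feedback loop in miniature: the documentation you write lives on the object itself, so every tool in the chain can put it in front of you later.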
So it's really helping to get this information closer to users, even if that user is just you, to close the feedback loop between writing documentation and being able to use it again later. And docs are a good place to write things down, especially when it's specific to a certain function.
But oftentimes, I'll have half-baked ideas, or ideas that don't quite fit into types or a docstring. Sometimes these are my half-baked ideas on architecture or something like that. And this is where I have really enjoyed using something like a CLAUDE.md file, for things that aren't fleshed out enough to go to a doc site. It's not going to be sent to my colleagues or anything like that. And it turns out robots are really good at completing your half-baked ideas.
I think it's a great maintenance task for something that maybe it's a project you only do part of the time, and you're going to put it down and pick it back up again in six months. It's a great place to write things like your architecture or the purpose of a certain function or if there's any gotchas for your project, and maybe even a happy path. We should always want to use this data frame, pull from this database, something like that. This is a great place to do super lightweight, very low stakes documentation. It's nice, because it doesn't have to be as polished. But if we think of the open source virtue of release early and release often, I want us to think about share early and share often. Bring these thoughts out of your brain as early as possible so you can add clarity to your work later on.
Closing thoughts
So we want to think about tools really holistically, these hard skills, these soft skills. We can think about if we have this compass, that it always points towards beautiful tools for ourselves and for others, and always making sure that we're choosing the right tools for us to live our best lives as data practitioners. We want to consider the input and the output of our work and take on as much complexity as we can as a builder to make a user's life easier.
There's a lot of feedback loops, especially when you're building projects, where people are using the tools you're building, noticing pain points and sharing them with others. And this is how our ecosystem improves. We're all both consumers and contributors. And everyone's unique perspective is really important as they bring in a different way that they're traversing through this space.
So building tools, it's not about this final destination of the perfect package, the perfect report. It's this journey that really never ends. Using tools reveals what's needed and what to build next. Building reveals what's possible, what we can build for others. And sharing really just starts this cycle anew. So every great tool starts as someone's story of triumph or frustration. So all of your blog posts and your conference talks, your GitHub issues that you're opening, your LinkedIn posts that you're sharing, sharing what works and what doesn't helps everyone improve. So I would really encourage, give feedback because it matters. Your tools make other people's work better. And your feedback makes our tools better.
So what makes tools stick? There's the hard skills of modularity, reproducibility, flexibility. There's also the soft skills of knowing your users and emphasizing discoverability, building little things for big impact, and explaining your work. This is all driven by a mission for people to have the most beautiful experience with the open source tools that they're using and for people to feel empowered that they can build tools small, medium, or large. So thank you all.
Q&A
There's a Posit booth. I'll be there. You can learn more about Positron at this site. That's also my website. And I'll be taking questions now. Thank you.
I'll start with the front. Yes, a question about the plotting pane. I see that you had a matplotlib plot up there. Do you support other plotting packages too, like seaborn, plotly, or anything like that? Yep, the question was about Positron's Plots pane and what sort of packages we support. So we support matplotlib. Talking about modularity, most Python plotting packages are built on matplotlib, so we actually get a lot of support just from supporting matplotlib. But other packages that we know work well are plotly, seaborn, and plotnine. There's a whole bunch of others, but I think those are the main ones that I can think of right now. So yes, many plots are supported. If you ever find one that isn't and you would like it to be, open a GitHub issue and we can support that for you.
In the back. You said that building for data scientists can be different than building for software engineers. There was a talk a couple of years back called "Why Software Engineers and Data Scientists Want to Murder Each Other." But when you're going from a tool user to a tool builder, you are shifting your own mindset from a data scientist's "why," or "what," to an engineer's "how." What's that shift like, and how are you able to make that work?
So the question is, when you go from data scientist to tool builder, these are, as I talked about, two different skill sets. And how do you make them happy together, how do you live with the two wolves inside you, I suppose. A lot of what I think about is how the empathy for the data scientist leaks into you as a software engineer. They are very different skill sets, with different ideas in some ways. So I think knowing what the outcome is is important. Even with the Plots pane, my data science brain says we should be able to iterate through all of these lines. But the software engineer part of my brain says, OK, when we're using plots in a notebook, oftentimes it's all one chunk. So how do we make sure that both of these work? So I think it's acknowledging that these are two different skills. For software engineering, something I might be doing is writing more end-to-end tests, or unit tests, than I might as a data scientist. But having that empathy of knowing the problems to solve is super important. So thank you for that question.
I'll go here, and then I'll go over to you. So my question is, one of the things you do is build a lot of features for the IDE, or you're in the process of building small little tools into it. And what I wonder about is, what kind of signals or feedback do you use to figure out whether a feature that you've just added was actually a good idea or not? Because, looking at my own experience, when you build a new feature, you probably think it's a good idea, which is why you release it. But that doesn't mean that your users actually agree with your enthusiasm. So how do you go about figuring that out?
Yeah, the question is, you get a lot of feedback from people, and we build a lot of features. So how do we decide, one, whether a feature is needed, and two, whether people actually want it, which are subtly different things? For one thing, we're lucky enough to have an active online community of people who are willing to tell us, "I want you to support x, y, and z." So that is one way. Another is intuition: because we know our users, I can think, "I was really struggling with deploying Jupyter Notebooks," or "I was really struggling with transferring from this data science library to that one." So being able to infuse our own knowledge of the users is also a good place to start. I think asking questions is super reasonable, saying, "Hey, I'm thinking about this. Is this something that would work? Is this something that makes sense for your stakeholders?" It's about closing this feedback loop and not being afraid to just ask, early on, "Can we have feedback on this?", even if it's a prototype or something like that.

