Resources

The Psychology of Technologists (Cat Hicks, Catharsis Consulting) | posit::conf(2025)

The Psychology of Technologists

Speaker(s): Cat Hicks

Abstract: Technology teams are struggling to be heard. Technologists are grappling with high turnover, high burnout, and low resilience in the face of rapidly-accelerating technological change. Even though effective technical teams are an essential part of our world continuing to work, many organizations struggle to understand their technical teams, stuck in old stereotypes that treat technical people like isolated machines. To build a thriving future, we need a new way of working together and understanding our own minds as we build technology. In my empirical research across thousands of developers, technical managers and their teams, we've explored what a psychologically-rich environment looks like for modern technologists and found important signals that can guide teams toward greater resilience, innovation, and thriving. In this keynote we'll unlock access to the psychological sciences and evidence that you can use to make your every day more human.

Links:
Dr. Cat Hicks - https://www.drcathicks.com/
Catharsis Consulting - https://www.catharsisinsight.com/
Center for Open Science (COS) - https://www.cos.io/
Fight for the Human Newsletter - https://www.fightforthehuman.com/

Subscribe to posit::conf updates: https://posit.co/about/subscription-management/


Transcript

This transcript was generated automatically and may contain errors.

Hello, folks. How wonderful to be here with you today. All right, so this is what happens if you let a psychologist into your tech conference.

I want you to take every little piece of anxiety. I mean, really, like, move your body. Like, take it, bundle it up, right? Take the imposter syndrome. Take all of the little thoughts that you've had, like, I need to be this, or maybe I'm not good enough at that, or I'm not technical, or whatever it is. Take that ball. I want you to pull it out of yourself.

All right, okay, there's a beautiful little bag. It's right at your feet. Take that ball. Just gently put it in the bag, all right? It's there if you need it, okay? Those worries, if you want to walk out of the room, they're all there, but for this hour, we don't need it, okay?

We are social learners

Now I want us to do just another little exercise. It's been a couple days of being in our heads, so let's use our bodies more, a little embodied cognition. Everybody who was born between January and June, I want you to raise your hand.

So you're thinking, Cat, what on earth was that? We are social learners. And this might feel like a really silly demonstration, but it is actually something that you are incredible at. Human beings are incredible imitators. We can learn from other people readily. We do it pretty much immediately in our development. And the other thing, the other side of the social learning coin, is we are really good at transmitting that knowledge.

So I like to start with this kind of example, because I know in technology we like to think of ourselves as inventors. But from the perspective of cognitive science and psychology, it might be more accurate to say we are really good imitators. But we are flexibly creative imitators.

So, you know, if you've ever seen a toddler with an iPad, you know that human beings are incredible at mimicking the behaviors of other people and using those behaviors to interact with novel objects. Our minds can do that because we are incredibly skilled at watching other people and learning from them. In fact, social scientists believe this capacity for imitative learning, this social learning, propels human culture. This sort of learning and teaching cycle transmits innovations, and we build on them, and that is kind of what creates what we call cumulative culture.

And you can watch this unfold in technology. So a great example that I'd love to share comes from a project that used a data set from MathWorks. They hosted 14 years of programming competitions. Something really, really neat about the availability of data like this that we have now is that you can start to watch people's solutions get shared in real time, copied by others, and modified. This is something that there was a lot of speculation about decades ago when we were talking about how people write code, but we can actually see it in data sets like this.

And indeed, in analyzing this data, these researchers discovered it really did look like the cumulative culture patterns that we might imagine from social learning. So across, I think it was, more than 21 million lines of code, 14 years, and 19 different contests, they found these discernible patterns. And these might feel really accurate to your life, your career.

Successful code gets copied immediately, right? We can recognize other people's success, and we want to put it into practice. Then, solutions that are successful actually get improved in this sort of group-level process. There's a cool word for this. We like to call it cultural ratcheting. Picture a ratchet that tightens something. So people can adapt those solutions to a specific context. They get refined over time, and it's sort of a group-level phenomenon. Sometimes every individual doesn't actually realize that's what they're doing, but it's happening anyway, which is so beautiful and cool.

We see tweaks. Some people are doing little tweaks on the solutions, and we see leaps forward. Now, I think a lot of our discourse about technology focuses on the leaps, but it's important to notice both things are valuable, both things are needed. And also, most of the leaps fail. So most times somebody tries to approach a problem in a completely novel way, it doesn't work. But when it does work, it works really big. It means big payoffs. That last one is often where we center in our thinking about innovation. We think, God, I want that breakthrough. But again, I want to bring to your attention all of this stuff is part of the same process.

Social learning in real technical teams

I was really, really interested in how cumulative cultures show up in technology work. And with all of this knowledge from psychology in my back pocket, when I started working with software teams, I wasn't surprised at all to see that these social learning patterns go way beyond programming contests or in-the-lab studies. So I teamed up with an expert practitioner, my friend Ana Hevesi. Ana was an original community manager for Stack Overflow, a little website you may have heard of.

And one of my favorite things to do as a scientist is to take everything I know from the lab and try to pair up with the people who have really lived it. And together we kind of test it all, right? We say, am I right over here? Have I abstracted away the realness? Or does this validate what you really experience? So we wrote a paper on this, which you can send to your boss right now. And it's essentially our joint mission statement that says, you know, if you look at the real case studies, the real community phenomena across technology, you also see the critical role that social learning plays.

You see that breakthroughs which don't include social learning fail, or run into much more friction. We look at things like the experience people have with on-ramps, or not having on-ramps, on Stack Overflow. And we look at technical shifts that push folks forward. Essentially, I wanted to shine a light on this underappreciated but, I think, essential building block for technology, which is the fact that our problem solving is really communal.

A very communal thing happened to me after I posted this paper. It's like a little preprint, and it's really nerdy. But at this point, nearly 7,000 folks have viewed it and sent it around to their workplaces. And I started getting these emails from engineers, including some of the original engineers at Stack Overflow, which was really cool for me, who said, thank you. This does match my lived experience.


Disturbing signals about technologist well-being

So if all of this is happening, why do we also see really disturbing signals about how technologists feel in their day-to-day work lives? If we know the importance of social learning, are they feeling like that importance is being seen? What you're looking at here is data I collected in a research study. These are 465 engineering managers. And I ran a scale that I really like, which I developed, called the Visibility and Value Scale.

So we ask folks: do you think that technical effort, not outcomes, right, which is really important, technical effort, is seen in your organization? Is it valued? And as you can see, the majority of managers say yes, 88%. In this study it was 88%, and it's between 88% and 92% every time I ask this. Some managers wrote in during this study to describe just how much this was an important part of their job to them. But when you ask the people who report to those managers, only about 24% of their developers agree.

So in these technical organizations, and I'm not the only person who has measured this, we find deep misalignment between leadership and the folks on the ground doing the work over whether that work feels understood. We see other signals too. Leading researchers have sounded an alarm about the experience of technologists.

Greg Walton, a leading psychologist studying achievement and interventions, wrote in a paper that folks who feel different in engineering, who feel like they don't match the norm, experience a chilly climate in which they feel unwelcome. Daniel Russo, a computer science professor and software researcher who studies resilience in technical teams, has written: in the midst of a rapidly changing global landscape, the profound impact of a developer's well-being on their productivity has become glaringly evident. Margaret-Anne Storey, also an influential software researcher at the University of Victoria, writes: in software engineering, instances of disruptive technologies are frequent.

We have this fragility that's happening to the human side of technology work. At the same time, the complexity of that work is increasing. There are these disruption events happening. This is a tough recipe. And I hear about this tough recipe and its impacts all the time.

So these are quotes from real developers in research interviews I conducted. One engineer told me, so much of my work just never sees the light of day. Someone said, we're building castles in our minds, and sometimes you wake up to an email that just demolishes something you were building for years. And another person put it really bluntly, I'm not convinced our leadership thinks we're people.

Yeah, that one always gets a sad laugh. I'm familiar with this sad laugh. I find this really, really poignant. And it really provoked a dilemma for me as a psychologist. Something like this question kept coming to my heart. How is it that we all can be producing this work that our world depends on, but it feels so hard for it to get seen? It feels so hard to be seen as humans.

I want you to also think about this conference right now. And how many times have you gone to a community like this and then gone back to your workplace with big plans, big inspirations, and sort of hit a brick wall? It's really hard to have a good culture as a technology person, right? As a data scientist, as a developer. And this is what I want to center the rest of our time around. Why is this so hard and what can we do about it? In particular, what's the psychology behind some of the reasons we end up with maladaptive strategies?

Learning debt and psychological affordances

I started working with technology teams by doing a learning science project. And this was such a fun and ridiculous thing that I did. I decided I was going to do a research study completely for free on my own, self-funded, when I was living in San Francisco. And I wanted it to be about developers learning. See, I had come into tech with a learning science background. And I was really curious what learning looked like on these technical teams.

So I started with a qualitative study where I interviewed 25 developers about their problem solving and their learning. I had them sit down with me, with me sitting across from them so I couldn't see their code, and actually work in front of me and do this thing that we call talking out loud, which is a really fun exercise where you get people to describe the decisions they're making as they're working. I focused on moments when people were solving bugs in their work and when they were ramping up into a new code base, which is a really interesting psychological moment for people.

But the big takeaway that I came out of this study with was that developers were feeling a deep tension when it came to their learning needs. I named this tension learning debt, like technical debt. Learning debt is a thing that accrues, a damaging cycle for tech teams. And it functions something like this.

Developers, code writers, knowledge work people, go into their work and they know that they need to do active learning. But if they are in an environment that discourages sharing that learning, or having that be an open part of your job, then they're getting these messages all throughout that there's no support for you. They go into other processes. I pulled up code review here as an example, but all the kinds of moments where they interact with their teams came up in these interviews. And in those code reviews, sometimes a very important thing would happen, which is that they would get a really snide, negative message about learning.

So for example, maybe a senior engineer would say, you know, I'm not interested in the things you tried that didn't work, or, don't bother me. A manager would say, I can't adjust the timeline for you; learn on the evenings and the weekends. Those kinds of messages essentially sent people back to their environment with the idea that it is not OK to show that I need to learn here. I still need to learn, but I cannot let on to other people that that's happening. That is learning debt, right? That tension that's happening.

This is just one way that I see bad psychology creeping in and starting to chip away at our well-being. And this disturbed me a lot as a learning scientist. So I decided to go deeper, and the place where I went is an idea that I really love called psychological affordances. I started to suspect that to really understand and fix this type of pattern on teams (and I had started to consult with engineering teams as well, so I was putting this into practice myself), we need to learn about the psychological affordances that our environments are giving technical people.

So what's a psychological affordance? Well, this is the picture of me kayaking off the coast of Kauai. And I never could have had this experience without the affordances that a kayak gave me. I can't swim that well or that far. But with the extension of the kayak, with the rowing, with everything that we had access to, I had these design affordances, right? They expanded my capacities. Social scientists also think about affordances. We think about the messages that are around people and what people get that defines what they think is possible for them.

In our environments, you can think of this working almost like a little scientific theory that you carry around with you. You might ask yourself, what does it mean to be a productive technologist here? Productive, successful developer in this place. And then we go out into our environments and we experiment. We conduct small, little experiments. We take the results of those as the answer to our questions. We do this all the time without even necessarily thinking about it. And then we calibrate our behaviors. This is how cultures get formed. And it's also how we set ourselves off onto certain adaptive or less adaptive paths.

Wanting to understand this cycle and how I could help people with it is one of the really big reasons that I became a psychologist. And even though this cycle presents us with some challenges, it also has a lot of hope in it. Because if we didn't have this kind of ongoing experimental cycle all the time, we could never change a culture that we're in. We are all as individuals creating the cultures that we're in all the time. So if we can intervene on parts of this cycle, that is our path to change.

The brains in jars model

But what answer to that question are we getting from the stereotypes that our cultures, societies, and workplaces hold about technical work? I really had to figure this out before I could figure out how to help these tech teams. And I have proposed an answer for you. I think that we are getting really bad psychology answers about technology work. One of the things that we know in social science is that when we as people don't have really clear, shared beliefs about how things work, we tend to default to deeply held stereotypes. And I think that I see these experiences and measure these in my research all the time. I have a name for it so that you can remember it. I call this the brains in jars model of software development.

My empirical research primarily focuses on beliefs about software development. But I would invite you to consider how this might have leaked into your fields, right? Because these are kind of pervasive stereotypes. So what is the brains in jars model? Picture those little jars in a sci-fi scene, right, where there's a detached pink brain and it's floating in jelly. And haven't you ever felt like that is how my boss thinks about me, right? So in this model, technical people are isolated individuals. They are just brains. So we only care about their cognition. We don't like feelings. We don't like mushy stuff. Psychological safety maybe we think is a cute buzzword that's like a nice to have, but it doesn't really have anything to do with technical problem solving. In fact, it actually might be opposed to technical problem solving in a lot of people's minds.

Brains in jars models do not have to define like an entire organization. I think of it more as like a set of stereotypes that can get activated for people. Now, there are three thinking traps that I look for as a psychologist when I work with engineering orgs. And when I see these traps, I think I've got a little bit of a brains in jars vibe going on here. OK, so I'll give you these three thinking traps. Watch for these.

The first one is you will see your organization fall into what I call brittle productivity. This is when we start to believe that the only way to produce the outcomes we want is the grind, the cramming all night. Nobody can actually do that. I'm sorry to break it to you. So what starts to happen is that people posture. They pretend that they can do that. You see it start to become about the performance of productivity. We pursue short term cycles of extremely hard effort. We disguise the breakdown. And ultimately, this is a really dangerous place for organizations because it looks kind of good. For kind of a long time, actually, because technical people are capable of doing this posturing in an overachieving sort of way.

But sometimes I tell leaders when they're consulting with me, I would prefer that we just saw your org look dysfunctional, because I don't like it when you are all hitting your targets and people are burning out. That's really dangerous, and it's difficult for you to detect. I had an engineer friend one time who announced to me: I know that I am a technical mind. I know that I was meant for this work, because every night at 3 a.m. I jolt awake and I'm thinking about solving technical problems. This is clearly the only thing my brain wants to do. And I said, I'm sorry, babe. That's anxious rumination and it's not good.

The second thinking trap is called the chilly climate trap. So have you ever been in a tech place where it just feels kind of like you have to be cold? You're sort of supposed to hold back emotions all the time. You know, our bosses and colleagues maybe have this stereotype that, you know, the more efficient you are, the less you are emoting, the more technical you are. And it's kind of like a pervasive culture. It's hard to name because it's a really big thing. But people have called it a chilly climate.

My friend Titus Winters, who helps drive DevEx at Adobe and wrote the book Software Engineering at Google, recently gave a great keynote at a C++ meeting, highly recommended, and the whole keynote was about fear. Titus has led some of the greatest technical teams in the world and proposes that fear is one of the biggest challenges we are facing in tech right now, and that it's something critical for us to understand: fear about using technical systems, inherent fear of doing cutting-edge work, of the risk involved. I think this is kind of a shadow behind a lot of us.

The third thinking trap that marks brains in jars organizations is probably my favorite one to beat up online. This is the lone genius trap. In this thinking trap, we say tech excellence is the exclusive province of a gifted few. A great example of this might be the 10X engineer or the 10X developer, if you've ever heard this kind of idea, that there are some individuals in technology who just dramatically outpace others. I've heard people talk about 10X data scientists as well. And the thinking goes like this: if those outliers are the people who really outproduce, then those are the ones who drive our real outcomes. Those are the only ones we should really care about and protect. Anything they do is fine because we're getting that 10X output from them.

The LABS model: developer thriving

Now I want to tell you what psychology thinks: a different perspective on you and your work. When I founded the Dev Success Lab and gathered together a scrappy, interdisciplinary, very small little lab to specifically study technical teams, we decided to do a big lit review on what we should focus on. And this is what we came up with. These four factors, which I have spent a lot of my time working on, stand in great contrast to the brains in jars model.

We have learning culture, which is the experience of learning being celebrated, shared in your environment, seen as a good, productive, normal part of your life and of being a competent person. We have sense of agency. Agency is this fundamental belief we have that when we do something, a result will happen. Imagine times that you have had something come in the middle of that connection. It's very disempowering, very frightening for people. We have self-advocacy and motivation. We think of self-advocacy in psychology as kind of like this psychological engine, this powerhouse that keeps you going over friction.

So self-advocacy is this positive belief that even if an unexpected thing comes to me and I don't know how to solve it and I'm feeling the intensity of that, you can say to yourself, I know I can do this. I believe in myself. You have this kind of core. Sense of belonging, finally, is one of my favorite, favorite psychological measures. We can capture this big construct around, do I truly believe I belong here? In that cycle of question asking, I ask: do I belong in this? Does someone like me belong in this environment? If the answer more often than not comes back as yes, then you have a high sense of belonging.

So we recruited 1,282 technologists into an open science project. And we bundled these factors into something that we call the LABS model. My team told me I had to make it an acronym so that somebody in tech would remember it. So you can remember that. And we measured developers' self-reported productivity and found a persistent, statistically significant association between these four factors and productivity. This is just a simple visualization of that. You can check out our paper for all of the models.

And we are not the only people who have found this, that those teams in those better environments win out on productivity. Lots of software research that we cite in our report also underscores this connection. We find that this works across a lot of diverse contexts. So something I'm really proud of in this project is that we did not just stick with your hundred friends at Microsoft. No offense to them. But we recruited across the world. We recruited across more than 12 industries. We looked at this connection across many, many demographics. And we found that when you have what we call developer thriving, these four factors, it's a pretty good signal for being in a place where you can be a more productive person.

The performance paradox

So we've observed this connection in the world. I wanted to learn a lot more about what can start to help us when we are stuck in those brains in jars cycles. And one inspiration I took from this was a conversation that I had, a really deep, ongoing conversation with a friend of mine. This is not his real name, but he volunteered his story for this. So we'll call him Sean.

My friend Sean was a really high achiever his whole life. He discovered coding in high school. Loved it. Excelled through college. Felt like he was living the dream. He also said he was living his parents' dream. Got a very high-paying developer job at a top tech company. And he was just flying. He loved coding. He loved the puzzles of programming. He hit all of his goals. Promotion, promotion. He got his staff promotion, which he was really excited about. And then one day he woke up and it was like a cloud had come over him, like the motivation that he used to feel was just gone. It wasn't there anymore.

Sean started to realize there was this big tension he had felt his whole life between having goals of performing, like demonstrating his competence and how excellent he was in front of people, and goals of mastery. This maps onto two kind of dueling concepts that psychologists who study people's beliefs in school and achievement have found. We can divide people into these two camps a lot of the time. Sean had really fallen into the performance camp his whole life, and it served him well. He wanted to get the promotion. He wanted to get the A's. He wanted his parents to be proud of him. All of these are very worthy survival goals.

However, the thing that motivated Sean, the thing that motivates a lot of us, tends to fall in the mastery camp. Mastery is when we are seeking learning for its own sake. So, if you've ever heard the phrase intrinsic motivation, what is intrinsic about it is that doing the thing itself becomes the goal. You're rewarded for it immediately. The journey is the destination. All of those beautiful Zen things that people say actually have a deep grain of psychological truth to them, because that is a much more sustainable source for our motivation.

Now something interesting happens if you get unbalanced between these two things, and I think this is what was happening for my friend Sean. When he became a staff engineer, he started to get so locked in on the performance goals. In his mind, it wasn't even really a conscious thought, but he felt: I can never mess up in front of my team again. This is what we call in psychology an avoidance strategy. It feels great. It does not work.

So you say, OK, I'm going to achieve my performance goal by never putting myself in a position where I could make a mistake. So you stop going to the optional team meetings. You stop mentoring people. You actually stop doing the things that grow your skills. You don't get those small failures on the way to the big leaps that we talked about. You are boxing yourself in, and you're going to become a lower performer at the end of the day. This is kind of a big paradox. I actually described it to my friend Sean as what I call the performance paradox.

Because we know a lot about why this happens. We can explore it from many angles. So, for instance, in our cognition, we are pretty bad evaluators of our in-the-moment performance. We tend to think that performance is the same thing as learning. But famously, it is really not. Actually, you want to see people make a lot of mistakes, kind of get worse on the path to getting better. That is normal, sustainable learning.

So the more you see people fixate on demonstrating explicit performance, the more you see that they get really fragile under pressure. That's kind of like the high-achieving kid who can't tolerate getting a low grade; the first time they get one, they sort of lose all their strategies. You don't want to be like that person. And this is damaging on the scale of our communities. Psychologists have measured the beliefs that an entire field or a community might have, and they find that the more a professional field or discipline focuses on this kind of performance, focuses on brilliance, which we'll dive more into in a second, the less that sets the field up for success. The field gets worse at recognizing people who actually have high potential, and it gets more fragile.

So, the performance paradox. I want to give you just a couple of little handles. I've seen some sad faces, all right, and I want to tell you this is OK. This is a normal part of being a hardworking person. The key is to think about your mindset and think about your context. And you can, like my friend Sean, take action on this.

So we had a couple of coffee chats where we sat down, and by the end of these chats, Sean had challenged himself to start doing mentoring with his team again, to show some vulnerability in front of them. And he went from being a person who really was hiding out, not having any connection at work, to being essentially the learning leader on his team. And he was way happier.

The way I think about breaking down these dilemmas is a handy metaphor that I really like from Walton and Yeager called the seed in the soil metaphor. This helps us think about what psychologists call the mindset of the individual, but also the context of the environment. So you can think about the soil as the adaptive social context that we need, the context that affords us the opportunity to put our good beliefs into practice, that hypothesis testing we were talking about. But we also need good seeds. You as an individual might be working on the seed part. These are our adaptive psychological beliefs that create more resilient behaviors and goals. So focusing more on mastery might be this good seed.

But then you also need really beautiful, strong soil. If you have a boss who hates the fact that you're trying to focus on a mastery goal, that is going to really foreclose on how much you can exercise that belief. Leaders in the room, your job is soil. Individual contributors, you're future leaders, so think about the soil, but your job is the seed. That's what I would say about that.

If you need something stronger to hit your organization over the head with, I have created a law. People love to create really silly laws in software, so I decided I needed one. This is Cat's law: the more a software team focuses on demonstrating short-term performance in their environment, the less they are able to protect the foundation on which long-term performance depends.


AI skill threat and the lone genius stereotype

So we have learned a lot now about how, in general, really good psychology leads to better productivity. We have evidence against that brains in jars stereotype about you as technical people. But I want to talk about situations that turn up the dial of pressure. Because a lot of times I get this kind of feedback when I'm talking to engineering leaders especially: OK, Cat, all right, this sounds great. We will work on it when nothing is happening and everything is great. Which is never, right? So folks say this sounds nice, but my team's under pressure. We have to deliver. We have to go to burnout strategies.

All right. I'm an empiricist. I said, let's test it. We know something about stress in psychology. I think of these as threat experiences. When you are experiencing a moment of threat as a person, questions might be coming up for you like, am I going to lose something? Am I going to be evaluated unfairly? Other kinds of questions that tap into those core psychological affordances like agency. Am I going to lose control? Will I have a choice in those moments? Threat puts us in like a different state because it really is a moment where our priorities shift. We're trying to protect ourselves. We're trying to survive.

And I know it's easy to think about this if you're about to be hit by a car. But if you deeply value technical work, you can also be in a state of threat about it, if you think someone's going to take something away from you. Also, throughout our lives, we are always adapting our beliefs. We're always doing what we call contextual sense-making. So even though you maybe learned to be the person in tech, and you have the title, it's actually not as fixed as our organizations make it out to be, right? You're always asking, well, what does it mean now? What does it mean this quarter? But perhaps a moment in tech is coming up for you in terms of threat, in terms of sense-making. And that moment is probably AI.

My research lab decided we had to study AI. I really was a bit reluctant to do it because it felt like such a loaded experience. The first thing I thought of when I started listening to how technical people were talking about AI was this experience of threat. So we decided to tackle this in a project we named the new developer. And what we wanted to do was focus on the human. In particular, what human environments help technical people thrive when they are grappling with the changes that might be introduced by AI.

So we needed a couple tools in our arsenal to do this. Remember, I told you we were like a scrappy multidisciplinary team inventing new methods that did not really exist for these kinds of questions. So we did a few things. We knew we needed to recruit real teams. There's a lot of sandboxing going on around AI. A lot of kind of let's bring my five friends into the lab. We wanted to get into the real world. We wanted to watch teams that were doing real AI adoption. Not the hype, but actually pulling those tools in.

We knew to do this. We needed to recruit inclusively. We needed to analyze intersectionally. This was a really big deal to me. I don't know if you consume these large surveys like the Stack Overflow survey or initiatives like that. As a research leader and a social scientist, I had always noticed that their representation of women hovered around 3 or 4 percent, which does not reflect the demographics of technical organizations. We attempted to reflect those demographics. We achieved over 30 percent women in this study. And that took a lot of me posting in forums.

We also needed to develop new measures. We have released those publicly. So you can take this also to your boss and say let's measure these things. And we pre-registered all of these questions. You can explore that. You can see that some of my hypotheses in this study, in fact, were not met, which is also really interesting.

But the ones that were: we hypothesized that we would find what I've named AI skill threat. This is a pervasive feeling of stress, anxiety, and fear, being in that threat state, when we ask people to imagine generative AI changing their job, changing the future of technical work. As you can see, about half of the people, 43 to 45 percent depending on which questions you're bundling here, show up with this AI skill threat. So it's really pretty pervasive.

So we wanted to ask, what is it that seems to raise or lower the temperature on AI skill threat? We had two very strong educated hunches. We like to glorify these as scientists by calling them hypotheses. But our research-backed hunches were, one of the things that was going to raise the temperature on AI skill threat, really push it into the forefront for people, is that lone genius stereotype.

So there's a couple ways that social scientists have looked at this type of belief, and we focused on two of them. One is something we call brilliance beliefs. Do you believe that ultimately the most successful software development is driven by an innate property of brilliance? You need to be brilliant to be successful here. Now, when folks tend to believe this, they tend to do something else. Something that we call a technical contest.

Have you ever been in a conversation where someone's, I don't know... I try not to do this in my life, so I don't actually know how to imitate it. But has someone ever been like, oh, you don't know that keyboard shortcut? You have the wrong stickers on your laptop. So we naturally kind of try to assess people, try to understand people's competence. But when you are in a culture that really explicitly pits people against each other, people see it as their job to decide if you're good enough, if you're technical enough, which is the same thing as "good" in a lot of these teams. We call that a technical contest culture, and it is terrible to experience.

So these are linked because a lot of times people think only a few people can be brilliant. I better make sure I'm one of them, so I better win all these contests. You kind of end up like in this punching match all the time with everybody around you every moment. You're never safe.

What is the contrast to this? We tried again to tap into those drivers of cumulative culture. So we measured learning culture in particular. We measured belonging in particular. We saw these beliefs strongly negatively correlate with the brilliance beliefs and the technical contest culture, so we thought this was a pretty good target: I bet these things are going to put people in a different mindset about themselves and about technical work. And indeed, that is exactly what we found.

So across more than 5,000 developers and their managers, across, again, an inclusive sample of more than 12 industries, we find that if you are in an environment that is much more like this lone genius culture, you are twice as likely to have AI skill threat. Now, it doesn't vanish completely in cumulative cultures, because it's a big world and lots of things are affecting our minds, right? But it significantly decreases. Imagine a drug that cuts your rates in half like this. We find this to be a really exciting signal.

And we also find for those bosses who don't care about feelings, these teams are more productive. They say that I'm more productive as an individual. They also say my team as a whole is more effective, which I highly recommend, by the way, if you are at an org that runs, like, DevEx or productivity studies, include this measure. Team effectiveness in our studies is not the same as individual productivity, and it's really fun to ask about both things.

Debunking the 10x developer

Our lab decided we were going to get even more ambitious. We wanted to ask, what can we learn on the scale of software organizations? Now, at this point in our research, I felt really confident that all those lone genius explanations were not an accurate way to describe how software development worked. But I wanted to put it to the test. Now, I'm excited to share this with you all because I know how much you appreciate data. And we were very excited to put this to an extremely datafied test.

So imagine that 10x developer idea, right, that comes sort of out of that brains-in-jars model. If we thought that there are a few people who are just such dramatic outliers in production, what would we expect to find? Well, folks in software have made this argument a lot. They've said there are 10x developers, they are faster than everybody else. That's the primary way that it's been measured.

My team had access to something that we call cycle time. So if you're at an organization that puts things into tickets, you might know this kind of beast. What it essentially is, is that technical folks are supposed to mark the start and the end of a cycle of work. And we had access to a tremendous amount of software metrics data, which you can go read about in all its complexity. We know that it represents tons and tons of different kinds of work, right? But we had access to this incredible sample, and we decided to ask ourselves: across all of this cycle time, can we detect these outliers? Almost like an adversarial study of our own belief that they wouldn't be there.

So we looked at over 55,000 cycle time data points, over 11,000 individuals, 216 organizations. Our lead author was John Flournoy, the methodologist and statistician in my lab, who did an incredible job on this. And there's also a great GitHub repo for this paper, so you can get all the code.

What did we find? Were those 10x developers present? They were not. At the scale of thousands of developers doing thousands of software tasks, we did not find this proof. What you're looking at here, because it's so much data that I can't visualize all of it, is a visualization that takes some randomly sampled individuals in our data set. And I think the first thing that probably pops out is noise, right? Or rather variation: probably not noise, just meaningful variation that doesn't correspond to reliable, consistent individual differences.
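To get an intuition for what "no reliable outliers" looks like, here is a minimal sketch with entirely simulated data. This is not the lab's dataset or code, and every number is made up: it just shows that when task-to-task variation dominates, the developers who look fastest in one slice of the data are mostly not the ones who look fastest in the next slice.

```python
import random
import statistics

random.seed(1)
N_DEVS, N_TASKS = 200, 40

# Hypothetical simulation: log cycle times with no true 10x
# individuals, only heavy task-to-task variation.
times = [[random.gauss(0, 1.0) for _ in range(N_TASKS)]
         for _ in range(N_DEVS)]

# Rank developers by average speed in the first half of their tasks.
half = N_TASKS // 2
first = sorted(range(N_DEVS),
               key=lambda i: statistics.mean(times[i][:half]))
fastest = set(first[:N_DEVS // 10])  # "top 10%" in the first half

# How many of them are still top 10% in the second half?
second = sorted(range(N_DEVS),
               key=lambda i: statistics.mean(times[i][half:]))
still_fast = fastest & set(second[:N_DEVS // 10])
print(f"{len(still_fast)} of {len(fastest)} stayed in the top 10%")
```

Under pure noise, only a small fraction of the "fastest" group keeps its rank, which is the signature of variation rather than stable individual differences.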

We did find some interesting effects in this study. They mostly probably match what you already know. So the effect shown here, for instance, is if you are getting a little bit more time to focus on coding in the week, your cycle times get shorter. That is represented in color here. So a little bit of clustering of color above and below the line. But you can also probably see that's even not a great predictor for all of this variation, right?

We propose in our study that the place leaders need to look is not individual explanations but organizational explanations, right? We need to get clearer about types of work and stop relying on very simplistic benchmarks. I've seen a lot of orgs that I've worked with do this: you take a really oversimplified average and say, my team needs to hit this benchmark average of cycle time, and if you're below it, you're bad. That is essentially bullshit. In fact, for even a single individual, their own past average cycle time is not a good predictor of their future average cycle time. This work is really hard, you all. It is really variable.

Averages also don't compare well between organizations. Even organizations vary heavily from each other. This is a map of distributions of cycle time across our more than 200 organizations. So if you are a leader listening to this and people are telling you that you need to compare to other organizations, just know that comparison might be fairly meaningless, right? Your average might be very different from someone else's, and if you don't know what drives the variation that we see, where is that comparison going to get you?
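One reason cross-org averages mislead is work mix. A toy illustration, with invented numbers and nothing from the actual study: two orgs whose teams handle each task type identically, but who take on different proportions of small and large tasks, end up with very different average cycle times.

```python
import random
import statistics

random.seed(2)

# Both orgs behave identically per task type; only the mix differs.
def cycle_time(kind):
    base = 2.0 if kind == "small" else 8.0  # days, made up
    return max(0.1, random.gauss(base, 1.0))

org_a = [cycle_time(random.choice(["small"] * 3 + ["large"]))
         for _ in range(1000)]  # mostly small tasks
org_b = [cycle_time(random.choice(["small"] + ["large"] * 3))
         for _ in range(1000)]  # mostly large tasks

print(f"Org A mean cycle time: {statistics.mean(org_a):.1f}")
print(f"Org B mean cycle time: {statistics.mean(org_b):.1f}")
```

Org B's average comes out far higher even though its per-task behavior is identical, so comparing the raw means says nothing about which org is "better".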

But at the end of the day, I felt like we were hammering the nail into the coffin of the 10x developer idea, and that gave me a lot of joy. So I think the 10x developer thing is wrong. Other research you can bring to bear on this is really, really interesting, and I'll point you to it. We have a very strong meritocracy bias, especially in westernized cultures. We like to think about individual explanations. The more we think about those explanations, the more cognitively accessible they are to us.

So two people could look at the same data, and one of them is really deep in this meritocracy bias, and all that ever comes up into their mind are individual explanations. And they go to their next leadership meeting, and they say, well, all that's ever been