A Deep Dive into Developer Experience Surveys

Developer experience describes what it’s like for developers to build software within your company and teams. It’s a broad term with many components. Developer tooling is one part of it, but not everything. Processes, policies, architecture, security practices, documentation, habits, team culture, cross-functional relationships, and even laptops and hardware make up developer experience.

Discussions about developer experience (sometimes abbreviated as DX, like UX) have become especially prominent in the last few years, as companies try to maximise the impact of their development teams while also trying to prevent developer turnover. Exploring DX goes beyond productivity metrics, and helps you get a health check on the state of your teams. It can answer the question “do they feel like they can get their work done easily?”

A great developer experience correlates with other positive team traits such as high motivation, high productivity, low attrition, and overall well-being. Companies with a great developer experience should expect to see teams iterating quickly without friction, more frequent use of best practices like refactoring, more experimentation, and high-impact outcomes.

On the other hand, poor developer experience leads to lower motivation, higher attrition, lower perceived productivity, and poorer business outcomes.

DX has become so important to organisations that we’re starting to see whole teams dedicated to developer experience (outside of Big Corps, who have had the resources to do this for years). Two great examples are Snyk and Stripe, both of which have shared a fair bit about their experiences.

But, you don’t need a dedicated team to start investigating and improving developer experience. One common way that teams begin the conversation around developer experience is by putting out a developer experience survey (you might also see them called developer satisfaction or developer engagement surveys).

This article will go deep into a few key topics to consider when creating and analysing developer satisfaction surveys.

  • Techniques to measure developer experience

  • Anonymity and demographic data collection

  • Target audience and frequency of surveys

  • Tooling

  • My survey template and sample questions

  • What to do with the results

Measuring developer experience

A survey isn’t the only way to measure developer experience, but it’s a common way to get started. This article from Abi Noda breaks down the different ways that teams can get a pulse on developer experience and satisfaction. 

Office hours, 1-1s, retrospectives, or other team meetings can give you insight into how people are feeling.

For your own measurements, the decisions you need to make are: a) how much variability can you tolerate in the answers while still getting a clear signal, and b) how important is it to track progress over time?

On both counts, surveys are an attractive option. You control the question set and response types, and you can easily send out the survey multiple times to track changes in the results.

But surveys often don’t leave room for open-ended answers, meaning you might miss out on something important. When putting together a survey, I first look at the qualitative, messy data from team interactions like retrospectives or office hours. From there, my team is already pointing me in a direction, and I adapt questions based on what those conversations have already told me. Another technique you may want to try is starting with a standard set of questions, and letting those questions open up deeper conversations. I’ll share my own question set later in this article.

Anonymous or not? What about segmenting by demographics?

Like many other employee engagement surveys, I recommend that your developer surveys be anonymous. Anonymous surveys have many advantages, the primary one being that participation is generally higher. There are some drawbacks as well; specifically, responses can be more extreme (both positive and negative) because there’s no attribution or accountability for how someone answers.

Since a developer experience survey is designed to capture emotions and sentiment, the drawback of more extreme answers isn’t a top concern for me. But people opting out of the survey because they are afraid of retaliation would be counterproductive, which is why I recommend keeping these surveys anonymous.

Collecting demographic information might be appealing so you can segment responses by things like tenure, years of experience, gender, and location. But if team sizes are small, you may not be able to segment on demographic information in a way that preserves anonymity. Most engagement survey tools will only show aggregated metrics for teams with more than 3, sometimes 5, members, which ensures that answers likely can’t be attributed to individuals. With small numbers, it’s just too easy to tell who wrote something, which breaches the trust of the anonymous survey (like the anonymous surveys you filled out in grade school, not realising that your teacher knows everyone’s handwriting!).
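If you’re doing the analysis yourself (say, from a Google Forms export), you can apply the same suppression rule by hand. Below is a minimal sketch in Python, assuming each response is a dict with a demographic field and a numeric score; the field names and the threshold of 5 are illustrative, not the behaviour of any particular tool.

```python
from collections import defaultdict
from statistics import mean

MIN_GROUP_SIZE = 5  # assumed threshold; survey tools commonly use 3-5

def aggregate_by_segment(responses, segment_key, score_key):
    """Average one question's scores per segment, suppressing small groups."""
    groups = defaultdict(list)
    for response in responses:
        groups[response[segment_key]].append(response[score_key])

    summary = {}
    for segment, scores in groups.items():
        if len(scores) < MIN_GROUP_SIZE:
            # Too few respondents: report nothing rather than risk
            # answers being attributed to individuals.
            summary[segment] = None
        else:
            summary[segment] = round(mean(scores), 2)
    return summary
```

For example, `aggregate_by_segment(rows, "tenure", "cicd_satisfaction")` would return an average for a “2-5 years” group of eight people, but `None` for a “5+ years” group of two.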

If you have a large enough team, collecting some demographic data in the survey might allow you to interpret the results more accurately. Keep in mind that your company may have specific guidelines about surveying employees that you’re obligated to follow. In any case, be transparent about what data is collected, if and how demographic data is collected and used, and how responses will be shared.

Picking the right tools

There’s a lot of flexibility when it comes to picking a tool to run the survey. If you’ve never run a survey before, I recommend first using Google Forms or something similar to roll your own survey. Once you’re confident the data is helpful to you and your team, you might consider buying a solution that will take care of the logistics for you. There are two options here:

  • Use your existing employee engagement survey tool, if you have one (Culture Amp, Lattice, Officevibe, etc). Many of these tools support custom surveys, so you can add your own developer experience questions and send out pulse surveys on a schedule.

  • Use a tool like DX, which is a surveying tool specifically for developer experience. DX not only handles the logistics of the surveys, but also provides you with questions and benchmarks your data against the industry.

If you don’t have an employee surveying tool yet, I wouldn’t buy a generic surveying tool just for the purpose of developer experience surveys. I’d go right to DX (or wait a few months until there are some competitors to DX on the market).

Target audience and survey frequency

You want enough time to pass to make changes based on previous responses, but also for your team to notice that changes were made. Every 8 weeks is a good cadence for surveys, but even going up to once per quarter is fine. Running the survey twice a year or less doesn’t provide enough data to notice meaningful trends or track progress.

Beware of survey fatigue. Your team will become frustrated if the surveys are so frequent that they can’t see anything happening as a result of their participation, and engagement will drop off.

The name “developer experience survey” suggests that this is a survey for developers, which is mostly accurate. Any individual contributor within your engineering organisation is the target audience for these surveys. If you have specialised engineering functions such as data engineering, design engineering, or QA, make sure that the questions apply to them, or create a separate survey that’s tailored to their needs. 

Product managers, designers, or technical writers can participate in these surveys if they are cross-functional partners in development. In the survey template I’ve shared below, the questions in the Project Planning, Team Processes, and Perceived Productivity categories aren’t specific to engineering, and can be answered by other members of the product development team. Remember the point about demographic information above: if you only have 1 PM and 1 designer, it will be hard to maintain their anonymity if you choose this approach. But if team processes like sprint ceremonies are decided by a cross-functional team, it’s advisable to give all of those functions a chance to give feedback on the processes before deciding to make changes.

Survey Template and Sample Questions

I’ve collected my most frequently used questions in this doc. You can copy/paste these into a survey tool of your choice.

I designed this survey using the SPACE framework as guidance, along with my experience working with over 100 leaders on team performance. Questions are grouped into categories that are relevant for most engineering teams.

These are:

  • Efficiency, Focus Time

  • Production, Monitoring

  • Deployments and Releases

  • Collaboration and Knowledge

  • Codebase Health

  • Project Planning, Team Processes

  • Tradeoffs

  • Perceived Productivity

There’s also an Overall Satisfaction category that includes broad questions about satisfaction.

While the categories were rather straightforward based on research and my own experience, writing questions is harder than it appears on the surface. Just like designing software, the way you design a survey will influence how people use it.

Anything designed by a human being is going to have a bit of bias built into it. Some ways that bias shows up in surveys like this include:

  • Unequal weighting of categories

  • Complicated question structures, which can be confusing to people who don’t speak English as a first language

  • Framing some categories with positive questions (“I’m satisfied with…”) and some with negative questions (“We don’t do enough…”), which can set a tone for the question and result in different responses, or confuse survey participants.

I’ve controlled for unequal weighting by targeting 3-4 questions in each category. Complicated English has been addressed by having a handful of non-native English speakers proofread the questions.

There are groups of questions that are coded negatively, meaning that the question is designed to get the respondent to agree to a negative statement. This is intentional, as a way to provide some checks and balances across all of the questions.

For example, “I feel like a lot of my time gets wasted.” This question balances out other statements like “Our team processes are efficient” and “Our meetings are effective and useful.” One interesting thing that you might notice in your own results is that your team might respond favourably to those two questions (giving the impression that meetings and processes are efficient) but then also agree that a lot of their time gets wasted. 

So, did your team agree that these processes and meetings were efficient because the question was framed positively, did they agree that time was wasted because the question was negative, or is there something other than meetings and processes (slow CI/CD?) that actually does waste their time? This is where you need to get curious, and talk about the results with your team.

Look through the questions yourself and spot the questions that go together (they’re often in different categories). This will help you when looking at your team’s results.
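If you’re tallying results yourself, reverse-scoring the negatively coded items makes those paired questions directly comparable. Here’s a small sketch, assuming a 1-5 agreement scale; the question IDs and pairings are hypothetical stand-ins for whichever questions you pair up from the template.

```python
LIKERT_MAX = 5  # assuming a 1-5 agreement scale

# Hypothetical question IDs; negatively coded ones get flipped.
NEGATIVELY_CODED = {"time_wasted"}

# Hypothetical pairs of questions that should roughly agree
# once every item points in the same direction.
PAIRS = [
    ("processes_efficient", "time_wasted"),
    ("meetings_effective", "time_wasted"),
]

def normalise(question_id, score):
    """Reverse-score negatively coded items so higher is always better."""
    if question_id in NEGATIVELY_CODED:
        return LIKERT_MAX + 1 - score
    return score

def inconsistent_pairs(averages, tolerance=1.5):
    """Flag paired questions whose normalised averages diverge.

    A flag is a prompt to get curious with your team, not a verdict.
    """
    flags = []
    for positive, negative in PAIRS:
        gap = normalise(positive, averages[positive]) - normalise(negative, averages[negative])
        if abs(gap) > tolerance:
            flags.append((positive, negative, round(gap, 2)))
    return flags

# Example: the team agrees processes are efficient (4.2/5) but also
# agrees a lot of their time gets wasted (4.1/5 -> 1.9/5 after flipping).
print(inconsistent_pairs({
    "processes_efficient": 4.2,
    "meetings_effective": 4.0,
    "time_wasted": 4.1,
}))
```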

Discussing survey results and following up

Before you send out a developer experience survey, have a plan in place to use the results, and share it with your team. Being clear about what happens next will encourage people to participate, because they know they aren’t just throwing their opinions into a black hole. Plus, it helps you plan ahead to create space for both the conversations and follow-up actions that will come out of a survey like this. 

If you plan on putting out a survey just because you’re curious about the results but don’t plan on adjusting roadmaps to accommodate any new work that will come out of the survey: don’t. Your team will be frustrated, they’ll lose trust in you, and it’s a waste of time for everybody.

The results from this survey will serve as a benchmarking baseline for your team. From here, you can do two things: benchmark against yourself, or benchmark against the industry.

Benchmarking against yourselves is more accessible to most teams. With this technique, you’re looking at your team's results and tapping into the experiences of the team to figure out what interventions would have the biggest impact for them. 

Here’s what it looks like in practice: your survey goes out and results come back a week later. You share the results with your team either live or via video/slides. Then, you set up either a meeting or an async conversation for people to weigh in about what should be prioritised. You won’t be able to improve everything that comes back on the survey, but the results should point you in some direction.

For example, let’s say that your team had a very low score when it came to satisfaction with CI/CD: 3.5/10 on average. As a group, you come up with possible interventions and plan time for those projects. The next survey goes out 6 weeks later, and you’ve brought that score up to 6/10. Still a ways to go, but some good progress! (Note: If you need help here, I have a course on measuring team performance and offer private workshops on improving developer experience.)

Consider some sample results where it looks like you’ve got two groups: one that’s pretty happy with things, and one that feels pain. Just looking at average scores won’t give you that insight; you need to go a bit deeper into your analysis, and then get curious about why.
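A quick way to spot that kind of split in your own data is to look at the full distribution of responses instead of the mean alone. Here’s a tiny sketch with made-up scores for one question; the numbers are purely illustrative.

```python
from collections import Counter
from statistics import mean, stdev

# Made-up raw scores for one question on a 1-10 scale.
scores = [2, 3, 2, 3, 8, 9, 8, 2, 9, 8]

print(f"mean={mean(scores):.1f}  stdev={stdev(scores):.1f}")
counts = Counter(scores)
for value in range(1, 11):
    print(f"{value:2d} | {'#' * counts[value]}")
# A middling mean with a large spread and two separate humps in the
# histogram points at two distinct groups, not one lukewarm team.
```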

Benchmarking against the industry will give you more data points when it comes to making a decision about what to prioritise. It doesn’t replace the experiences and opinions of your team, but can help you contextualise your results based on how other teams are doing. The tricky part here is that you need to have access to industry benchmarking data, which can be tough. 

DORA metrics are a widely used set of benchmarking data, but they often don’t map 1:1 to the questions that typically appear on developer experience surveys. Still, they can help you better understand your results. For example, if the survey question “I’m satisfied with the speed and reliability of our CI/CD tooling” comes back with a very low score, and you see that your team is a “medium” performer according to DORA (find out here), you’ll have a clear indication of what to work on.

Generally, surveying tools (both generic tools as well as specialised tools like DX) will provide benchmarking data as part of the product. I’m not aware of a better place to get developer satisfaction and developer experience benchmarking data than DX right now, unless you want to sift through hundreds of academic papers yourself.

Part of the decision to include industry benchmarking data will be financial, and part of it depends on what outcomes you’re looking for. Again, industry benchmarking data does not replace the experiences and opinions of your team; it supplements them. I’ve worked with plenty of teams who do not have access to large amounts of industry benchmarking data and rely solely on their own data, and they’ve made some great progress. Other teams have chosen to get access to industry data and have used it to make prioritisation decisions.

When you send out the next survey in 8 or 12 weeks, the goal is for team members to notice changes happening based on their responses, and for the scores to improve. Include this question in all future surveys: “I've noticed changes based on the feedback from the last survey.”

TL;DR

  • Surveys are a common way to get a sense of developer experience and satisfaction on your teams.

  • These surveys should be anonymous, and go out every 8-12 weeks. Adjust this based on other surveys your team might participate in.

  • These surveys are designed for individual contributor developers (no managers), but including product, design, or other functions might be appropriate.

  • Don’t collect demographic information if your team size is small. It compromises anonymity.

  • The easiest way to get started is by creating your own survey using Google Forms. You can use my questions. Later, you might consider paying for a custom tool.

  • Make a plan to share and discuss the data, and inform your team about it before the survey goes out. This encourages them to participate, and gives them confidence that their responses will influence change.

  • Discuss the results with your team and make a plan. You might consider using industry benchmark data to supplement your team’s responses.

  • Send out follow-up surveys.


Note: I mention quite a few tools in this article. I don’t do sponsored content, and I don’t receive any referral bonuses from any tools or companies that come up in this article.
