Recent Articles

Laura Tacho

Three Unexpected Lessons from Rolling Out the SPACE Framework at Scale

What should you consider when rolling out the SPACE framework to your whole engineering organisation?

Through both my course on engineering team metrics and the engineering leaders I support through coaching, I’ve worked with over 200 leaders who are using the SPACE framework and DORA metrics to answer questions about their teams. Some of these leaders are responsible for big teams – in the 100s or more – and it’s been interesting to compare their experiences rolling out SPACE at scale with those of startups using SPACE and DORA from almost day one.

You don’t need to explain the intricacies of the SPACE framework to everyone in your organisation before you start using it.

Trying to educate every single developer about the SPACE framework, DORA metrics, and why they are used is – simply put – a waste of time. DORA is relatively straightforward and generally easy to understand after a quick read of a decent high-level blog post. But the SPACE framework is nuanced and very open-ended. It’s not a list of metrics like DORA, but rather a framework that leaves the choices up to your organisation.

Some leaders hesitate to get started with SPACE until they’ve provided enough explanation and training to their teams of individual contributor engineers. This really stalls progress. While I definitely agree that an understanding of and curiosity about SPACE can be beneficial for everyone in an engineering organisation, that understanding is more important for the decision makers than for the engineers who are feeding data into the system.

What’s most important for engineers is why the metrics were picked, who sees them, and what happens when the targets are met or missed.

Instead of leading your explanation with the SPACE framework, let it be a supporting note that gives context and credibility.

Avoid this:

“We’re implementing the SPACE framework. Here’s an explanation of the framework and this is how we’re going to use it…”

And aim for this instead:

“We’re using metrics to make better decisions about our teams. We’re starting with these metrics. They will be seen by X, Y, and Z groups. Expect A and B to happen if we miss the metrics. We chose these metrics based on our company needs, and also by using the SPACE framework, which is a body of in-depth research about using metrics on engineering teams.”


Survey fatigue is real.

A bigger company usually means more processes. Some startups choose to send out employee satisfaction surveys from early on (think Lattice, Culture Amp, etc), but it’s almost a guarantee at larger companies. 

It’s critical to think about how often an engineer receives a survey from your company as a whole, not just how often you are sending out a developer experience survey aligned to the SPACE framework. If your company’s employee engagement survey goes out every six weeks and you add another developer survey every six weeks, there will be some months where your team gets two surveys – which might look and feel pretty similar – from your company.

I’ve even seen engagement surveys at the company level going out every fortnight, meaning that some weeks, people get two surveys to fill out.  Don’t be surprised if the engagement is low in this scenario. It’s a lot.

Aside from frequency, another antidote to low engagement is to make it clear how and when your team members should expect to see changes based on their responses to the survey. 

  • Make a plan to share and discuss the data, and inform your team about it before the survey goes out. This encourages them to participate, and gives them confidence that their responses will influence change.

  • Discuss the results with your team and make a plan. You might consider using industry benchmark data to supplement your team’s responses.

  • Send out follow-up surveys.

Things get busy, and often things like developer experience get deprioritised because they’re not perceived as being on the critical path toward revenue. But as soon as you stop making it beneficial to your team to participate, they’ll stop sharing their opinions, and your surveys will get thrown into the ignore pile like so many other employee engagement surveys.

Phishing training really works!

Perhaps the most surprising thing was the sheer amount of change management and communication required to roll out these kinds of organisational changes at scale.

  • Where are the metrics?

  • Why were they picked?

  • When do we talk about them?

  • Who collects the data?

  • Who has access to them?

  • When should they make an appearance in decision-making activities?

These are all questions that need to be answered. Beyond that, you need to form habits around them. And sometimes introducing a new habit into a larger organisation is like trying to steer a cruise ship with a spoon. 


I was also surprised to hear anecdotes from several managers at different companies that some of their team members didn’t respond to the developer experience surveys because they thought they were a phishing attempt. This was even after organisation-wide emails or Slack messages that outlined the process for the surveys.

The lesson here is to make it personal. Even if you’re sending out organisation-wide messages about these changes, it’s critical for each manager to be equipped to discuss the details with their teams in a smaller group where the impact feels more direct.

Laura Tacho

Emotional Reactions During Performance Reviews

We follow The Rule of No Surprises to avoid unexpected emotional reactions during performance review conversations. Still, they can happen. Here’s what to do.

We follow The Rule of No Surprises to avoid unexpected emotional reactions during performance review conversations. Still, they can happen.

When they do, it's not your job to control or fix someone's emotions. It is your job to listen and reassure them that you hear them, even if you don't agree with their reaction.

An emotionally dysregulated person usually can't access the logical parts of their brain. Quite frankly, responding to an emotional reaction with a logical argument is likely to make them more aggravated.

Here's an easy way to remember this:

  • Respond to emotion with emotion

  • Respond to logic with logic

Mixing up logic and emotion will almost never get you the result you're looking for.

However, there are limits here. You don't have to tolerate harassment, verbal abuse, or other kinds of disrespect.

Respond to emotion with emotion

This doesn't mean that you should also get emotional. In fact, it's kind of the opposite. The best course of action is for you to keep your cool. What you do need to do is acknowledge their emotions. Otherwise, they leave the conversation not only feeling bad about the feedback, but also feeling like their manager doesn't listen to them. You don't need to agree with them.

  • "You really feel this is unfair feedback."

  • "I acknowledge that you're disappointed that you didn't get the promotion we had been talking about."

  • "I can see it's rough for you to hear this feedback."

These negative emotional reactions can have an impact on you, too. After all, you've just ruined someone's day, and it doesn't feel great. You need to take care of your own mental health and get in the right headspace for these conversations. Here's a bit of a silly visualisation that helps me: you are holding a box. I literally imagine that I'm carrying a box. The box is a safe place for someone to express their emotions. As soon as our conversation is over, I close the box. I leave it somewhere. I don't carry it with me. This both helps me keep a level head during the conversation (their feelings are just going into the box, after all) and reinforces that I'm not responsible for someone else's emotional response.

Get something to drink

In the heat of the moment, it might become obvious to you that the person needs some time to collect themselves. Chances are, they won't be able to realise this. After all, an emotionally dysregulated person will not be able to access their logical brain to recognise that they need a break.

I say something like "I'm going to grab something to drink. I'm going to turn my camera off and I'll be back in a couple of minutes."

Hopefully, after a bit of a time out, both of you come back with a mindset more focused on productive conversation.

Ending a call

If that doesn't happen, it's okay to end the call. I have done this immediately in the cases of name calling, unfair accusations, other verbal abuse, or if it becomes abundantly clear that the conversation is not heading anywhere.

Ending a call doesn't mean that the conversation won't happen. It just means that it won't happen right now.

Here's my script:

"I think we've reached the end of productive conversation here. I'm going to end the call, and I'll get in touch with you later today (tomorrow/on Monday/etc) to schedule a time to finish this conversation."

Laura Tacho

Setting Performance Goals for Low, Mid, and High Performing Engineers

What’s the right number of goals, and how should your goal-setting strategy differ for low, mid, and high performers?

Performance goals need to find the balance between an individual's own goals and the goals of your company.

Let's get the uncomfortable truth out of the way: your business won't be able to provide all of the growth opportunities that your team members are looking for. This is especially true for high performers at more senior levels. Leadership opportunities become more scarce the higher up you are. We’ll cover ways to set goals for higher performers later in this article.

How many goals should each team member set?

As a general rule, fewer goals mean a higher chance of success. I think about it in terms of a "goal budget." Each person has 3 goal points. This could translate into three small goals (1 point each) or one big goal (3 points). A small goal is something fairly transactional (like developing an architecture document, which can be done in a couple of weeks), while medium and large goals might take anywhere from a month to a full quarter to achieve.

SMART goals

It's important that any goal is realistic. The SMART goal framework can help you here:

  • Specific: the expectation should be clear.

  • Measurable: there's data or evidence to show if the person is on track.

  • Attainable: it's possible to achieve in the given time frame.

  • Relevant: it relates to performance objectives, business objectives, and the person's career objectives.

  • Time-based: it has an end state.

Just like with setting expectations, goals need to be clear and specific. A vague goal like "spend more time on technical strategy" won't pass the SMART test. But "Create, socialise, and present a technical strategy document about migrating to Kafka before the end of November" definitely does.

Setting goals for low performers

In some ways, this is the most straightforward scenario for goal setting. There are clear deficits that need to be corrected, so the goals need to align with those areas.

It's important for you to deeply understand the reason for low performance: is it a lack of capability where the person can be trained to learn new skills, or is it that the person does not understand expectations, and therefore does not rise to meet them?

In the first case, where skills and capability are the limiting factor, it's important to pay attention to the attainable characteristic of the goal. Will they be able to level up in the time frame available to them? If not, what would be realistic?

If the problem is expectations, more of the work lies on you, the leader. That's because you're responsible for setting clear expectations. In this scenario, the tricky parts of setting goals are being specific enough (S in SMART), and finding the right measurements to track progress (M). You might already have a performance expectation like "keep your tickets in Linear up to date," but this team member consistently leaves out critical information, or you need to chase them down on ticket status. In this case, you need to spell it out: what exact type of information should be in the tickets? When should it be added? And how will you measure if they are hitting this goal or not?

Expectation descriptions can change based on the seniority of the person. Typically, the more experienced the person, the less specific an expectation needs to be, because part of being a more senior engineer is translating something vague into something very clear. But even for a senior engineer, a performance review is not the time to be vague. If they continue to underperform, you're setting both sides up for a very tense relationship where the team member doesn't feel supported and doesn't know what's actually expected of them.

Setting goals for mid-level performers

Team members should be driving their own career development, and it's your job to match their goals up with the needs of your company. If you have a defined career ladder, you can leverage it to find the kinds of opportunities that will take your team members to the next level.

For those team members who are solidly “meeting expectations,” it’s important for you to be on the same page with them about their career goals. Are they looking to get promoted in a few cycles, or are they content to spend more time in their role and really master it? Depending on the answer, you’ll take a different course of action.

For the promotion course, look at your career ladder for direction.

It’s also totally fine (and under-appreciated, in my opinion) to have team members who are looking to spend more time in their role and really master it. These people aren’t competing for limited opportunities in order to be promoted, so you can be more creative about helping them build both breadth and depth in their skill set.

You can check out my list of 20+ Goals for Software Engineers for some inspiration here.

But, it's still important to be realistic about what your company needs. You might have a team member who has been itching to speak at a conference and become more active in the open source community. But, due to the economic downturn, your company has suspended all non-essential travel. Creating a performance goal of speaking at a conference just doesn't make sense in this case. But, you can figure out an alternative that would align to business goals and constraints -- maybe that means a virtual meetup, or joining a SIG (special interest group) that meets monthly.

Setting goals for high performers

For your highest performers, it's very important that you are clear and transparent about how quickly they can expect to be promoted, and what consistent "exceeds expectations" can mean for their salary. Time in role is often a consideration for promotions, and credibility and influence become more important the more senior a person is. I've seen so many disappointed senior engineers who expect a promotion to staff engineer because they're doing great as a senior engineer, but don't have the social capital and exposure to be effective as a staff engineer.

A few ways to deal with goals for high performers:

  • Put them in front of a new audience. This could be delegating a presentation in a company-wide all-hands or executive meeting, appointing them as the delegate from your team in a cross-functional meeting, or having them lead an initiative with a partner team, like data engineering. This can also include creating opportunities for them to lead communication with marketing or sales for a particular rollout. All of these experiences give them more exposure to other people in the company, and increase their credibility and influence.

  • Give them scope and autonomy. Technical excellence isn’t the only component of a great engineer. Scoping projects, developing solutions, and experimentation help engineers build the right thing in addition to building the thing right. Especially in a product company, these skills are essential for an engineer eyeing up a promotion to staff or above. Find a project where a high performer can have full end-to-end ownership, and give them both the space and support to practice these skills.

  • Challenge them to come up with their own goals. Part of maturing in your role means taking on more responsibility for setting direction, both for the team and also for their own career.

Example Goals for Software Engineers

I've compiled a list of 20+ goals for software engineers if you and your team members need a bit of inspiration.

20+ Goals for Software Engineers

Laura Tacho

Lightweight Performance Reviews: Process, Schedule, and Templates

Templates, examples, and a schedule for a lightweight performance review process.

If your team has been asking you for more performance feedback, but your company doesn’t have a performance review process, it can be difficult to know what’s important to include and what is a waste of time.

Here’s a lightweight, “just enough process” version of performance reviews that you can use as a starting point.

This is specifically for people who don’t have processes or tools available to them from their companies.

Components of a Lightweight Performance Review Process

  • Self review completed by the team member

  • Written review by a manager

  • Performance review conversation to review feedback and set goals

    • See sample agenda below

Intentionally missing: peer feedback, 360 feedback, feedback for managers, feedback for the company.

If you want to make this process even more lightweight, you can skip the self review.

Performance Review Process Timeline

This whole timeline is based on the timing of the performance review conversation.

  • 3 days before conversation: the team members send their completed self reviews to their manager.

  • 1 day before conversation: the manager sends their written feedback to the team member for review ahead of the conversation.

  • Conversation day: since everyone has read written feedback before this conversation, you can use the time to discuss broader feedback themes, goals, and development opportunities. All you need to do is show up and follow the sample agenda (in a section below).

    • Be clear with your team that they should come to the conversation prepared to discuss the feedback you've shared with them. That means they need to take time to read it beforehand.

    • The manager should be responsible for setting up the calendar invite and logistics for this conversation. I typically use a standing 1-1 meeting slot, if that slot is 60 minutes. Otherwise, schedule something else. It might be a good idea to cancel a regular 1-1 if your performance review falls in the same week (depending on the usual content of your 1-1s).

Evaluations and Ratings

  • Missed expectations

    • The team member didn't meet the expectations as outlined in their role expectations.

      This might look like:

      • Worked on solving problems that weren't critical to growing the business

      • Consistently missed growth goals and/or project deadlines

      • Made decisions without thinking of data, our users, or impact on the business

      • Hardly ever gave anyone feedback

      • Was negative, disrespectful, or apathetic toward others

      • Didn't set development goals

      If team members receive a Missed Expectations rating, they work with their manager to come up with concrete changes to make in order to get their performance back on track.

    • I give this rating in the case where someone is dramatically underperforming in one area of their job, but also when they are just slightly underperforming across a number of job responsibilities.

  • Meets expectations

    • Team members with this rating are performing well, mastering their role, and having a great impact. Most people fall into this category, including high performers. My expectations are high, and meeting them is a great achievement. If team members Meet Expectations, they work with their manager to keep moving forward.

  • Exceeds expectations

    • Earning this rating is rare. These team members are going above and beyond and have met and exceeded all of the expectations outlined in their role expectations. They contribute in ways that go beyond the scope of their level and/or role.

    • Only 10% of people should fall into this category. If it's more than that, your expectations are too low, or you are underlevelling people. This can be difficult for people who have worked at companies where “exceeds expectations” is the default.

Communicating the Process to the Team

Like any process, things go sideways when people don’t know what to expect, and when to expect it. If you are rolling this out for the first time, you’ll need to give ample heads up to the team, so they have time to understand the timeline and ask any clarifying questions.

For these examples, I’m assuming that your team is aligned on the value of performance feedback already, but you all need some support in the mechanics of having performance conversations.

At least 4 weeks before performance reviews: share expectations and timeline for performance reviews

  • Typically, onboarding processes will include some information about performance reviews. After you roll this out for the first time, include it in your onboarding processes so people know what to expect from the start. It’s also not uncommon for this to come up in interview questions from prospective engineers. They want to know how their performance is being evaluated.

Make sure to share whatever evaluation or rating system you decide to use.

At least 2 weeks before performance review conversation: share expectations, template, and deadline for self reviews. Also, schedule the conversations. If it’s going to happen during a regular 1-1 slot, change the name of the calendar event to reflect that it’s a performance review conversation.

  • Your team members need to take time to complete the self review and send it to their manager at least 3 days before the performance conversation.

Sample Performance Review Meeting Agenda (60 minutes)

As noted above, the manager is responsible for setting up the calendar invite and logistics for this conversation, typically in a standing 1-1 meeting slot if that slot is 60 minutes.

  • Manager kicks off conversation (5 min)

  • Manager covers high-level and important points about the performance review (10 min)

    • There's no need to read the review verbatim here. You've set the expectation that both you and the team member will read written feedback ahead of time.

  • Team member leads discussion/questions about performance evaluation (20 min - it's okay if this overlaps or weaves into the agenda item above)

  • Identify performance goals, led by team member (20 min)

  • Wrap up (5 min)


Laura Tacho

Prepping for Performance Reviews

Painless and Productive Performance Reviews, Part 1

Performance reviews should be both painless and productive.

Let’s face it: there’s a lot of pain when it comes to performance reviews, and a lot of it is administrative overhead. Finishing all of the paperwork and written documentation can be a heavy lift.

There’s also a bit of pain when it comes to the conversations themselves, especially when you’re going to share a performance result that you know the team member won’t be happy with.

The key to making both of those situations easier to handle is preparing, and starting those preparations early enough. Each company has their own performance review processes (and some don’t have any process at all!), but the dream state here is that as a manager, you don’t have to spend a ton of time preparing the paperwork, and for your team members, the conversations don’t have any surprises, even if you’re delivering bad news.

The Rule of No Surprises

Nothing in a performance review should be new information.

If you're practicing active performance management, you should take opportunities to give feedback when they happen. This will lead to your performance review just being a recap of feedback that you've already given, and that's what we're aiming for.

The Rule of No Surprises also applies to the performance review conversation itself. Some new information that comes up in a performance review might be a numerical rating about performance, information about salary changes, or a new job title. With very few exceptions, I also recommend sharing this information in writing just ahead of the conversation. This allows the team member to react to the news in private, collect their thoughts, and come into the conversation ready to discuss it.

Of course, giving people great news in person -- like a promotion they've worked hard for -- is one of the best parts of being a manager. You know your team best, so use your judgement here. If someone is expecting a promotion but they are not getting one, you likely want to give them time to process the news privately before heading into a conversation.

Surprisingly, sharing news about a salary increase in person might not be the great experience for the team member that you'd hope it would be. They might be disappointed about the number, and they're left to deal with that disappointment live, on camera. It's awkward.

For this reason, I recommend doing these things to comply with the Rule of No Surprises:

  • Share a written recap of your feedback with your team member beforehand, including any numerical ratings or "official" performance review documents.

  • Share salary updates in writing before your performance review conversations if you can.

  • Share disappointing news in writing beforehand, using your discretion.

Brag Sheets, Shared 1-1 Notes, and Other Tricks

Prepping for performance reviews can be a huge pain. We all love to think that we'll prep incrementally over the course of the quarter, but it's hard to keep up. Here are things that have actually worked to stay ahead of the curve when it comes to prepping for performance reviews.

Even if review season starts in 4 weeks, you still have enough time to use some of these tricks.

Keep shared feedback in a 1-1 document

I haven't found a better way to stay prepared for perf reviews than keeping really great notes. Ideally, these notes should be in a shared document with your team members. This has a lot of benefits: 1) you get into the habit of giving feedback regularly and practicing active performance management; 2) you are following the Rule of No Surprises because everything is documented already; and 3) you can literally copy and paste feedback comments from your 1-1 doc into your review.

My template for a shared 1-1 document is super simple. You can find it here.

Make a brag sheet for your team

Here's something that feels like cheating: you can prepare for your own performance review and the performance reviews for your team members at the same time.

Since you're being evaluated on the performance of your team, you can absolutely use a brag sheet (list of accomplishments) for your team for both purposes.

This doesn't strictly need to be a document, but it can be. I really enjoy Lara Hogan's donut log. Whenever the team does something impressive, or just releases a project exactly on time and without fanfare, have a small celebration (like a donut). Take a picture and mark down why you're celebrating. You'll have a nice record of all the great stuff your team did over the course of the quarter.

Things I also add to the brag sheet: positive feedback that I hear about my team from other people.

15 minutes at the end of every month

Try this if it's realistic for you. Of all of these tips, this is the one that's easiest to skip because there's really no habit around it -- you just have to be disciplined about it.

At the end of each month, I set aside 15 or 20 minutes to mark down notes of any big-picture performance trends or possible performance goals that might not be immediately obvious from the tactical day-to-day notes that I take elsewhere.

15 to 20 minutes isn't really enough time for reflection and documentation, so I usually fit reflection in at another time. I've personally just found that it's easy to skip a 1-hour time block (and use the time for all of the other things I need to do), but I can stay committed to taking just 15 minutes to mark down some notes and check off a to-do list item at the end of the month.

"Shitty first draft" method

My biggest trick for painless performance review preparation is to just start. We can get inside our own heads and slow progress because we want our reviews to be perfect. So I start with a "shitty first draft" (SFD). The deal with an SFD is that you expect it to be crappy. That's the point. So write out the roughest, crappiest version of your performance review, then come back to it in a day or two. You'll realise that your SFD isn't as crappy as you thought, and you'll be a lot further along than you would have been if you had waited for perfect inspiration to strike.

What About Self-Reviews?

I always include a self-review as part of the performance review process when I can, and I consider it part of preparing for a successful review.

Some reasons I recommend them:

  • First, a selfish reason. They are a great calibration tool. Self-reviews help you understand if you're doing your job when it comes to giving feedback actively. If a team member writes a glowing performance review of themselves and you're just about to tell them that they're not meeting expectations, something is wrong, and it's up to you to figure it out.

  • They give your team members a chance to share their voice in the performance review process, which can otherwise feel like a process where they don't have a lot of input.

  • They make the team member take an active role in the performance review process instead of just waiting for your feedback.

On the flipside, some reasons they aren't helpful:

  • They can be a lot of work for the team member if they also don't incrementally prepare over the course of the evaluation period.

  • Self-reviews can come across as "performance review theater" if they're not really considered in the feedback, or if they're never discussed.

When making your decision about whether or not to use them (if it's up to you), ask yourself whether the self-review will actually be used in the process, or if it's just a step that you want your team member to go through for purposes of self-reflection.

And if you already use self-reviews but haven't been framing them as an opportunity for your team member to have an active voice in their performance review process, now is a good chance to start.

Laura Tacho

How Useful Are Free Salary Benchmarking Reports?

Why I don’t trust free salary benchmarking data, and where I look instead.

Last week, a salary data report from Hired.com was making the rounds on some Slack communities I’m in, as well as on Twitter. The timing was apt, because the same week, a few of my coaching clients brought up questions about salary benchmarking data, and how they could design a salary scheme as their companies continued to mature.

At first glance, the report from Hired looked useful. It confirmed some big trends that I’ve witnessed firsthand, such as engineers in the UK generally earning less than engineers in the USA, and that salaries are still on the rise, despite a cooling hiring market.

But these two points of confirmation weren’t enough for me to assume credibility for the rest of the report. I always approach free reports with a lot of skepticism, and you should too. Here’s why.

You’re the product

This report, and dozens like it, are a marketing tool. Sure, it’s a marketing tool that does provide some value for you.

Engineering leaders are notoriously a logic-driven group of people, and I am a skeptic at heart. I question the validity of everything, from a charity soliciting donations door-to-door to the nutrition score on my box of crackers (does it benefit the government to trick me into eating more wheat?!). Oddly though, as soon as something is both free and useful, many of us suspend that skepticism because there’s a job to be done, and this resource is helping us do it faster.

My rule when using free benchmarking data is the same as when I’m using a free service or tool:

If you’re not paying for anything, you’re the product.

The first question to ask yourself when looking at free benchmarking data — or any free reports published by a for-profit company — is “what are they selling?” In the case of Hired, they sell hiring and talent acquisition services, along with quite a handful of features like “salary bias alert,” which will tell you when you’re making an offer that’s out of range. It directly benefits Hired for you to look at this benchmarking data and think to yourself, “oh man, I think our salaries might be out of calibration with the market.” That thought makes you a more qualified lead for them.

Other questions to ask:

  • Am I their target customer? If so, there’s added incentive for the company to present the data in a way that makes you feel visceral pain.

  • How could my demographic data or questions benefit their product? Free product tiers or free whitepapers can be used to validate product ideas, or collect more data on a persona or demographic.

  • Where is the company located? In the case of Hired, they’re headquartered in the USA. Know that the data you’re getting is going to be skewed with that perspective. In this report, the data is coming from the US, Canada, and Western Europe, but presented as “worldwide.”

Understand your sphere of competition

If you are using salary benchmarking data to determine your own salary bands, or to verify that your offers are competitive, it’s imperative that you understand whose salaries are being reported, and if they are representative of the candidate pool for your open roles. Related to the last question above, “where is the company located?”, knowing the source and potential biases or omissions of your data is important.

Let’s see how this can play out:

Company A is a remote-first, internationally distributed startup. At Series C, they’ve gained significant momentum both as a top-tier engineering brand and as a company with a high valuation and potentially huge upside for equity. Because of this, they are competing on a global scale for talent. Specifically, a candidate in their hiring funnel can easily get an offer at another top-tier company, including ones with salaries normalised around San Francisco rates.

Company B is headquartered in Cleveland, Ohio. They are a publicly traded healthcare company, and they’re looking to expand their global offices by opening a satellite engineering office outside of London. Though most teams are flexible, employees are required to be in the office some of the time, so they hire only people who are local to the office, or who agree to relocate.

Company A and Company B are not competing for the same talent, and looking at two such stark examples can show how free salary benchmarking data isn’t always specific enough to know if you’re looking at salaries of people who are competitive on a global scale, on a local scale, or something else.

Is currency conversion important?

If it’s not obvious, I’m not an economist. I can’t speak to the textbook correctness and/or economic implications of comparing salaries in Europe to the USA with both salaries converted to USD, or if it’s reasonable to look at the base salary in their respective currencies. But, I can point out that it’s something you need to watch out for.

Check to see how your benchmarking data is reporting this, and how useful it is for your business. If your business is headquartered in the UK and you’re looking to hire engineers in the USA, it’s obviously going to be very useful to look at US salaries converted to GBP for your bottom line. But if you’re looking to understand what a competitive offer is in Berlin, knowing the USD equivalent isn’t as useful. The market rate for a job isn’t completely dependent on the exchange rate to USD.

My take on this topic is likely a bit different from other people’s, given that I worked for about a decade in the USA and then moved to Europe in 2015. Unsurprisingly, my mortgage and car payments do not change based on the value of the Dollar against the Euro. For this reason, I tend to grit my teeth a bit when hearing the argument that “well, an engineer in the UK is making 85k GBP but that’s 95k USD.” Factually true, but the cost of living isn’t necessarily tied to the value of GBP against the Dollar (though it’s been pointed out that inflation is driven by global economics, which does influence cost of living).

For a long time (and arguably still today), 1 EUR and 1 USD had the same buying power in local markets. Specifically, an iPhone was 899 USD or 899 EUR. My rent for a similarly nice apartment in Berlin was 1500 EUR a month, about the same as what I paid for my condo in Chicago. But things like out-of-pocket healthcare expenses do vary extensively from country to country.

Oddly though, I’ve never seen a company adjust base salary in the employees’ favour based on changing global exchange rates. However, it is interesting to mention that some companies — including one I formerly worked at — do give employees the option to select which currency they want to be paid in. I had some team members in countries with very volatile currencies elect to be paid in USD.

I asked Deel and Remote whether they had also noticed this trend.

In short, know how currency plays a role in the questions you’re trying to answer with benchmarking data.

Where I do look for trustworthy salary benchmarking data

I’ve spent a lot of time pointing out reasons not to trust the data, so to round it out, here are some places I do look to for trustworthy data around salary benchmarking.

  • Paid sources. Again, if you use free stuff, you’re using advertising. Many larger companies use a tool like the Radford Compensation Database for salary benchmarking. There are a handful of other tools like Figures that are positioned more for startups and small to medium businesses.

  • Published salary calculators. These only give you insight into one company, but the depth they offer is valuable. GitLab’s famously used to be public but is now available only to employees; Buffer’s is still out there. Codacy has a thorough writeup of how they’ve structured their calculator, as does Whereby.

  • Self-reported data. Glassdoor, Levels.fyi, and others are all good ways to get some data points about salary, but they never paint the whole picture, so keep that in mind. I really like the approach taken by TechPays, because it’s easier to see whether a salary is from a global or local company, at which level, and the balance of cash vs. equity vs. bonuses (Levels.fyi does a good job here too). These sources can never offer the depth and breadth of either paid benchmarking data or salary calculators from individual companies, but I do find them helpful to round out a picture.

  • Gem. Gem is a talent engagement and insights company, so a lot of the rules about “using advertising” and “you’re the product” definitely still apply here. But, I’ve followed the reports that Gem uses for years, and I have found them to stand out as unbiased, not using scare tactics, and transparent about what they are and what they aren’t. I’ve never felt tricked or that I was being sold to when referencing any of their materials, which I can’t say for other companies in the same space. They’ve got a bunch of reports that I’ve found useful when doing research, as well as a metrics calculator and lots of really solid blog posts about all things people.

Laura Tacho

A Deep Dive into Developer Experience Surveys

Anonymous or not? How often should I send them? Who should take them? And what do I do with the results?

Developer experience describes the experience that developers have when developing software for your company and teams. It’s a broad term with lots of components. Developer tooling is one part of it, but not everything. Processes, policies, architecture, security practices, documentation, habits, team culture, cross-functional relationships, and even laptops and hardware make up developer experience.

Discussions about developer experience (sometimes abbreviated as DX, like UX) have become especially prominent in the last few years, as companies try to maximise the impact of their developer teams while also trying to prevent developer turnover. Exploring DX goes beyond productivity metrics, and helps you get a health check on the state of your teams. It can answer the question “do they feel like they can get their work done easily?”

High developer experience correlates with other positive team traits such as high motivation, productivity, retention, and overall well-being. Companies that have a great developer experience should expect to see teams quickly iterating without friction, more frequent use of best practices like refactoring, more experimentation, and high-impact outcomes.

On the other side, poor developer experience leads to lower motivation, higher attrition, lower perceived productivity, and poorer business outcomes.

DX has become so important to organisations that we’re starting to see whole teams dedicated to developer experience (outside of Big Corps who have had the resources to do this for years). Two great examples are Snyk and Stripe, who both have shared a fair bit about their experiences.

But, you don’t need a dedicated team to start investigating and improving developer experience. One common way that teams begin the conversation around developer experience is by putting out a developer experience survey (you might also see them called developer satisfaction or developer engagement surveys).

This article will go deep into a few key topics to consider when creating and analysing developer satisfaction surveys.

  • Techniques to measure developer experience

  • Anonymity and demographic data collection

  • Target audience and frequency of surveys

  • Tooling

  • My survey template and sample questions

  • What to do with the results

Measuring developer experience

A survey isn’t the only way to measure developer experience, but it’s a common way to get started. This article from Abi Noda breaks down the different ways that teams can get a pulse on developer experience and satisfaction. 

Office hours, 1-1s, retrospectives, or other team meetings can give you insight into how people are feeling.

For your own measurements, the decisions you need to make are (a) how much variability you can tolerate in the answers while still getting a clear signal, and (b) how important it is to track progress over time.

For these two reasons, surveys are an attractive option. You control the question set and response types, and you can easily send out the survey multiple times to track changes in the results.

But surveys don’t often leave room for open-ended answers, meaning you might miss out on something important. When putting together a survey, I first look at the qualitative, messy data from team interactions like retrospectives or office hours. From there, my team is already pointing me in a direction, and I will adapt questions based on what those conversations have already told me. Another technique you may want to try is using a standard set of questions at first, and letting those questions open up deeper conversations. I’ll share my own question set later in this article.

Anonymous or not? What about segmenting by demographics?

Like many other employee engagement surveys, I recommend that your developer surveys be anonymous. Anonymous surveys have many advantages, the primary one being that participation is generally higher. There are some drawbacks as well; specifically, responses can be more extreme (both positive and negative) because there’s no attribution or accountability for how someone answers.

Since a developer experience survey is designed to capture emotions and sentiment, the drawback of more extreme answers isn’t a top concern for me. But, people opting out of the survey because they are afraid of retaliation would be counterproductive, which is why I make the recommendation to keep these anonymous.

Collecting demographic information might be appealing to you in order to segment responses based on things like tenure, years of experience, gender, and location. If team sizes are small, you may not be able to segment on demographic information in a way that preserves anonymity. Most engagement survey tools will only show aggregated metrics for teams with more than 3, sometimes 5, members. This ensures that answers likely can’t be attributed to individuals. With small numbers, it’s just too easy to tell who wrote something, which breaches the trust of the anonymous survey (like the anonymous surveys you filled out in grade school, not realising that your teacher knows everyone's handwriting!).
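If you roll your own survey (more on tooling below), you’ll need to enforce that kind of cutoff yourself before sharing segmented results. Here’s a minimal Python sketch of such an aggregation guard; the field names and the cutoff of 5 are illustrative assumptions, not taken from any particular tool.

from collections import defaultdict

MIN_GROUP_SIZE = 5  # assumed cutoff; tools commonly use 3-5 respondents

def aggregate_scores(responses, segment_key):
    """Average scores per segment, suppressing segments below the cutoff.

    responses: list of dicts like {"tenure": "0-1y", "score": 7}
    segment_key: demographic field to group by, e.g. "tenure"
    """
    groups = defaultdict(list)
    for response in responses:
        groups[response[segment_key]].append(response["score"])

    report = {}
    for segment, scores in groups.items():
        if len(scores) < MIN_GROUP_SIZE:
            report[segment] = "suppressed (too few responses)"
        else:
            report[segment] = round(sum(scores) / len(scores), 1)
    return report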

If you have a large enough team, collecting some demographic data in the survey might allow you to interpret the results more accurately. Keep in mind that your company may have specific guidelines about surveying employees that you’re obligated to follow. In any case, be transparent about what data is collected, if and how demographic data is collected and used, and how responses will be shared.

Picking the right tools

There’s a lot of flexibility when it comes to picking a tool to run the survey. If you’ve never run a survey before, I recommend first using Google Forms or something similar to roll your own survey. Once you’re confident the data is helpful to you and your team, you might consider buying a solution that will take care of the logistics for you. There are two options here:

  • Use your existing employee engagement survey tool, if you have one (Culture Amp, Lattice, Officevibe, etc). Many of these tools support custom surveys, so you can add your own developer experience questions and send out pulse surveys on a schedule.

  • Use a tool like DX, which is a surveying tool specifically for developer experience. DX not only handles the logistics of the surveys, but also provides you with questions and benchmarks your data against the industry.

If you don’t have an employee surveying tool yet, I wouldn’t buy a generic surveying tool just for the purpose of developer experience surveys. I’d go right to DX (or wait a few months until there are some competitors to DX on the market).

Target audience and survey frequency

You want enough time to pass between surveys to make changes based on previous responses, and for your team to notice those changes. Every 8 weeks is a good cadence for surveys, but even going up to once per quarter is fine. Running the survey twice a year or less doesn’t provide enough data to notice meaningful trends or track progress.

Beware of survey fatigue. Your team will become frustrated if the surveys are so frequent that they can’t see anything happening as a result of their participation, and engagement will drop off.

The name “developer experience survey” suggests that this is a survey for developers, which is mostly accurate. Any individual contributor within your engineering organisation is the target audience for these surveys. If you have specialised engineering functions such as data engineering, design engineering, or QA, make sure that the questions apply to them, or create a separate survey that’s tailored to their needs. 

Product managers, designers, or technical docs writers can participate in these surveys if they are cross-functional partners in development. In the survey template I’ve shared below, the questions in the Project Planning, Team Processes and Perceived Productivity categories aren’t specific to engineering, and can be answered by other members of the product development team. Remember the point about demographic information above: if you only have 1 PM and 1 designer, it will be hard to maintain their anonymity if you choose this approach. But, if team processes like sprint ceremonies are decided by a cross-functional team, it is advisable to give all of those teams a chance to give feedback on the processes before deciding to make changes.

Survey Template and Sample Questions

I’ve collected my most frequently used questions in this doc. You can copy/paste these into a survey tool of your choice.

I designed this survey using the SPACE framework as guidance, along with my experience working with over 100 leaders on team performance. Questions are grouped into categories that are relevant for most engineering teams.

These are:

  • Efficiency, Focus Time

  • Production, Monitoring

  • Deployments and Releases

  • Collaboration and Knowledge

  • Codebase Health

  • Project Planning, Team Processes

  • Tradeoffs

  • Perceived Productivity

There’s also an Overall Satisfaction category that includes broad questions about satisfaction.

While the categories were rather straightforward based on research and my own experience, writing questions is harder than it appears on the surface. Just like designing software, the way you design a survey will influence how people use it.

Anything designed by a human being is going to have a bit of bias built into it. Some ways that bias can show up in surveys like this:

  • Unequal weighting of categories

  • Complicated question structures, which can be confusing to people who don’t have English as a first language

  • Framing some categories with positive questions (“I’m satisfied with…”) and some with negative questions (“We don’t do enough…”), which can set a tone for the question and result in different responses, or confuse survey participants.

I’ve controlled for unequal weighting by targeting 3-4 questions in each category. Complicated English has been addressed by having a handful of non-native English speakers proofread the questions.

There are groups of questions that are coded negatively, meaning that the question is designed to get the respondent to agree to a negative statement. This is intentional, as a way to provide some checks and balances across all of the questions.

For example, “I feel like a lot of my time gets wasted.” This question balances out other statements like “Our team processes are efficient” and “Our meetings are effective and useful.” One interesting thing that you might notice in your own results is that your team might respond favourably to those two questions (giving the impression that meetings and processes are efficient) but then also agree that a lot of their time gets wasted. 

So, did your team agree that these processes and meetings were efficient because the question was framed positively, did they agree that time was wasted because the question was negative, or is there something other than meetings and processes (slow CI/CD?) that actually does waste their time? This is where you need to get curious, and talk about the results with your team.

Look through the questions yourself and spot the questions that go together (they’re often in different categories). This will help you when looking at your team’s results.
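If you analyse the raw responses yourself, remember to reverse-score the negatively coded questions before averaging, so that a higher number always means a better experience. Here’s a minimal sketch in Python, assuming a 1-10 agreement scale and a hand-maintained list of which questions are negatively coded (both are assumptions, not part of the template itself):

SCALE_MAX = 10  # assumed 1-10 agreement scale

# Hand-maintained set of negatively coded questions (illustrative).
NEGATIVELY_CODED = {
    "I feel like a lot of my time gets wasted.",
}

def normalise(question, score):
    # Flip negatively coded scores so higher always means better.
    if question in NEGATIVELY_CODED:
        return SCALE_MAX + 1 - score
    return score

# A respondent who strongly agrees (8) that their time gets wasted
# normalises to a low experience score of 3.
print(normalise("I feel like a lot of my time gets wasted.", 8))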

Discussing survey results and following up

Before you send out a developer experience survey, have a plan in place to use the results, and share it with your team. Being clear about what happens next will encourage people to participate, because they know they aren’t just throwing their opinions into a black hole. Plus, it helps you plan ahead to create space for both the conversations and follow-up actions that will come out of a survey like this. 

If you plan on putting out a survey just because you’re curious about the results but don’t plan on adjusting roadmaps to accommodate any new work that will come out of the survey: don’t. Your team will be frustrated, they’ll lose trust in you, and it’s a waste of time for everybody.

The results from this survey will serve as a benchmarking baseline for your team. From here, you can do two things: benchmark against yourself, or benchmark against the industry.

Benchmarking against yourselves is more accessible to most teams. With this technique, you’re looking at your team's results and tapping into the experiences of the team to figure out what interventions would have the biggest impact for them. 

Here’s what it looks like in practice: your survey goes out and results come back a week later. You share the results with your team either live or via video/slides. Then, you set up either a meeting or an async conversation for people to weigh in about what should be prioritised. You won’t be able to improve everything that comes back on the survey, but the results should point you in some direction.

For example, let’s say that your team had a very low score when it came to satisfaction with CI/CD: 3.5/10 on average. As a group, you come up with possible interventions and plan time for those projects. The next survey goes out 6 weeks later, and you’ve brought that score up to 6/10. Still a ways to go, but some good progress! (Note: If you need help here, I have a course on measuring team performance and offer private workshops on improving developer experience.)

Here are some sample results. In this case, it looks like you’ve got two groups: one that’s pretty happy with things, and one that feels pain. Just looking at average scores won’t give you that insight; you need to go a bit deeper into your analysis, and then get curious about why.
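One quick way to spot that kind of split in your own data is to look at the spread and the extremes, not just the mean. A small Python sketch with invented scores:

from statistics import mean, stdev

# Invented responses to one question: two clear camps, one happy and one not.
scores = [9, 8, 9, 8, 2, 3, 2, 3]

print(round(mean(scores), 1))   # 5.5 looks unremarkable on its own...
print(round(stdev(scores), 1))  # ...but a spread of ~3.3 hints at two camps
print(len([s for s in scores if s <= 4]), "unhappy vs",
      len([s for s in scores if s >= 7]), "happy respondents")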

Benchmarking against the industry will give you more data points when it comes to making a decision about what to prioritise. It doesn’t replace the experiences and opinions of your team, but can help you contextualise your results based on how other teams are doing. The tricky part here is that you need to have access to industry benchmarking data, which can be tough. 

DORA metrics are a widely used set of benchmarking data, but they don’t often correlate 1-1 to the questions that typically appear on developer experience surveys. But, they can still help you better understand your results. For example, if the survey question “I’m satisfied with the speed and reliability of our CI/CD tooling” comes back with a very low score, and you see that your team is a “medium” performer according to DORA (find out here), you’ll have a clear indication of what to work on.

Generally, surveying tools (both generic tools as well as specialised tools like DX) will provide benchmarking data as part of the product. I’m not aware of a better place to get developer satisfaction and developer experience benchmarking data than DX right now, unless you want to sift through hundreds of academic papers yourself.

Part of the decision to include industry benchmarking data will be financial, and part of it depends on what outcomes you’re looking for. Again, industry benchmarking data does not replace the experiences and opinions of your team; it supplements them. I’ve worked with plenty of teams who do not have access to large amounts of industry benchmarking data and rely solely on their own data, and they’ve made some great progress. Other teams have chosen to get access to industry data and have used it to make prioritisation decisions.

When you send out the next survey in 8 or 12 weeks, a goal is that team members notice changes happening based on their responses, and that the scores improve. Include this question in all future surveys: “I've noticed changes based on the feedback from the last survey.”
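When the follow-up results come in, it helps to compare category averages against your baseline explicitly rather than eyeballing raw numbers. A minimal sketch, reusing the CI/CD example above; the category names come from the template, and the scores are invented:

# Hypothetical per-category averages from two survey rounds.
baseline = {"Deployments and Releases": 3.5, "Codebase Health": 6.1}
follow_up = {"Deployments and Releases": 6.0, "Codebase Health": 5.9}

for category, before in baseline.items():
    after = follow_up[category]
    print(f"{category}: {before} -> {after} ({after - before:+.1f})")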

TL;DR

  • Surveys are a common way to get a sense of developer experience and satisfaction on your teams.

  • These surveys should be anonymous, and go out every 8-12 weeks. Adjust this based on other surveys your team might participate in.

  • These surveys are designed for individual contributor developers (no managers), but including product, design, or other functions might be appropriate.

  • Don’t collect demographic information if your team size is small. It compromises anonymity.

  • The easiest way to get started is by creating your own survey using Google Forms. You can use my questions. Later, you might consider paying for a custom tool.

  • Make a plan to share and discuss the data, and inform your team about it before the survey goes out. This encourages them to participate, and gives them confidence that their responses will influence change.

  • Discuss the results with your team and make a plan. You might consider using industry benchmark data to supplement your team’s responses.

  • Send out follow-up surveys.


Note: I mention quite a few tools in this article. I don’t do sponsored content, and I don’t receive any referral bonuses from any tools or companies that come up in this article.

Laura Tacho

Competing for Talent as a Startup

How can your tiny startup compete with a Big Corp for developer talent?

As a startup, how can we compete for talent against big companies that can pay more?

Here’s something that’s just not true: BigCorps and startups are competing for the same developers.

It might be true in a few cases, but as a universal claim, it’s a myth.

That’s because not every developer is optimising for the same things when it comes to their job.

There are people who are really drawn to startups. They want autonomy, variety, and the financial upside (and risk) of equity at an early stage. Maybe they’re interested in starting their own company one day, or they have learned that they thrive on smaller teams or in early-stage companies.

There are also people who have set their sights on working at a BigCorp (FAANG and adjacent companies). They want the credibility that comes from that company name on their CV, the higher compensation, predictable raise and promotion schedules, and access to ample resources.

Depending on the person, the upsides that I listed here might be perceived as downsides. The structure of a BigCorp might turn off some people (despite higher comp), whereas the vision and mission of an early-stage company might be meaningless without money in the bank.

Know your Employee Value Proposition

The set of unique qualities that will attract someone to your company is called your Employee Value Proposition. It answers the question “what’s in it for them?”

Gartner has broken down EVP into five main categories, summarised really nicely in this article from Gem:

  1. Opportunity: career advancement, training

  2. People: talented team, company culture, trust and transparency

  3. Organisation: company reputation, sector or market

  4. Work: interesting subject matter, work-life blend/balance

  5. Rewards: cash compensation, equity, benefits, bonuses

Touching on all of these in your reachout messages and job posts will send a strong message to your ideal candidates that this is the place for them. Similarly, people who aren’t a great fit will see the signals themselves.

Case Study: Vygo

Vygo is an edtech startup based in Australia, and I’ve been working with them for over a year. They are remote first, which means they can cast a wide net for talent. But being located in Australia can have some disadvantages when it comes to salaries and exchange rates. Australia has a startup culture, but it’s nowhere near the scale of London, Berlin, or the USA.

When Joel, co-founder and CPO, and I started working together, his hiring approach was to look at people who were currently working at the Googles and Apples of the world, and to try to sell them on the vision.

We talked about what is good about this approach: high trust in Google’s hiring process, so if someone can get hired by Google, they’re probably talented; and also that these people likely have networks of other talented people. He needed people who could come in with high horsepower and just build what needed to be built.

We also evaluated the downsides. Vygo does not have Google money. While Vygo was prepared for competitive cash compensation, bonuses and RSUs were an area where they couldn’t compete. It could be tough to convince someone to take a reduction in overall comp. Also, people who thrive at Google have resources at their disposal. Not just cash, but also systems and processes. It isn’t the same scrappy environment as a startup with 10 employees. At Vygo, there’s a lot of “go figure it out.” For engineering specifically, this might look like wearing the hats of product manager and designer throughout projects.

One of Joel’s biggest fears was losing out on great talent because Vygo couldn’t compete on overall comp. But when we got deeper into the details of crafting an EVP, it became more clear that the kind of person who wants to work at Google for the name brand and high salary isn’t necessarily the same kind of person who would be drawn to Vygo, or be ready to thrive in Vygo’s mission-driven startup environment. Instead, we could focus recruiting efforts on people who are chasing after Vygo’s EVP.

Now, Vygo’s EVP is front and centre in the job posting. Postings get little organic traffic; rather, they should be optimised for a quick glance when someone clicks on a link in a reachout message on LinkedIn or another platform.

Here’s what Vygo’s posting for a Senior Software Engineer looks like now:

Notice how the EVP components of organisation, rewards, and work are highlighted in the top bullet points, all above the fold. Then, there’s a whole section called “What’s in it for you” that further strengthens the EVP.

This job posting is a magnet. It will attract the right people, and repel others who won’t be suitable for Vygo’s size and stage.

After going through the exercise of defining an EVP and avoiding one-size-fits-all messaging, Vygo’s been able to make some fantastic hires of people who are hungry for the experience that an early-stage startup can bring.

An EVP isn’t a list of excuses not to pay people what they’re worth, or a way to feel okay about having far below market rate salaries because you have a “good company culture.” In Vygo’s case, it was a way to better represent the other upsides (financial or otherwise) of joining a company at their stage.
