Saying that you can’t measure productivity is a pseudo-truism and a cop-out from doing your job.
Asking how you measure productivity is the same as asking how you decide whom to promote, whom to fire, how to distribute bonuses, and so on.
If you can’t measure productivity, you can’t do your job as an engineering manager. It’s not a question that should have been asked 3 months into a job. It’s a question that should have been asked during the hiring interview.
OP is an example of how AI-generated images are usually clutter. Not only do the images add nothing meaningful to the text, and arbitrary parts of them could be deleted or randomized without affecting the reader's understanding, but most of them could be randomly shuffled without anyone noticing. (Which makes them worse than clipart/stockart: if an article swapped the 'hacker hoodie' stockart with the 'neural net brain circuit' stockart, some readers would at least briefly be confused.)
I appreciate that they have a programming philosophy that they want people at the company to adopt. A common problem I see at companies without onboarding is that people join the team with assumptions from previous jobs, and no one ever level-sets them with the company's way of doing things. So 12 months down the line the new guy wants to change the process, and you have to repeat the same discussion about what agile means for the nth time.
Amazon does a good job of training new hires on the 'Amazon way'. Amazon does 6-pagers; they do design docs. Amazon does SOA. Amazon does not use relational databases. Everything has an API. Because of the 'Amazon way' and the training they do, new team members understand at least some of the context and expectations.
Is it the best way? Probably not, but no one knows what the best way is anyway. At least they have a way. It saves a lot of effort compared to every new hire relitigating the process and architecture.
> Amazon does not use relational databases
Huh?
Relational databases are not the preferred storage mechanism at Amazon. If a team wants to use an OLTP relational database, it's a decision they will likely need to defend at any kind of design review.
Of course there are relational databases running OLTP workloads, but it's far from the norm. There was a program a while ago to shift many RDBMS systems onto something else.
Friends say they typically use Dynamo, and that using a relational database requires approval from a VP (because of scaling concerns).
What an amazing article on "de-FAANGing" the perverse org/incentive structure of most startup/tech places. Would love to see more of this type of leadership in the real world.
I like how he says he doesn't need FAANG level people. Then his next paragraph describes working at FAANG.
"We’re an inverted organization. That means that tactical decisions are made by the people who are doing the work, not managers. (In theory, anyway, we’re not perfect.) So we’re looking for people who have peer leadership skills, who are great at teamwork, who will take ownership and make decisions on their own."
Exactly right. People who have “leadership” skills are the ones who pay attention to their own advancement and manage up more than anything else.
They usually repackage the work of people around them as their own, take ownership and loudly defend their territory (project ownership), and methodically build relationships with leadership. Having “leadership skills” and being good at teamwork are often orthogonal to each other.
The recent book The NVIDIA Way is about how that org's culture prevents FAANG-style incentives from creeping in and destroying productivity.
There’s a weird disconnect here: on the one hand, I agree you can’t measure productivity, and on the other hand, we all know that some engineers are vastly more productive than others. So what gives?
We all "know" that, but there are also some engineers that only give a very strong illusion of being more productive.
Maybe engineer #1 is constantly pushing up code. In the time it takes them to merge 15 PRs, engineer #2 opens only 1 - but maybe they thought really deeply about the problem, and their approach actually saves the team hundreds or thousands of hours of future development work vs how engineer #1 would have solved the problem.
Part of what makes this so hard to measure is the long-tail effects of development decisions. (Incidentally, that's also a source of burnout for me - the constant mental overhead of worrying about the long-term implications of what I'm doing, and particularly how they affect other people. It's very challenging.)
The engineers are only productive because they have the support structure in place.
The most productive fpga engineer I ever hired was so hopeless with git that I had to hire a second software engineer to babysit him.
After I left, both of them got fired, and the product that was ahead of schedule when I left had slipped 2 years behind before it finally got cancelled three years later.
Those are two different concepts hiding in similar words. You can't [numerically or precisely] measure productivity, but some engineers are vastly more productive [such that you can easily tell the difference without a formal measurement].
Gut feeling uses all your internal predispositions and biases.
You can measure productivity with correlated metrics. The issue has always been that the metrics which are easy to track don't line up incentives with the actual business goals. A group of 10 people who write 200k loc per year are probably more productive than a group of 10 people who write 10k loc per year. But if you took those metrics and then investigated the people in your company writing 10k loc, you might find that they are slackers - or that they write assembly.
The issue is when metrics are used to stack rank teams with no thinking put into it. You can't treat correlated metrics like direct metrics. A logger might be evaluated based on how many trees he cut down in a day. There is no comparable way to pay software engineers piecemeal.
Metrics are good, but people want to use them without thinking or taking context into consideration.
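The "correlated metrics" idea above can be made concrete with a deliberately naive sketch: tally lines added per author, as you might parse them from `git log --numstat` output. The names and numbers here are hypothetical; the point is that such a proxy is trivial to compute and just as trivial to game.

```python
from collections import defaultdict

def loc_added_per_author(commits):
    """Sum lines added per author.

    `commits` is an iterable of (author, added, deleted) tuples, as you
    might parse from `git log --format='%an' --numstat` output.  This is
    a *correlated* metric only: it says nothing about code quality, and
    it rewards verbosity over careful design.
    """
    totals = defaultdict(int)
    for author, added, _deleted in commits:
        totals[author] += added
    return dict(totals)

# Hypothetical example: engineer "pat" adds far more lines than "sam",
# but the numbers alone can't say who was more productive.
log = [("pat", 400, 20), ("sam", 30, 5), ("pat", 250, 10)]
print(loc_added_per_author(log))  # {'pat': 650, 'sam': 30}
```

Even this tidy proxy inverts as soon as someone optimizes for it, which is exactly the comment's point about using correlated metrics without context.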
Or you may find they write higher-quality code - fewer bugs, more performant code, and so on.
It's an extremely complex mixture of many factors (which can vary wildly between two different productive engineers), and trying to make that into some magical formula ends up creating a system that can be gamed to superficially appear productive to managers.
You can measure productivity by measuring success, but that's kind of useless for day-to-day software engineering management.
I tend to go by results, and for me, "results" means shipped* code that is used and accepted by end users**, can be maintained and extended***, and doesn't generate trouble tickets.
* MVP doesn't count.
** Can include users inside the organization.
*** It's OK if it requires senior-level ongoing support. I think expecting it to be maintained by monkeys is a bad idea.
To me, "MVP doesn't count" feels like a crazy take -- in many roles, the _only_ ask is to produce a series of different MVP's. I guess maybe the definition of "MVP" is a bit squishy, and these people-who-ship-MVPs themselves make MVP-MVP's, which shouldn't count as shipped?
I spent most of my career shipping finished product, which, in many cases, probably could have benefitted from an MVP-like "tuning phase," but we called that "beta." I think MVP generates more useful feedback, but I really don't like thinking of an MVP as "shipping software."
I also worked for hardware companies, where shipping stuff had some pretty serious stakes, and learned how to make sure we got it as good as possible, before getting it out the door.
I like the idea of evolutionary design, and "tuning," but I think it's a bad idea (for me) to deliberately ship bad software as an end-product.
(Also, MVP, by definition, generates lots of trouble tickets. I am allergic to trouble tickets. It's totally a personal thing, but I live by it).
How do you define success? If a product bombs, is that because of the engineering or the product design?
I don't think it's possible to answer generally. Track what matters for your business.
a weird disconnect... of any true innovation or even reality... such vague, objectionable blandness...
"They'd beg to work for us" - what the f8ck... If they were the best, they would not beg anyone; how degrading. They would be there for a mission, or wanting to improve something about themselves or other parts of the world.
There's nothing here apart from Agile coach wanting to get some more work.
1984 was published in 1949. If anyone thinks these words/values really mean what is written here, wow. People, Internal Quality, Lovability, Visibility, Agility, Profitability...
A great post, well worth reading. The principles in the section on 'people' are applicable to any organisation in any industry.
I especially liked the simple 'career ladder' example, for a) focussing mostly on behaviour rather than knowledge, and b) being simple to use and track progress with. (I've never seen anything like it in any of the large organisations I've worked in to date.)
> There’s more details here than I can explain today, but you can use the QR code to find a detailed article, including the documentation we use for the skills.
Why not just provide a clickable link given this is an article on the web?
> This is a transcript of my keynote presentation for the Regional Scrum Gathering Tokyo conference on January 8th, 2025.
Because the images are slides from a presentation that the audience could scan.
>Thank you for listening.
The text of the article appears to be the "talk-over."
+1 for Extreme Programming. I've been a fan from the beginning when Agile was all the rage and my recommendations for XP were met with blank stares.
I'm glad that it works for some people, but I did not like the forced pair programming in XP at all. And I found adherents to XP were even more cult like than Agile teams.
Do XP and pair programming actually require two people to be simultaneously working together at the same time? My understanding is that it can include one person who codes while another person looks at the results and reviews them afterward. The two are still working closely together and exchanging feedback, just at different points in the process in an iterative loop.
My understanding is that the original meaning was that pair programming requires the pair to work together at the same desk and machine.
Obviously, with the ability to share screens/IDEs remotely, the "same desk" part may have shifted, but working together in real time is intrinsic to pair programming, I believe.
The original text went into some detail about making the desk work for 2 people, and having screwdrivers available to do so, which for some reason always amused me.
That was a breath of fresh air. Thank you James.
This was a really great read, lots of insight and things to think about.
But it's also depressing to see how good things could be and how poorly (IME) most orgs are run now. I know I've seen the exact 180-degree opposite of almost everything mentioned here: no team leadership or empowered people, no clear path to the next level for those interested, lack of communication, no emphasis on internal quality, overall pathological product choices (or lack thereof) and on and on. I'd kill to be part of an org that puts this much thought into everything.
This is a very nice post - not because the actual suggestions are good, but it demonstrates what a really technically sound VP looks like.
In most large tech companies, VP level people are so detached, delusional, and unskilled in engineering, that they end up undervaluing what engineers really do. They are unable to explain it beyond stack ranking them.
As an example, this post talks about how simplicity and maintenance brings value. But my VP literally fired people who did not produce new complex impact.
Just goes to show why so many employees hate the big tech industry. It is being run by charlatans who shirk any real leadership.