I've never encountered cycle time recommended as a metric for evaluating individual developer productivity, making the central premise of this article rather misguided.
The primary value of measuring cycle time is precisely that it captures end-to-end process inefficiencies, variability, and bottlenecks, rather than individual effort. This systemic perspective is fundamental in Kanban methodology, where cycle time and its variance are commonly used to forecast delivery timelines.
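The Kanban-style forecasting mentioned above is often done with a Monte Carlo simulation over historical throughput. A minimal sketch (the throughput numbers below are invented, not from any real team):

```python
import random

# Hypothetical daily throughput samples (tickets finished per day),
# drawn from a team's history -- these numbers are invented.
daily_throughput = [0, 0, 1, 1, 1, 2, 2, 3, 0, 1]

def days_to_finish(n_items, trials=10_000, percentile=0.85, seed=1):
    """Monte Carlo forecast: replay randomly sampled historical days
    until n_items are done; report the day count that `percentile`
    of trials come in under."""
    rng = random.Random(seed)
    results = []
    for _ in range(trials):
        done, days = 0, 0
        while done < n_items:
            done += rng.choice(daily_throughput)
            days += 1
        results.append(days)
    results.sort()
    return results[int(percentile * trials) - 1]

print(days_to_finish(10))  # "10 tickets done within N days, 85% of the time"
```

Note the forecast is about the system's delivery rate as a whole; no individual developer appears anywhere in it.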
> The primary value of measuring cycle time is precisely that it captures end-to-end process inefficiencies, variability, and bottlenecks, rather than individual effort
Yes! Waiting for responses from colleagues, slow CI pipelines, inefficient local dev processes, other teams constantly breaking things that affect you, someone changing JIRA yet again, someone's calendar being full, stakeholders not being available to clear up questions about requirements, poor internal documentation, spiraling testing complexity due to microservices, etc. The list is endless.
It's borderline cruel to take cycle time and use it to measure and judge the developer alone.
> We analyze cycle time, a widely-used metric measuring time from ticket creation to completion, using a dataset of over 55,000 observations across 216 organizations. [...] We find precise but modest associations between cycle time and factors including coding days per week, number of merged pull requests, and degree of collaboration. However, these effects are set against considerable unexplained variation both between and within individuals.
Eh, starting the clock at ticket creation is likely less useful than starting when the ticket is moved to an in-progress state. Lots of reasons a ticket can sit in a backlog.
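To make that distinction concrete, here's a sketch of computing cycle time from a ticket's status-transition history instead of its creation date (the status names and event shape are hypothetical, not any particular tracker's API):

```python
from datetime import datetime

def cycle_time_days(events, start_status="In Progress", end_status="Done"):
    """events: chronological (timestamp, status) transitions for one ticket.
    Starts the clock at the first move to start_status rather than at
    ticket creation, so backlog idle time is excluded."""
    start = next(t for t, s in events if s == start_status)
    end = next(t for t, s in events if s == end_status)
    return (end - start).days

# A ticket that sat in the backlog for a month but took 3 days of work:
ticket = [
    (datetime(2024, 1, 1), "Created"),
    (datetime(2024, 2, 1), "In Progress"),
    (datetime(2024, 2, 4), "Done"),
]
print(cycle_time_days(ticket))  # 3, not 34
```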
Cycle time is important, but there are three problems with it. First, like many other factors, it is just a proxy variable in the total cost equation. Second, cycle time is a lagging indicator, so it gives you limited foresight into the systemic control levers at your disposal. And third, queue size plays a larger causal role in downstream economic problems with products. This is why you should always consider your queue size before your cycle time.
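The queue-size point is essentially Little's Law: average cycle time equals average work-in-progress divided by average throughput, so queue growth shows up later as cycle-time growth. A sketch with invented numbers:

```python
def avg_cycle_time_days(avg_wip, throughput_per_day):
    """Little's Law: time in system = items in system / completion rate."""
    return avg_wip / throughput_per_day

# Same team, same throughput (3 tickets/day), different queue discipline:
print(avg_cycle_time_days(6, 3))   # 2.0 days with a short queue
print(avg_cycle_time_days(30, 3))  # 10.0 days once the queue balloons
```

Since the throughput didn't change, managing the queue is the lever; the cycle-time number only reports the damage after the fact.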
I didn't see these talked about much in the paper at a glance. Highly recommend Reinertsen's The Principles of Product Development Flow here instead.
How can something like "The Principles of Product Development Flow" be applied to software development when every item has a different size and quality from every other item?
The book has a chapter about how to optimize variability in the product development process. The key idea is that variability is not inherently good nor bad, we just care about the economic cost of variability. There are lots of asymmetries in the payoff functions in the software context, so the area is ripe for optimization, and that means sometimes you'll want to increase variability to increase profit. But if we're mostly concerned that software development is too variable, there are lots of ways to decrease it, like pooling demand-variable roles, cross-training employees on sequentially adjacent parts of the product lifecycle, implementing single high-capacity queues, etc.
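The "single high-capacity queue" idea can be checked with a toy simulation: same total workload, same two workers, one shared queue versus private per-worker queues (arrival and service rates here are invented, and the exponential assumptions are a simplification):

```python
import random

def mean_wait(n_jobs, n_servers=2, pooled=True, lam=1.8, mu=1.0, seed=7):
    """Toy queueing sim. Jobs arrive in one Poisson stream (rate lam);
    each of n_servers works at rate mu. pooled=True feeds everyone from
    one shared FCFS queue (next job goes to whichever server frees up
    first); pooled=False assigns each job to a random server's private
    queue on arrival."""
    rng = random.Random(seed)
    t = 0.0
    arrivals = []
    for _ in range(n_jobs):
        t += rng.expovariate(lam)
        arrivals.append(t)
    free_at = [0.0] * n_servers   # when each server next goes idle
    waits = []
    for a in arrivals:
        if pooled:
            s = min(range(n_servers), key=free_at.__getitem__)
        else:
            s = rng.randrange(n_servers)
        start = max(a, free_at[s])
        waits.append(start - a)
        free_at[s] = start + rng.expovariate(mu)
    return sum(waits) / len(waits)

print(mean_wait(50_000, pooled=True))   # shared queue: shorter mean wait
print(mean_wait(50_000, pooled=False))  # private queues: longer mean wait
```

Pooling wins because a private queue can pile up while the other worker sits idle; the shared queue never lets that happen.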
My current org can have cycle times on the order of a year. Embedded dev work on a limited release cadence, where the Jira (et al.) workflow is sub-optimal and tickets don't get reassigned, only tested, destroys metrics of this nature.
If this research is aimed at web-dev, sure I get it. I only read the intro. Software happens outside of webdev a lot, like a whole lot.
A thank-you to the HN commenter who told me to multiply my estimates by Pi.
When I want the recipient to take it seriously, I actually multiply by 3.
What I can't understand is why my intuitive guess is always wrong. Even when I break down the parts (GUI is 3 hours, the algorithm is 20 hours, getting some important value is 5 hours), why does it end up taking 75 hours?
Sometimes I finish within ~1.5x my original intuitive time, but that is rare.
I even had a large project which I threw around the 3x number, not entirely being serious that it would take that long... and it did.
Because "GUI" and "Algorithm" are too big. You have to further decompose into small tasks which you can actually estimate. An estimable composition for a GUI task might be something like:
* Research scrollbar implementation options (note: time-box to x hours).
* Determine number of lines in document.
* Add scrollbar to primary pane.
* Determine number of lines presentable based on current window size.
* Determine number of lines in document currently visible.
* Hide scrollbar when number of displayed lines < document size.
* Verify behavior when we reach the end of the document.
* Verify behavior when we scroll to the top.
When you decompose a task it's also important to figure out which steps you don't understand well enough to estimate. The unpredictability of these steps is what blows your estimate, and the more of them your estimate contains, the less reliable it will be.
If it's really important to produce an accurate estimate, then you have to figure out the details of these unknowns before you begin the project.
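One way to see why summed point estimates still blow up (the 3 + 20 + 5 = 28 hours that becomes 75): treat each subtask as a right-skewed distribution whose spread reflects how poorly you understand it, and simulate the total. The tasks, hours, and uncertainty factors below are invented for illustration:

```python
import math
import random

# (task, best-guess hours, uncertainty factor): factor ~1.2 means well
# understood, ~3.0 means "never done this part before".
subtasks = [("GUI", 3, 1.5), ("Algorithm", 20, 3.0), ("Key value", 5, 2.0)]

def simulate_total(trials=20_000, seed=3):
    """Monte Carlo the project total, modeling each subtask as a
    lognormal whose median equals the best guess."""
    rng = random.Random(seed)
    totals = []
    for _ in range(trials):
        total = 0.0
        for _name, guess, factor in subtasks:
            # Bigger factor = fatter right tail above the guess.
            total += guess * rng.lognormvariate(0, math.log(factor))
        totals.append(total)
    totals.sort()
    return totals[len(totals) // 2], totals[int(0.9 * trials)]

median, p90 = simulate_total()
print(f"naive sum: 28h, median: {median:.0f}h, 90th percentile: {p90:.0f}h")
```

The median lands near the naive sum, but the 90th percentile runs several times higher, dominated by the least-understood subtask; that tail is roughly where the "multiply by 3" folk rule comes from.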
> We analyze cycle time, a widely-used metric measuring time from ticket creation to completion, using a dataset of over 55,000 observations across 216 organizations. [...] We find precise but modest associations between cycle time and factors including coding days per week, number of merged pull requests, and degree of collaboration. However, these effects are set against considerable unexplained variation both between and within individuals.
We also need more info on what tasks are being performed and whether they are significant.
Hofstadter's law. :-)
https://en.wikipedia.org/wiki/Hofstadter%27s_law