The Path #4
I’ve been in consulting long enough to have learned a few self-evident truths about the corporate world:

1. No meeting should ever be scheduled after noon on Friday.
2. On every call, somebody will inevitably spend part of it talking on mute.
3. You will never get a roomful of executives to agree on what a tool or transformation program actually saved.
We could spend entire editions of this newsletter unpacking the first two - in fact, at Pathfindr we have our top people working on technology to prevent meetings from being scheduled after noon on Friday, and we have a “talking on mute” jar that offenders must contribute to after each incident. It’s going to fund our EOY get-together.
But that third one is what we’re focusing on this week.
It’s hard to get a roomful of executives to definitively agree on something, particularly when it comes to things like savings from a particular tool implementation or transformation program. There are often too many stakeholders with competing agendas, too many sources of information that can contradict each other, and lots of ins, outs, and what-have-yous.
For example, I spent a lot of time early in my career managing teams tasked with automating test cases as a way of accelerating the SDLC (Software Development Lifecycle). We would dutifully review the manual test suite to identify the best test cases to automate, spend time automating them, and execute them in place of the manual tests to speed up our next regression run. In theory, we should be able to calculate a time saving - and, if you know the going hourly rate for a manual tester, a cost saving - by comparing the time it takes to execute the test manually against the automated run. Pretty simple, right?
One would think.
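On paper, that arithmetic really is trivial. Here’s a minimal sketch of the naive calculation - every figure in it is an invented illustration, not a benchmark:

```python
# Naive test-automation savings model: compare manual vs. automated
# execution time, then convert the difference into dollars.
# All inputs below are illustrative placeholders.

MANUAL_MINUTES_PER_CASE = 12      # average hands-on time per manual test case
AUTOMATED_MINUTES_PER_CASE = 1.5  # average unattended runtime per automated case
HOURLY_RATE = 45.0                # going hourly rate for a manual tester, in dollars

def naive_savings(cases_automated: int, regression_runs_per_year: int) -> dict:
    """Annual time and cost 'saved', assuming a clean 1:1 substitution
    of automated runs for manual runs."""
    minutes_saved_per_run = cases_automated * (
        MANUAL_MINUTES_PER_CASE - AUTOMATED_MINUTES_PER_CASE
    )
    hours_saved_per_year = minutes_saved_per_run * regression_runs_per_year / 60
    return {
        "hours_saved_per_year": round(hours_saved_per_year, 1),
        "cost_saved_per_year": round(hours_saved_per_year * HOURLY_RATE, 2),
    }

print(naive_savings(cases_automated=200, regression_runs_per_year=12))
# {'hours_saved_per_year': 420.0, 'cost_saved_per_year': 18900.0}
```

The trap is the assumption baked into that model: a clean 1:1 substitution of automated runs for manual ones.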
In reality, it can be incredibly difficult for the benefits from something like test automation to be felt (or even measured) concretely. There could be a wide range of reasons for this, but here are a few:

- Nobody timed the manual runs in the first place, so there’s no baseline to compare against.
- Automation isn’t free: scripts have to be built and then maintained as the application changes, which quietly eats into the hours saved.
- Faster execution usually means more tests get run, not shorter regression cycles - the suite grows to fill the time available.
- The time that is freed up gets absorbed into other work, so it never shows up as a line item anywhere.
You can massage numbers until you’re blue in the face, but ultimately the real benefit from test automation needs to be felt at the stakeholder level. If you’re a Product Owner or Delivery Manager able to run tests faster, you can either A) execute more tests in the same time with the same size team, giving yourself more confidence in the quality of the solution, or B) execute the same number of tests with fewer people, freeing them up to work on something else. If you do either with no drop in quality, then you can KNOW you’ve achieved some kind of benefit. And if you look hard enough, you can find the data to quantify it.
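If you want to put rough numbers on those two options, a sketch like this can frame the conversation - the figures are invented, so swap in your own team’s data:

```python
# Stakeholder-level framing: express the benefit as either extra test
# coverage (option A) or freed-up tester capacity (option B).
# All inputs are hypothetical examples.

def option_a_extra_coverage(tests_per_cycle_before: int,
                            tests_per_cycle_after: int) -> float:
    """Option A: same team, same time - how much more gets tested?"""
    return (tests_per_cycle_after - tests_per_cycle_before) / tests_per_cycle_before

def option_b_capacity_freed(team_size: int,
                            cycle_hours_before: float,
                            cycle_hours_after: float) -> float:
    """Option B: same test load - how many people's worth of time is freed?"""
    return team_size * (1 - cycle_hours_after / cycle_hours_before)

print(f"Option A: {option_a_extra_coverage(300, 420):.0%} more tests per cycle")
print(f"Option B: {option_b_capacity_freed(5, 80.0, 48.0):.1f} testers freed up")
```

Either number only counts if quality held steady - the “no drop in quality” caveat is doing the heavy lifting.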
These lessons that I learned the hard way many years ago have new relevance today, when many people seem to “know” (as opposed to KNOW) that LLM-powered applications are beneficial, but hard numbers on exactly how beneficial are difficult to come by. Here are three different ways of thinking about quantifying benefits from AI:

1. Take a big-picture view. Measure the outcomes the business actually cares about - cycle time, throughput, quality - rather than tool-level metrics that only the team running the tool understands.
2. Treat benefit claims as hypotheses. Run a before-and-after comparison or a side-by-side experiment, and let the results tell you whether the improvement is real.
3. Hitch your wagon to numbers you believe in. Agree on a small set of metrics with your stakeholders up front and track them consistently - an imperfect number you trust beats a precise one nobody accepts.
As AI gets more user-friendly, and as teams’ workflows adjust to incorporate it more fully, it will become increasingly easy to quantify benefit, because the AI will live inside the systems you already measure. You won’t need to pull data out of a different system to see that before you updated your processes to replace manual bottlenecks with AI solutions, your team produced X, whereas now they produce Y. Until then, I encourage you to take a big-picture view of benefits, and don’t be afraid to test hypotheses, experiment, and hitch your wagon to numbers you believe in.
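In the meantime, “test hypotheses” can be as lightweight as a before-and-after comparison on an output metric you trust. Here’s a minimal sketch using a simple permutation test - the weekly output figures are invented, and the metric itself is whatever your team actually counts:

```python
import random

# Before/after experiment on a team output metric (e.g., items shipped
# per week). A permutation test asks: could a lift this large plausibly
# have happened by chance? All data below is invented.

before = [21, 19, 23, 20, 22, 18, 21, 20]  # weekly output, pre-AI workflow
after = [25, 27, 24, 26, 28, 23, 26, 25]   # weekly output, post-AI workflow

observed_lift = sum(after) / len(after) - sum(before) / len(before)

random.seed(0)
pooled = before + after
hits = 0
TRIALS = 10_000
for _ in range(TRIALS):
    random.shuffle(pooled)
    shuffled_lift = (sum(pooled[len(before):]) / len(after)
                     - sum(pooled[:len(before)]) / len(before))
    if shuffled_lift >= observed_lift:
        hits += 1

print(f"Observed lift: {observed_lift:.1f} items/week")
print(f"Permutation p-value: {hits / TRIALS:.4f}")  # small value = unlikely to be luck
```

If the lift holds up and quality didn’t drop, that’s a number worth hitching your wagon to.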