Why Does Scaling Automation Require Financial Rigor?

With a formidable background in applying financial discipline to technology initiatives, Greg Holmes has a unique perspective on a common, yet costly, problem: why so many promising intelligent automation projects fail to scale. As the Field CTO for EMEA at Apptio, an IBM company, he has seen firsthand how the initial excitement of a successful pilot can curdle into financial disappointment when faced with the realities of enterprise-wide deployment. Today, we delve into his philosophy of embedding financial rigor directly into the technology lifecycle. Our conversation will explore the critical shift from measuring saved labor hours to understanding true unit economics, the practicalities of integrating financial governance into development pipelines, and how frameworks like Technology Business Management (TBM) can finally bridge the long-standing divide between finance and technology leaders. We will also touch upon navigating the complexities of legacy systems and building budgets that foster sustainable, long-term innovation.

Many automation pilots show impressive results, like saving 100 hours a month, but then fail to scale. How can leaders move beyond simple labor metrics to track crucial unit economics, such as cost per transaction, right from the pilot phase? Please elaborate on the first few steps.

It’s a classic story, and I see it play out all the time. A team presents a pilot that saves 100 hours, and the boardroom erupts in applause. But this initial celebration is often a trap because it masks a flawed foundation. The first, most crucial step is to change the very definition of success right from the beginning. Instead of just asking, “How many hours did we save?” leaders need to ask, “What is the marginal cost to operate this at scale?” This means you must immediately start tracking unit economics—the cost per transaction, the cost per API call, the cost per customer served. The pilot environment is often a bubble; we see teams running on over-provisioned infrastructure because they want to guarantee performance, which makes the pilot look fantastic. But you would never deploy to production with that level of waste. The moment you move to a real-world environment, the financial calculus completely changes as API calls multiply and support overheads grow, and that’s where these projects die.
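
To make the shift from hours saved to unit economics concrete, here is a minimal sketch of the kind of cost-per-transaction tracking Holmes describes. All figures, rates, and field names are hypothetical placeholders for an organization's own telemetry; the point is that infrastructure, API, and support costs roll up into a single per-unit number from the pilot phase onwards.

```python
# Minimal sketch of unit-economics tracking for an automation initiative.
# All figures and field names are hypothetical; substitute your own telemetry.

from dataclasses import dataclass

@dataclass
class PeriodCosts:
    compute: float         # infrastructure spend for the period
    api_calls: int         # billable API calls made by the automation
    api_unit_price: float  # price per API call
    support_hours: float   # human support/maintenance effort
    support_rate: float    # loaded hourly rate for support staff
    transactions: int      # units of work completed in the period

    def total_cost(self) -> float:
        return (self.compute
                + self.api_calls * self.api_unit_price
                + self.support_hours * self.support_rate)

    def cost_per_transaction(self) -> float:
        return self.total_cost() / self.transactions

# Pilot: small volume running on over-provisioned infrastructure
pilot = PeriodCosts(compute=2_000, api_calls=50_000, api_unit_price=0.002,
                    support_hours=10, support_rate=80, transactions=5_000)

# Projected production: 20x the transactions, with API calls and support
# overhead growing alongside the volume
production = PeriodCosts(compute=6_000, api_calls=2_000_000, api_unit_price=0.002,
                         support_hours=200, support_rate=80, transactions=100_000)

print(f"Pilot cost per transaction:      {pilot.cost_per_transaction():.3f}")
print(f"Production cost per transaction: {production.cost_per_transaction():.3f}")
```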

You advocate for shifting financial governance into development tools. How can an organization practically integrate cost estimation and policy enforcement into its infrastructure-as-code pipelines? What cultural shifts are required to make engineers embrace this financial accountability without slowing down their work?

This is about moving from a reactive to a proactive mindset. Instead of finance chasing down costs months after the fact, you embed cost visibility directly into the tools your engineers use every single day. Practically, this means integrating financial management platforms with infrastructure-as-code tools like HashiCorp Terraform or into the pull-request process in GitHub. When an engineer goes to spin up new resources, the pipeline can automatically generate a cost estimate right there before anything is deployed. You can even build in policies that flag or block deployments that are outside budget parameters. The cultural shift is about empowerment, not punishment. You’re not trying to turn engineers into accountants; you’re giving them the data to make better architectural decisions. It transforms the conversation from “Why did you spend so much?” to “How can we build this more efficiently?” It gets you out of that frustrating “whack-a-mole” game of fixing overspending after it happens and makes engineers true partners in value engineering, which most of them find far more rewarding.
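
As an illustration of the kind of budget gate Holmes describes, here is a minimal sketch of a check a CI pipeline could run after a cost-estimation step has analyzed a Terraform plan or pull request. The estimate.json file, its monthly_cost_increase field, and the policy thresholds are all assumptions; adapt them to whatever estimation tool your pipeline actually uses.

```python
# budget_gate.py - minimal sketch of a pre-deployment cost gate for a CI pipeline.
# Assumes an upstream step has written a cost estimate to estimate.json; the JSON
# shape shown here is hypothetical.

import json
import sys

MONTHLY_BUDGET_DELTA = 500.0   # policy: a single change may add at most $500/month
WARN_THRESHOLD = 0.8           # warn the engineer at 80% of the limit

def main(path: str = "estimate.json") -> int:
    with open(path) as f:
        estimate = json.load(f)

    # Hypothetical field: projected monthly cost increase introduced by this change
    delta = float(estimate["monthly_cost_increase"])

    if delta > MONTHLY_BUDGET_DELTA:
        print(f"BLOCKED: change adds ${delta:,.2f}/month, "
              f"over the ${MONTHLY_BUDGET_DELTA:,.2f} policy limit.")
        return 1  # non-zero exit fails the pipeline before anything is deployed
    if delta > MONTHLY_BUDGET_DELTA * WARN_THRESHOLD:
        print(f"WARNING: change adds ${delta:,.2f}/month, "
              f"approaching the ${MONTHLY_BUDGET_DELTA:,.2f} limit.")
    else:
        print(f"OK: change adds ${delta:,.2f}/month.")
    return 0

if __name__ == "__main__":
    sys.exit(main(sys.argv[1] if len(sys.argv) > 1 else "estimate.json"))
```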

Tension often exists between a CFO focused on ROI and a Head of Automation tracking operational metrics. How exactly does a framework like Technology Business Management (TBM) create a common language? Could you walk me through an example of translating technical inputs into a business-centric output?

That tension is absolutely real, and it’s because the two leaders are speaking completely different languages. The Head of Automation talks about process efficiency, while the CFO is looking at the P&L statement. TBM acts as the universal translator between them. Think of it as a standardized dictionary for IT costs. For example, the Head of Automation might report a successful project that involves new servers, more storage, and specific software licenses. To the CFO, that’s just a list of expenses. Using the TBM taxonomy, we map those technical resources—like compute and storage—into standardized IT towers. Then, we map those towers to the actual business capabilities they support, like “Customer Relationship Management” or “Online Sales Platform.” So, instead of a confusing technical bill, the business leader gets a clear statement showing them the total cost of their service consumption. They may not know what goes into all the IT layers, but they can see exactly how their consumption is driving costs and can make informed decisions about value.
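
The roll-up Holmes walks through can be sketched in a few lines. The resource names, tower mapping, and allocation percentages below are illustrative only, not the official TBM taxonomy; the shape of the calculation, from resources to towers to business capabilities, is the point.

```python
# Illustrative TBM-style cost roll-up: technical resources map to IT towers,
# towers are allocated to business capabilities, and costs are summed per
# capability. All names and percentages are hypothetical.

from collections import defaultdict

# Raw monthly spend by technical resource (what the Head of Automation reports)
resource_costs = {
    "server_fleet":   12_000,
    "block_storage":   3_000,
    "crm_licenses":    8_000,
    "rpa_platform":    5_000,
}

# Step 1: map each resource to an IT tower
resource_to_tower = {
    "server_fleet":  "Compute",
    "block_storage": "Storage",
    "crm_licenses":  "Application",
    "rpa_platform":  "Application",
}

# Step 2: allocate each tower to the business capabilities it supports
tower_to_capabilities = {
    "Compute":     {"Customer Relationship Management": 0.6, "Online Sales Platform": 0.4},
    "Storage":     {"Customer Relationship Management": 0.5, "Online Sales Platform": 0.5},
    "Application": {"Customer Relationship Management": 1.0},
}

tower_costs = defaultdict(float)
for resource, cost in resource_costs.items():
    tower_costs[resource_to_tower[resource]] += cost

capability_costs = defaultdict(float)
for tower, cost in tower_costs.items():
    for capability, share in tower_to_capabilities[tower].items():
        capability_costs[capability] += cost * share

# What the business leader sees: the cost of the services they consume
for capability, cost in sorted(capability_costs.items()):
    print(f"{capability}: ${cost:,.0f}/month")
```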

When dealing with legacy systems, companies face a choice: use automation as a patch or as a bridge to modernization. What key factors in a Total Cost of Ownership (TCO) analysis help a leader decide when to maintain an old system versus when the cost of automation “wrappers” justifies replacement?

This is a critical decision point where a proper TCO analysis is indispensable. It forces you to look beyond the obvious. The key is to uncover all the hidden costs. You can’t just look at the maintenance contract for the old ERP system. You have to quantify the engineering time spent building and maintaining the automation “wrappers” needed to keep it functional in a modern ecosystem. How much labor is spent on manual workarounds? What is the infrastructure cost for all these extra layers? I remember the Commonwealth Bank of Australia did this brilliantly across 2,000 applications. Sometimes, the analysis shows a legacy system is incredibly valuable and worth maintaining. But in other cases, when you add up the TCO of all the automation layers, the band-aids, and the patches, you have this sudden moment of clarity. You realize the true cost of keeping that old system alive is enormous, and you’re just building up more technical debt by trying to mask inefficient processes instead of redesigning them.
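
A back-of-the-envelope version of that TCO comparison might look like the sketch below. Every figure is hypothetical; what matters is that the wrapper engineering, manual workarounds, and extra infrastructure layers Holmes mentions appear as explicit line items alongside the maintenance contract.

```python
# Hypothetical 5-year TCO comparison: keep the legacy system plus its automation
# wrappers, or replace it. The numbers are placeholders; the cost categories are
# what the analysis needs to surface.

YEARS = 5

keep_legacy = {
    "maintenance_contract":  120_000 * YEARS,
    "wrapper_engineering":    90_000 * YEARS,  # building and maintaining wrappers
    "manual_workarounds":     60_000 * YEARS,  # labor spent on exception handling
    "extra_infrastructure":   30_000 * YEARS,  # middleware and integration layers
}

replace = {
    "migration_project":        450_000,       # one-off replacement cost
    "new_platform_subscription": 80_000 * YEARS,
    "run_and_support":           40_000 * YEARS,
}

keep_total = sum(keep_legacy.values())
replace_total = sum(replace.values())

print(f"Keep legacy ({YEARS}-year TCO): ${keep_total:,.0f}")
print(f"Replace     ({YEARS}-year TCO): ${replace_total:,.0f}")
```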

Balancing flexible operational expenses with long-term financial commitments is a major challenge. What practical advice can you offer for building a budget that supports scalable automation? How do you decide when to commit to a platform for several years to achieve economies of scale?

The allure of pure OPEX and its flexibility can be deceptive; it often leads to wild fluctuations in spending that can derail a transformation strategy. The most effective approach is a hybrid one. You need tight, real-time management of your variable costs, using the FinOps principles we discussed to ensure engineering efficiency. But you must pair that with strategic, longer-term commitments. You decide when to make that multi-year commitment to a platform when you have a clear, long-term architectural vision. By standardizing on specific platforms, you not only gain significant economies of scale through better negotiation, but you also make it fundamentally easier for your teams to build the right things for the future. This long-term visibility gives you stability. It creates a financial foundation that is resilient enough to support true, scalable innovation without the constant sticker shock that kills so many ambitious projects.
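
One way to ground the commit-or-stay-flexible decision is a simple break-even check: at what utilization does a discounted multi-year commitment beat pay-as-you-go pricing? The discount and volumes below are illustrative assumptions only.

```python
# Break-even sketch for a multi-year platform commitment versus pay-as-you-go.
# Rates, discount, and committed volume are illustrative assumptions.

on_demand_rate = 1.00       # normalized cost per unit of usage, pay-as-you-go
commitment_discount = 0.35  # e.g. a 3-year commitment priced 35% below on-demand
committed_units = 100_000   # units per year you pay for whether used or not

committed_rate = on_demand_rate * (1 - commitment_discount)
committed_annual_cost = committed_units * committed_rate

# The commitment pays off once actual usage exceeds this break-even volume
break_even_units = committed_annual_cost / on_demand_rate
print(f"Break-even usage: {break_even_units:,.0f} units/year "
      f"({break_even_units / committed_units:.0%} of the committed volume)")
```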

What is your forecast for the intersection of FinOps and intelligent automation over the next five years?

Over the next five years, I believe we will see the line between FinOps and intelligent automation completely dissolve. They will no longer be two separate disciplines but a single, integrated function focused on autonomous value optimization. We’re moving beyond just giving developers cost visibility. The next evolution will be AI-driven systems that not only forecast cloud spend but also autonomously execute optimizations based on real-time performance and cost data. Imagine an automation platform that doesn’t just run a process but constantly refactors itself to run on the most cost-effective infrastructure, or AI that can predict budget overruns weeks in advance and suggest specific architectural changes to prevent them. This fusion will shift the role of technology leaders from managing costs to curating a portfolio of self-optimizing, value-generating digital assets, making the enterprise truly adaptive and financially resilient.
