Cloud budget overruns don’t have a singular cause. Instead, they come in a bright rainbow of Jelly Belly flavours (the Bertie Botts ones, especially, will combine into a non-mouth-watering delight). Each needs a different form of response.
Ungoverned costs.
This is the black liquorice of FinOps problems. The organization has no real idea what it’s spending, much less where the money is going, beyond the big bills (or often, many little credit card bills) it pays each month. This requires basic cost hygiene: analyse your cloud bills, get a cost management tool in place, and make it useful through some tagging or account-partitioning discipline.
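To make that tagging discipline concrete, here’s a minimal sketch (Python with boto3, assuming AWS; the required tag key is hypothetical) of the first report most cost teams build: which resources aren’t carrying a cost-allocation tag at all.

```python
import boto3

REQUIRED_TAG = "cost-centre"  # hypothetical key; use whatever your tagging standard mandates

def untagged_resources():
    """Yield ARNs of resources missing the required cost-allocation tag.

    Caveat: the Resource Groups Tagging API covers resources that are (or
    were) tagged; never-tagged resources may also need per-service listing.
    """
    client = boto3.client("resourcegroupstaggingapi")
    for page in client.get_paginator("get_resources").paginate():
        for mapping in page["ResourceTagMappingList"]:
            if REQUIRED_TAG not in {tag["Key"] for tag in mapping["Tags"]}:
                yield mapping["ResourceARN"]

if __name__ == "__main__":
    for arn in untagged_resources():
        print(arn)
```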
Unanticipated usage.
This is the sour watermelon flavour of cost overruns — deliciously sweet yet mouth-puckering. In this situation, the organization is the victim of its own cloud success. Cloud has been such a great thing for the organization that more and more unanticipated cloud projects are showing up, blowing out the original budget estimates for cloud resources.
Those cloud projects are delivering business value and it doesn’t make sense to say no to them (and even if central IT says no, the cloud costs can usually be paid for out of a line-of-business budget).
Nevertheless, it’s causing a lot of organizational angst because central IT or the sourcing team didn’t anticipate this spending. This organization needs to learn to shift its budgeting processes for the digital future, and cloud chargeback will help support future decision-making.
No commitments.
This is the minty wrongness of Bertie Botts toothpaste. The organization could get discounts by using public discounting mechanisms for commits (like AWS Savings Plans and Azure Reserved Instances) as well as making a contractual commitment for a negotiated discount.
But because the organization feels like they can’t perfectly predict their usage, and aren’t sure they’ll keep using everything they’re running today, they commit to nothing, thereby ensuring that they spend grotesquely more than they need to.
This is universally a terrible idea. Organizations that aren’t in the early pilot stage have long-term production applications and some predictability of usage; commit to the stuff you know you’re not killing off.
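Some rough arithmetic shows why committing to nothing is so costly. The figures below are entirely made up (a hypothetical $1/hour on-demand rate and a 30% one-year commitment discount; substitute your provider’s real pricing), but the shape of the result is general.

```python
# Illustrative figures only; substitute real pricing from your provider.
ON_DEMAND_RATE = 1.00   # $/instance-hour (hypothetical)
COMMIT_DISCOUNT = 0.30  # hypothetical one-year commitment discount
HOURS_PER_YEAR = 8760

def annual_cost(instances_needed, instances_committed):
    """Committed capacity is paid for whether used or not; anything
    needed beyond the commitment runs at the on-demand rate."""
    committed = instances_committed * HOURS_PER_YEAR * ON_DEMAND_RATE * (1 - COMMIT_DISCOUNT)
    on_demand = max(instances_needed - instances_committed, 0) * HOURS_PER_YEAR * ON_DEMAND_RATE
    return committed + on_demand

print(annual_cost(100, 0))   # no commitment:                     876,000
print(annual_cost(100, 60))  # commit to the stable 60% baseline: 718,320
print(annual_cost(70, 60))   # even if usage shrinks 30%:         455,520
```

Even in the shrinkage scenario, the committed cost (455,520) undercuts the equivalent uncommitted spend (70 × 8,760 × $1 = 613,200). That’s the point: commit only to the baseline you’re confident survives the term.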
Dev/test waste.
This is the mundane bleah-ness of Bertie Botts earwax. Developers are provisioning the biggest things they can get away with (or at least being overaggressive in their estimates of what they need), lots of abandoned resources are idling away, and dev/test infrastructure that’s only needed during business hours isn’t being suspended overnight and at weekends.
This is what cloud cost management tools are great at doing — identifying obvious waste so that it can be eliminated, largely by shutting it down or suspending it, preferably via automation.
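The off-hours case in particular is usually solved with a small scheduled job. A minimal sketch (Python with boto3, assuming AWS EC2 and a hypothetical env=dev tagging convention) that you’d pair with a matching morning start job and a scheduler such as cron or EventBridge:

```python
import boto3

def stop_dev_instances(region="eu-west-1"):  # hypothetical region
    """Stop all running EC2 instances tagged env=dev.

    Pagination is omitted for brevity; large fleets should use the
    describe_instances paginator.
    """
    ec2 = boto3.client("ec2", region_name=region)
    reservations = ec2.describe_instances(
        Filters=[
            {"Name": "tag:env", "Values": ["dev"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )["Reservations"]
    instance_ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]
    if instance_ids:
        ec2.stop_instances(InstanceIds=instance_ids)
    return instance_ids
```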
Too much production headroom.
This is the mild weirdness of the Bertie Botts grass flavour. Application teams haven’t implemented autoscaling for applications that can scale horizontally, or they’ve overestimated how much production headroom an application with variable usage needs (which may result in oversizing compute units or being overly aggressive with autoscaling).
This requires implementing autoscaling with some thoughtful tuning of parameters, and possibly a business-value conversation about the cost/benefit trade-off of consistently higher application performance.
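Much of that tuning lives in a single number. As a sketch (Python with boto3, assuming AWS EC2 Auto Scaling and a hypothetical group name), a target-tracking policy makes the headroom/cost trade-off explicit:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# "web-tier-asg" is a hypothetical group name. TargetValue is the knob:
# targeting 50% average CPU keeps generous headroom, while 70% runs
# leaner but absorbs traffic spikes less gracefully.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-tier-asg",
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 60.0,
    },
)
```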
Wrong-sizing production.
This is the awful lingering terribleness of Bertie Botts vomit, whose taste you cannot get out of your mouth. Production environments are statically overprovisioned and therefore overly costly.
On-prem, 30% utilization is common, but it’s all CAPEX and as long as it’s within budget, no one really cares about the waste. But in the cloud, you pay for that excess resource monthly, forcing you to confront the ongoing cost of the waste.
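Detecting that overprovisioning is the easy half. A sketch along these lines (Python with boto3, assuming AWS CloudWatch’s standard EC2 metrics; the instance IDs are hypothetical) flags instances averaging under 30% CPU over two weeks:

```python
import boto3
from datetime import datetime, timedelta, timezone

cloudwatch = boto3.client("cloudwatch")

def average_cpu(instance_id, days=14):
    """Mean of CloudWatch's daily average CPUUtilization datapoints."""
    now = datetime.now(timezone.utc)
    datapoints = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        StartTime=now - timedelta(days=days),
        EndTime=now,
        Period=86400,
        Statistics=["Average"],
    )["Datapoints"]
    return sum(p["Average"] for p in datapoints) / len(datapoints) if datapoints else None

# CPU alone understates real needs (memory and I/O matter too), so treat
# hits as candidates for investigation, not verdicts.
for iid in ["i-0abc123example", "i-0def456example"]:
    cpu = average_cpu(iid)
    if cpu is not None and cpu < 30:
        print(f"{iid}: {cpu:.0f}% average CPU, rightsizing candidate")
```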
However, anyone who tells you to “just” rightsize has never actually tried to do this in practice within an enterprise. The problem is that applications that scale vertically typically can’t easily be rightsized.
Resizing is often difficult or impossible to automate, because the application installation is complicated. The application is fragile and may be mission-critical, so you are cautious about maintenance downtime. And the application team — the only people who really understand how this thing works — is likely busy with other priorities.
If this is your situation, your cloud cost management tool may cause you to cry hopeless tears, because you can see the waste but taking remediation actions is a complicated cross-functional war dance and delicate negotiation that leaves everyone wondering if it wouldn’t have been easier to just keep paying a larger bill.
Suboptimal design and implementation.
This is the controversial popcorn flavour. Architects are sometimes cost-oblivious when they design cloud solutions. They may make bad design choices, or changes in application features and behaviour over time may turn a once-reasonable design choice into an unexpectedly expensive one.
Developers may write poorly performing code that consumes a lot of infrastructure resources, or code that makes excessive (and, cumulatively, expensive) calls to cloud services. Your cloud cost management tools are unlikely to be of any use for detecting these situations.
This needs to be addressed through performance engineering, with attention paid to the business value of the time/effort/money necessary to do so — and for many organizations may require bringing in third-party expertise to diagnose the problems and offer recommendations.
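As one concrete, hypothetical example of what such performance engineering turns up: code making per-item calls to a service that bills per request, where a batched call would do. A sketch assuming AWS SQS via boto3:

```python
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.eu-west-1.amazonaws.com/123456789012/events"  # hypothetical queue

def publish_one_by_one(messages):
    """Anti-pattern: one billed API request (and round trip) per message."""
    for body in messages:
        sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=body)

def publish_batched(messages):
    """Same result with roughly a tenth of the billed requests: an SQS
    batch call carries up to ten messages and is billed per 64 KB of
    payload, not per message."""
    for start in range(0, len(messages), 10):
        chunk = messages[start:start + 10]
        sqs.send_message_batch(
            QueueUrl=QUEUE_URL,
            Entries=[{"Id": str(n), "MessageBody": body} for n, body in enumerate(chunk)],
        )
```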
Notably, the answer to most of these issues is not “implement a cloud cost management tool”. The challenges aren’t as simple as a lot of vendors (and talking heads) make them out to be.
First published on Gartner Blog Network