I had the privilege to be on this FinOps webinar panel hosted by Tara Tapper, COO of Cloud at Eviden, an Atos business, which included guest speakers Chris Hennesey, Strategist at AWS, and Ian Brown, Director of Cloud Transformation at Keyloop. I’ve known of Chris for years, so it was great to get to know him personally for this panel, and I’ve had the privilege of working with Ian for the last 12 months.

As a starting point, as we did on the webinar, we should explain what FinOps is. This is something anyone in the space is very used to doing, and the FinOps Foundation’s definition is a good place to begin:

“FinOps is an evolving cloud financial management discipline and cultural practice that enables organizations to get maximum business value by helping engineering, finance, technology and business teams to collaborate on data-driven spending decisions.” – as per FinOps Foundation Technical Advisory Council

When asked for my own definition, I try to keep it really simple:

FinOps helps organisations understand the value they are getting from the Cloud and make the right decisions to help improve it.

The webinar itself had a fairly broad topic, Cloud Voyage: Navigating Success with FinOps at Every Stage. What really stood out for me was how we all agreed that FinOps needs to start as early as possible.

To quote the O’Reilly Cloud FinOps book (Storment, Fuller) and more specifically my friend Marit Hughes:

“Security is job 0, FinOps is job 0.5. Just as security is everyone’s job, so too is FinOps.”

It’s hard to put it much better than that, but Chris did a good job in his attempt.  When Tara asked “When should you start FinOps?”, Chris’s response was clear: “Today.”  And Chris is right, the sooner you include FinOps in your cloud journey, the smoother it will be.  The cultural and process change that’s required needs a collaborative approach with finance, technology and the wider business community.  These changes are simpler if handled in smaller steps, which is easier to do earlier in the journey than later.  The panel also added that it’s important that people feel this change is done “with them, not to them”, which is absolutely true; we are, after all, dealing with people when it comes to cultural change.

Ian went on to talk us through how Keyloop already had some great conceptual practices around ownership and accountability from the data centre, but moving to AWS turbo-charged this, allowing Keyloop to give their product owners true infrastructure costs.  While the data centre figures were often only representative, Keyloop can now be specific and detailed in analysing the cost and value of the individual workloads they bring to the business.

So what happens if an organisation hasn’t started with FinOps on day one? Often they will lack clarity and understanding of the costs incurred, and of how and why these change. It can strain the relationship between finance and the technology teams, and often leads to challenging conversations when bills are higher than they were previously, because the people paying them don’t know why. All of this can be handled with good FinOps practices and governance.

On the webinar Chris brought to the discussion how FinOps fits into the area of governance; it is indeed like security and compliance in how it can be built into processes within the organisation.  This further resonates with Marit’s quote: the two are more linked than you may think.  One of the key challenges highlighted in the State of FinOps report (FinOps Foundation) over the past three years is getting engineers to take action.  This is something security and compliance teams have been battling for years, and they have processes with escalation paths for non-response or non-compliance, as well as exception routes.  FinOps should be exactly the same: when there is a gap between accountability and responsibility for cloud spend, process is a key way to help drive action.  Coupled with good governance around escalations and exceptions (which are always going to happen when recommendations come from a non-contextual logic engine), this will result in a positive FinOps outcome for the business.

One of the great questions asked, which follows on from the earlier definition about understanding value, was: how can we use business metrics to help define value from the cloud?  This is known as unit economics in the FinOps community, where we use cloud usage data alongside business metrics/KPIs to understand whether changes are positive or negative against a given metric.  A simple example would be:

A company uses cloud services to host software it sells on to customers.

The cloud service costs $1,000 per month.

The service supports 1,000 users.

The software is sold for $2 per user per month.

From this we can say that the cost of the service is $1 per user per month ($1,000 divided by 1,000 users). Keeping it very simple, the service is therefore making $1 profit per user per month.

Then an engineer applies a change that improves the service for customers and reduces the cost by 20%, so it’s now $800 per month. If the number of users and the price remain the same, the profit from that service has gone up to $1.20 per user per month (an increase of 20%). So the change has improved a business metric.
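For anyone who likes to see the arithmetic spelled out, the worked example above can be sketched in a few lines of Python. The numbers are the hypothetical ones from the example, not real figures, and the function name is just illustrative:

```python
def unit_economics(monthly_cost: float, users: int, price_per_user: float):
    """Return (cost per user, profit per user) for a service."""
    unit_cost = monthly_cost / users
    unit_profit = price_per_user - unit_cost
    return unit_cost, unit_profit

# Before the change: $1,000/month cloud cost, 1,000 users, sold at $2 per user
cost_before, profit_before = unit_economics(1000, 1000, 2)  # $1.00 cost, $1.00 profit

# After the engineer's 20% cost reduction: $800/month, same users, same price
cost_after, profit_after = unit_economics(800, 1000, 2)     # $0.80 cost, $1.20 profit

# Relative change in profit per user: roughly a 20% increase
profit_change = (profit_after - profit_before) / profit_before
```

The point of expressing it this way is that the unit metric (profit per user) stays comparable even as total spend moves, which is exactly what makes unit economics useful for judging whether a change helped or hurt.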

This is a very simple example; obviously these things can go both ways, and often things like user experience can improve other metrics which may in turn affect cost and value, but for now let’s keep it simple.  The key point is that, depending on the service, by aligning finance, the business and technology teams we can open up new levels of visibility to support good business decision making that drives real value.

To learn more about any of the concepts above, please watch the on-demand webinar here, where Ian and Chris bring it all to life with real-world examples.