Strategy, Data, Cloud, Aug 01, 2023

Green IT: Optimising the stack from infrastructure to code

Jonathan Watson

Companies are increasingly feeling the pressure from their customers, regulators, employees, vendors, and investors to be more sustainable. This often starts with hiring a chief sustainability officer, but quickly becomes a key focus for the whole C-suite and forms a pillar of the firm’s environmental, social, and governance (ESG) goals. Since 2021, companies in the UK have been subject to mandatory carbon reporting requirements and must report regularly on climate-related matters. Many of the biggest firms have set themselves a goal of reaching net-zero carbon emissions by 2050.

Fortunately, applying a green lens to everyday IT activities pushes companies in the same direction as traditional optimisation and efficiency objectives. Optimising resource usage not only saves time and money in the long run but also reduces an organisation’s carbon footprint.

IT-related consumption forms an ever-increasing proportion of carbon emissions: data volumes are growing exponentially, consumption of technology services is rising rapidly, and use cases such as big data and artificial intelligence are accelerating that growth. It’s more important than ever for leaders to keep a green lens focused on three main areas: IT infrastructure, software development, and data processing.

IT infrastructure

As the old saying goes, you can’t fix what you don’t measure, and this holds true for the impact of IT on your company’s sustainability. So, where should you start?

The first step is to look at your end-to-end processes in their entirety and work out your carbon footprint. Here, we’ll focus on just the technology aspects, but don’t forget your organisation will also need to look at other areas, whether that’s manufacturing facilities, offices, equipment such as laptops, or staff commuting habits. Vendors such as Apple and Acer are now competing on sustainability and use of recycled materials.

If you’re running on-premises infrastructure in your own data centres, a vital first step is having good inventory and utilisation data. Third-party tools can then convert that usage data into carbon-equivalent figures. At Credera, we have successfully used Cloud Carbon Footprint (CCF), an open-source tool that turns cloud billing and usage data into carbon emissions estimates, to help clients genuinely understand their footprint.

[Image: The Cloud Carbon Footprint (CCF) tool]
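
To make the conversion concrete, the sketch below shows the basic arithmetic that tools like CCF automate. It is a deliberately simplified illustration, not CCF’s actual methodology: the region names are real AWS regions, but the carbon-intensity and power-draw coefficients are assumed placeholder values.

```python
# Simplified sketch of the conversion that tools like CCF automate.
# All coefficients are illustrative assumptions, not CCF's real values:
# real tools use per-region grid data and per-instance power models.

# Assumed grid carbon intensity in kgCO2e per kWh, by cloud region
GRID_INTENSITY_KG_PER_KWH = {
    "eu-west-2": 0.23,  # assumption: roughly UK grid average
    "us-east-1": 0.38,  # assumption: roughly US East grid average
}

# Assumed average energy per vCPU-hour of compute, in kWh
KWH_PER_VCPU_HOUR = 0.004

def estimate_co2e_kg(vcpu_hours: float, region: str, pue: float = 1.2) -> float:
    """Estimate operational emissions for a compute workload.

    energy (kWh) = vCPU-hours x kWh per vCPU-hour x data centre PUE
    emissions    = energy x regional grid carbon intensity
    """
    energy_kwh = vcpu_hours * KWH_PER_VCPU_HOUR * pue
    return energy_kwh * GRID_INTENSITY_KG_PER_KWH[region]

# Example: 10,000 vCPU-hours in a London-region data centre
print(f"{estimate_co2e_kg(10_000, 'eu-west-2'):.1f} kgCO2e")
```

Real tools refine this with per-service power models, embodied emissions, and grid intensity data that varies over time, but the shape of the calculation is the same.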

An easy win here is to look at any unused or underused equipment. Many organisations can dramatically accelerate their consolidation or decommissioning programmes by applying senior management focus and support to remediation efforts.

Other industry-standard infrastructure optimisation techniques also align with sustainability principles. A non-exhaustive list could include:

  • Good hardware lifecycle management: Each generation of processors offers more processing capability for the same power budget, or better power-management features to throttle consumption. Ensure you have a sensible hardware refresh policy and turn off obsolete, power-hungry servers.

  • Looking to improve utilisation with consolidation or virtualisation programmes: This could be achieved by creating a centralised container service or developing an internal cloud platform to allocate hardware resources more effectively.

  • Looking for always-on standby servers: Consider how many you really need and whether they must always be turned on. Re-think your disaster recovery strategy; for example, reuse the disaster recovery environment for extra testing capacity.

  • Reviewing your data centre efficiency: At a data centre (DC) level, is your DC running efficiently? A DC’s power usage effectiveness (PUE) is the ratio of the total power the facility consumes to the power consumed by the IT equipment itself. A perfect DC has a PUE of 1.0; modern hyperscale providers such as Google, AWS, and Microsoft Azure already run data centres at around 1.1, while a typical older, privately run DC can have a PUE of 5 or more (a short worked example follows this list).

  • Considering renewable energy sources: Are you able to power your DCs with renewable energy? Many energy suppliers offer green tariffs, pledging to source their energy from renewable or low-carbon sources or, at worst, to offset the difference.
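
To illustrate the PUE arithmetic referenced in the list above, here is a minimal sketch; the energy figures are invented for the example.

```python
# PUE = total facility energy / energy delivered to the IT equipment.
# The closer to 1.0, the less energy is lost to cooling, power
# conversion, and lighting. The figures below are invented.

total_facility_kwh = 1_500_000  # everything the site draws in a year
it_equipment_kwh = 1_000_000    # what actually reaches servers, storage, network

pue = total_facility_kwh / it_equipment_kwh
overhead_kwh = total_facility_kwh - it_equipment_kwh

print(f"PUE: {pue:.2f}")                                # 1.50
print(f"Energy lost to overhead: {overhead_kwh:,} kWh") # 500,000 kWh
```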

Of course, not every business can or should run its own data centres. If your company isn’t in the infrastructure business, you’ll probably be taking full advantage of the opportunities provided by the major cloud vendors. From a sustainability perspective, public cloud makes a lot of sense. Google Cloud has been using 100% renewable power for several years, even offsetting non-renewable usage on an hourly basis, according to its detailed whitepaper. Meanwhile, Microsoft Azure and Amazon Web Services plan to achieve the same by 2025.

From a FinOps perspective, you can blend sustainability metrics with best practices by keeping tight control over your cloud usage, watching for environment sprawl, and taking measures such as turning off idle virtual machines or cluster nodes. You should also automatically shut down development environments during quieter periods such as overnight or at weekends, and refactor your applications to take advantage of auto-scaling features so your estate is right-sized at all times. Various industry sources suggest that around 30% of cloud spending could be saved by implementing strong FinOps practices.
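
As a concrete example of the kind of automation involved, the sketch below stops running development instances on AWS using boto3. It assumes a hypothetical `environment=dev` tagging convention (adjust to your own scheme) and could be triggered from a scheduled job each evening; Azure and Google Cloud offer equivalent APIs.

```python
# Minimal sketch: stop any running EC2 instances tagged as development
# environments, e.g. from a scheduled job that runs each evening.
# Assumes a hypothetical "environment=dev" tag on dev instances.

import boto3

ec2 = boto3.client("ec2", region_name="eu-west-2")

# Find running instances tagged environment=dev
response = ec2.describe_instances(
    Filters=[
        {"Name": "tag:environment", "Values": ["dev"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)

instance_ids = [
    instance["InstanceId"]
    for reservation in response["Reservations"]
    for instance in reservation["Instances"]
]

if instance_ids:
    ec2.stop_instances(InstanceIds=instance_ids)
    print(f"Stopped {len(instance_ids)} dev instances: {instance_ids}")
else:
    print("No running dev instances found.")
```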

Software development

Software development practices can directly impact an organisation’s sustainability too. Code optimisation has a cost in time and energy, and companies have traditionally favoured features and delivery speed over writing more efficient code, trading long-term efficiency for short-term market advantage.

While it’s difficult to measure the carbon emissions associated with an individual piece of code, CPU and memory utilisation are useful proxies, and targets should be set for developers on both. A number of other measures within the developer’s control, such as a web page’s load time in a browser, also work well: the longer a page takes to load, the more resources are consumed by both the client and the back end, including the CPU used to render the page and the network bandwidth needed to download all those third-party libraries. Microsoft has an interesting short video course on the subject, which provides a handy introduction.
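
As a starting point for this kind of measurement, the sketch below times a placeholder workload’s CPU usage and peak memory using only the Python standard library; in practice, teams would wire similar measurements into CI pipelines or profiling tools.

```python
# Minimal sketch: measure CPU time and peak memory of a piece of code
# as rough proxies for its energy footprint. Standard library only.

import time
import tracemalloc

def workload():
    # Placeholder workload: build and sum a large list
    return sum([i * i for i in range(2_000_000)])

tracemalloc.start()
cpu_start = time.process_time()

workload()

cpu_seconds = time.process_time() - cpu_start
_, peak_bytes = tracemalloc.get_traced_memory()
tracemalloc.stop()

print(f"CPU time:    {cpu_seconds:.2f} s")
print(f"Peak memory: {peak_bytes / 1_048_576:.1f} MiB")
```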

An interesting approach that we have seen used by some high-performance computing (HPC) organisations is to run two development teams. One team prototypes a new product in a rapid development language such as Python to get features released quickly, while the second team optimises the same code in something faster, such as well-tuned Java or C++, and re-releases a month or so later. This carries a larger up-front cost but can result in lower recurring costs at execution time, which for a large or compute-intensive application can prove significant.

Regarding language choice, it can be surprising what a difference this makes. One study carried out a few years ago showed a dramatic range of resource consumption across standardised tests: compiled lower-level languages (unsurprisingly C, C++, and Rust) performed best, while interpreted languages such as Ruby, Python, and Perl performed worst, consuming over 70 times more energy than the best compiled languages on compute-heavy tasks. Heavily optimised byte-code languages such as Java do almost as well as the compiled languages. Compromises are possible, of course: Python is very popular because of its relative ease of use, but also because some key libraries for number crunching and data analysis are in fact wrapped C/C++ libraries.
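
The wrapped-library effect is easy to demonstrate. The sketch below runs the same computation as a pure Python loop and as a NumPy call whose inner loop executes in compiled C; absolute timings depend on the machine, but the size of the gap is the point.

```python
# The same computation as an interpreted Python loop and as a NumPy
# call whose inner loop runs in compiled C. Timings vary by machine;
# the size of the gap is the point.

import time
import numpy as np

N = 10_000_000

# Pure Python: the loop is interpreted
start = time.perf_counter()
total = 0.0
for i in range(N):
    total += i * 0.5
python_seconds = time.perf_counter() - start

# NumPy: the loop runs inside an optimised C library
start = time.perf_counter()
total_np = (np.arange(N) * 0.5).sum()
numpy_seconds = time.perf_counter() - start

print(f"Pure Python: {python_seconds:.3f} s")
print(f"NumPy:       {numpy_seconds:.3f} s")
print(f"Speed-up:    {python_seconds / numpy_seconds:.0f}x")
```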

You can also use a high-level glue language to join components together and optimise where it matters in the inner loop. For really large applications, hardware solutions might make more sense: field programmable gate arrays (FPGAs), where custom logic can be programmed into a specialised hardware device; specialised application gateways to offload SSL/TLS overheads; or network cards with built-in encryption capabilities.

Data processing

With data volumes growing exponentially, the storage and movement of data is an increasing source of energy consumption and therefore a sustainability concern. Storage capacity and analytics capabilities are always increasing, so it can be tempting to keep everything just in case, leaving you with a data swamp. So how should you approach this? Here are some key things to consider:

  • Think about what data your organisation is capturing. What is it for, and is it really necessary? How long do you need to keep it for? Is there a clear business reason?

  • Can you articulate back to the business the impact on cost and sustainability of retaining all that data?

  • If the data is in public cloud, are you making best use of the cloud providers’ lifecycle management tools?
    Long-term storage tiers such as AWS’ Glacier, Azure’s Archive tier, and Google’s Archive storage are cheaper for the customer because they’re cheaper for the cloud provider to run, typically on denser, lower-power hardware, and are therefore more sustainable (see the sketch after this list).

  • Have you optimised your data analytics? Just like the example in the development section, long-running queries burn more resources, so a little work tuning queries can save CO2.

  • Have you considered serverless deployments? This can be a good choice for some on-demand workloads as the infrastructure can scale to zero when unused.
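
As an example of putting lifecycle management into code, the sketch below applies an S3 lifecycle rule with boto3. The bucket name, prefix, and retention periods are illustrative assumptions; Azure and Google Cloud offer equivalent lifecycle policies.

```python
# Minimal sketch: an S3 lifecycle rule that moves objects under "logs/"
# to Glacier after 90 days and deletes them after two years.
# Bucket name, prefix, and retention periods are illustrative assumptions.

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-analytics-data",  # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-then-expire-logs",
                "Status": "Enabled",
                "Filter": {"Prefix": "logs/"},
                "Transitions": [
                    {"Days": 90, "StorageClass": "GLACIER"},
                ],
                "Expiration": {"Days": 730},
            }
        ]
    },
)
print("Lifecycle policy applied.")
```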

Another quantifiable bonus of good data hygiene is the reduction of legal risk or the blast radius of a data breach. Good data management, which involves understanding what you have and how long you keep it for, will reduce your exposure to regulatory frameworks such as the General Data Protection Regulation (GDPR).

In a nutshell

We have looked at three ways that organisations can purposefully move toward a more sustainable position while still running a modern IT operation. Starting at the bottom by reviewing infrastructure operations, optimising on-premises hardware, and making informed decisions about external service providers can make real improvements to an organisation’s carbon footprint. The same thought process applies to developing more efficient software: choosing the right language for the task at hand and focusing on optimisation can yield long-term benefits.

All of these disciplines can help make a real contribution toward a company’s ESG goals and result in significant cost savings along the way.

Why Credera

With over 30 years of technology and transformation experience, we are a trusted partner for many global and local organisations. As a vendor-neutral consultancy, we will arm you with the right strategy and technical solution for your unique needs.

We have deep experience helping some of the world's leading organisations take tangible steps to re-evaluate their processes and tooling to support their journey to net zero. This can look like transforming manual data collection, turning sustainability data into actionable insights, or engineering a new product that supports your green strategy. To learn more, please get in touch with a member of our team.

Read more:

A new era of carbon emissions accounting
Demystifying the alphabet soup of environmental sustainability reporting
Making sense of sustainability (Part two): How to unlock data-driven sustainability 
Making sense of sustainability (Part three): How green is your cloud?
Making sense of sustainability (Part four): Seven obstacles to sustainability and how to overcome them
Podcast: Should cloud be a part of your green strategy?
Credera UK sets out its Carbon Reduction Plan
