How environmentally-conscious data centres are increasing network uptime

As the need for new data centres rises in lockstep with concerns for their environmental impact, engineers at a growing roster of companies are stretching their imaginations to entertain out-of-the-box strategies for saving power and keeping their data centres cool. As a result, new facilities are being built in increasingly exotic locales and with progressively innovative designs. At the same time, companies recognise the importance of making sure that these non-traditional data centres – which, by their nature, are often located at a distance from operational centres – remain remotely and reliably accessible in order to ensure uninterrupted monitoring, management, and ultimately, uptime.

The criteria used to determine where a data centre ought to be placed include factors such as the affordability of local electricity and land, the availability of local skilled labour (or implementation of remote administration technology where unmanned data centres make the most sense), and the ability to keep networking equipment cool despite the overwhelming heat load generated by constant operation. Data centre construction and design is also closely concerned with power usage effectiveness (PUE), which is the ratio of the total power used by a data centre to the power used by the IT equipment inside it. The ideal value for this ratio is 1.0 (or, more realistically, 1.1), but some less efficient data centres have PUEs over 2.5. Beyond efficiency, the environmental impact of a data centre’s power sources – which may not all be clean or renewable – is also a critical concern.
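The PUE ratio described above is simple to compute. The sketch below shows the calculation in Python; the wattage figures are hypothetical, chosen only to illustrate a facility hitting the realistic 1.1 target:

```python
def pue(total_facility_power_kw: float, it_equipment_power_kw: float) -> float:
    """Power usage effectiveness: total facility power divided by IT power.

    A value of 1.0 would mean every watt reaches the IT equipment; in
    practice, efficient facilities approach 1.1, while inefficient ones
    can exceed 2.5.
    """
    if it_equipment_power_kw <= 0:
        raise ValueError("IT equipment power must be positive")
    return total_facility_power_kw / it_equipment_power_kw

# Hypothetical figures: 1,320 kW drawn overall, 1,200 kW of it by IT gear.
print(round(pue(1320, 1200), 2))  # 1.1
```

The difference between the two inputs is the overhead (cooling, lighting, power distribution losses) that designs like the Foxconn tunnel aim to minimise.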

To address these factors, many of the latest data centre designs are taking advantage of clever methods for cooling equipment while mitigating negative environmental effects. For example, the Foxconn green-tunnel data centre located in the company’s eco-conscious Guiyang, China industrial park makes use of cold wind passing through its long tunnel design – as well as a carefully considered understanding of wind direction, temperatures, humidity and geology – to cool the facility with no need for extra air conditioning equipment. It also uses 30-35% less power than a comparable, traditionally-designed data centre.

Going a different route, Microsoft’s Project Natick is working to establish a facility beneath the Pacific Ocean, using the underwater environment to cool the infrastructure at a low energy cost, and creating an artificial reef that becomes home to its own ecosystem of marine life. Similarly, Google’s data centre near the Gulf of Finland uses that location’s frigid waters to cool servers, piping in gulf water that is converted into fresh water and back again to avoid damaging the networking equipment. This same care is taken at the hydroelectric-powered Facebook data centre in Lulea, Sweden, where servers are cooled by arctic air that is first treated with water vapor to ensure a humidity level that is safe for the equipment.

As important as it is to design these data centres so that they can reap the benefits of their unique environments safely, it’s just as critical to prepare these locations to be remotely accessible so that faults (whether due to equipment failure, human error or a host of other causes) can be easily diagnosed and repaired. If an outage occurs and customers cannot conduct business as usual, the fact that a facility is a cleverly designed wind tunnel or an eco-friendly structure on the ocean floor becomes far less appealing. In these cases, having remote access to IT infrastructure at distant data centres means reduced downtime, better business continuity, and saving the time and money it would take to send a technician to make repairs (or even bringing some of the components to the ocean’s surface, in one extreme example).

To maintain uptime by implementing resilient networking systems, businesses should put in place redundant and diversified methods for achieving remote connectivity, as well as robust out-of-band management capabilities. While the impact of a data centre upon the environment is a worthy long-term concern, the impact of the environment upon data centres is often a more acute one, as storms, flooding, fires and extreme temperature threaten to destroy equipment and connections, and the effects of global climate change only exacerbate these risks.

Out-of-band management strategies increase network resilience by offering access via a secondary connection method when primary connectivity goes offline – for example, if flooding damages a primary landline-based connection, the network will seamlessly switch over to a cellular-based connection that isn’t vulnerable to the same threats. Improved resilience also requires an awareness of environmental factors inside the facility. Equipping network hardware with environmental sensors makes it possible to automatically alert remotely located administrators if temperature, humidity, smoke, flooding or other issues arise in the data centre, giving them the information they need to protect systems and maintain network uptime.
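The failover logic behind that landline-to-cellular example can be sketched in a few lines. This is a simplified illustration, not any vendor’s implementation: the link names and the `check_link` health probe are placeholders for whatever interfaces and reachability tests a real out-of-band appliance would use:

```python
def check_link(name: str, healthy_links: set[str]) -> bool:
    """Placeholder health check; a real device would probe the interface
    (e.g. ping an upstream gateway over that link)."""
    return name in healthy_links

def select_active_link(links: list[str], healthy_links: set[str]) -> str:
    """Return the first healthy link, preferring earlier (primary) entries,
    so traffic fails over to a secondary path only when it must."""
    for link in links:
        if check_link(link, healthy_links):
            return link
    raise RuntimeError("no healthy links available")

# Simulate flooding taking down the primary landline connection:
# only the cellular path reports healthy, so it becomes active.
print(select_active_link(["landline", "cellular"], {"cellular"}))  # cellular
```

A real deployment would run this check continuously and revert to the primary path once it recovers; the preference-ordered list is what lets the secondary connection stay idle until it is actually needed.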

The data centres of the future will undoubtedly continue to be more environmentally conscious, both inside and out. Looking forward, we should expect these critical infrastructures to increasingly take advantage of non-traditional locales and designs to maximize efficiency and reduce environmental impact, while also better utilizing remote connectivity and out-of-band management to maintain an agreeable environment for the safe and reliable operation of IT assets.