
Data centre managers need to plan for a near-term future where the demand for IT and the cost of the electricity needed to power it are both rising.
IT workloads continue to grow, notably in areas such as IoT and cognitive computing. According to some projections, the energy consumption of the world's data centres could triple, and with rising energy prices that growth will have financial as well as environmental consequences, Schneider Electric general manager for data centres Andrew Kirker (pictured) told iTWire.
Some Schneider customers are seeing their electricity prices rising by 30% or more as their current contracts expire, he noted.
One way this can be addressed is by the adoption of more energy-efficient IT platforms, but that's outside Schneider's area of operations.
Where the company can help, Kirker said, is by improving the power usage effectiveness (PUE) of data centres. A PUE of 1.0 means all of the energy consumed by the data centre is being used by the IT equipment, with no overhead use for cooling, lighting, power conditioning and delivery, and so on.
A PUE of 1.2 is generally considered a good figure. Exactly which aspects of energy consumption are included or excluded can vary, so a direct comparison of the PUEs of two different data centres may not be particularly meaningful. Circumstances also vary: a data centre located in an area where the temperature never exceeds 25 degrees may only require ventilation rather than cooling, resulting in a lower PUE than would be achieved by an otherwise identical data centre in any Australian capital city.
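As a concrete illustration of the metric (a minimal sketch of the standard definition, not Schneider's tooling), PUE is simply total facility energy divided by the energy used by the IT equipment:

```python
# Illustrative sketch of power usage effectiveness (PUE) as commonly
# defined: total facility energy divided by IT equipment energy.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Return PUE. A value of 1.0 means every kilowatt-hour goes to
    the IT equipment; anything above 1.0 is overhead (cooling,
    lighting, power conditioning and delivery, and so on)."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# Example: the facility draws 1200 kWh, of which the IT gear uses 1000 kWh
print(pue(1200, 1000))  # 1.2 -- the figure generally considered good
```

The numbers are invented for illustration; real comparisons also depend on which loads each operator counts in the numerator, as noted above.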
Schneider is applying "innovation at every level" to improve PUE, Kirker said. Examples include the company's Galaxy UPSes and Ecoflair cooling systems, which are energy-efficient and heavily digitised. He claimed Schneider has the best architecture to connect them and the best software, including applications with predictive analytics to help manage and optimise energy use.
One of the challenges is to match the power delivered as closely as possible to the actual demand of the IT equipment, which Schneider achieves by monitoring real-time power requirements and supplying accordingly.
This integration at the software layer also means that if the system detects a pending failure of power or cooling systems, it can trigger the transfer of workloads to hardware that will remain unaffected.
Kirker expects the software-defined data centre, in which workloads are moved automatically to optimise infrastructure use, to extend to the management of workloads between centres. This would, for example, make it possible to exploit differences in outside temperatures by moving workloads to cooler locations and so benefit from the low cost of ambient air cooling.
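The placement idea described above can be sketched in a few lines. This is a hypothetical illustration of the concept, not Schneider's software; the site names and temperatures are invented:

```python
# Hypothetical sketch: choose the data centre with the coolest outside
# air, so a workload can rely on cheap ambient ("free") air cooling.

def coolest_site(ambient_temps_c: dict[str, float]) -> str:
    """Return the name of the site with the lowest outside temperature."""
    return min(ambient_temps_c, key=ambient_temps_c.get)

# Invented example data: current ambient temperatures in Celsius
sites = {"Sydney": 31.0, "Hobart": 18.5, "Auckland": 22.0}
print(coolest_site(sites))  # Hobart
```

A real scheduler would of course weigh electricity prices, network latency and spare capacity alongside temperature; this shows only the temperature-driven selection step.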
"We will see more and more automation of that sort of thing," he told iTWire. "We are ready to go at the infrastructure layer."
On a related issue, Kirker predicts an upswing in the adoption of mini data centres as a result of the spread of IoT and the growth of latency-sensitive applications.
In this area, "prefabrication is really taking off," he said, with telecommunications, mining, and oil and gas being the leading sectors.
With the ability to select power and cooling equipment to suit the prevailing conditions, "it's all eminently doable", said Kirker.