Understanding why eco-friendly data centers matter is essential information for any company that depends on them.
There's no doubt that innovation in technologies such as smartphones has slowed over the past few years, but data just continues growing by leaps and bounds. In 2012, there were 500,000 data centers around the world handling global traffic. Now, that number has risen to over 8 million! The growth in smartphone usage, the adoption of IoT, and large-scale data analytics have all contributed to the growth of data centers, but there is a price to pay.
Each year, these millions of data centers dump tons of discarded hardware, consume staggering amounts of electricity, and produce carbon emissions rivaled only by the airline industry.
Predicting technology growth has always been a challenge, but several sources estimate that data centers could absorb more than 10% of the global energy supply by 2030 if left unchecked. That growth would mean more greenhouse gas emissions along with an increase in e-waste. Researchers, including one of Britain's leading data center experts, Ian Bitterlin, have said that the amount of energy used by data centers is expected to double every four years.
Informa surveyed hundreds of IT leaders about their data center practices, and the findings are striking. Data centers consume approximately 3% of the world's electrical supply, yet energy efficiency ranked only fourth among priorities when constructing or leasing new data centers. Most respondents knew nothing about their data center's PUE, or Power Usage Effectiveness, the primary measure of a data center's efficiency, and typically kept their facilities at unnecessarily cold temperatures, wasting even more power.
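For reference, PUE is simply the ratio of total facility energy to the energy that actually reaches the IT equipment. A minimal sketch of the calculation, using illustrative numbers rather than figures from the survey:

```python
def power_usage_effectiveness(total_facility_kwh: float,
                              it_equipment_kwh: float) -> float:
    """PUE = total facility energy / energy delivered to IT equipment.

    A PUE of 1.0 would mean every watt goes to computing; cooling and
    power distribution overhead push typical facilities well above that.
    """
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# Illustrative numbers: a facility drawing 1,800 MWh overall,
# of which 1,000 MWh reaches servers, storage, and networking.
print(power_usage_effectiveness(1800, 1000))  # → 1.8
```

The closer the result is to 1.0, the less energy is being lost to overhead such as cooling, which is why over-chilled facilities show inflated PUE numbers.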
This paints an uncomfortable picture of the environment's future. Fortunately, some industry leaders have stepped back, taken a closer look, and adopted innovative ways to address the conflict.
Over the past five years, the U.S. Department of Energy found that increases in Internet traffic and data loads were being offset by a wide range of new technologies and designs that decrease data centers' energy consumption.
The Lawrence Berkeley National Laboratory estimated that if 80% of the servers in the United States were shifted to optimized facilities, the kind designed for distributed computing environments that scale efficiently from a few servers to thousands, energy use would drop by 25%.
For businesses that do not need or cannot afford a hyperscale data center, a new class of resource-optimized data center systems has emerged on the market. Over the past few years, new server technologies and data center engineering have focused on maximizing resource use and efficiency while limiting energy needs. These solutions seek improvements in design and rethink how standard data centers are built in order to achieve better performance and efficiency.
The development of superior cooling techniques has become one of the leading areas of improvement. One proposed solution is to locate data centers in cold climates; another keeps fewer servers powered on, eliminating wasted idle time.
In 2014, Facebook developed a system called Autoscale, which reduces the number of servers that need to be on during low-traffic hours, saving approximately 10 to 15% in power. Other companies, such as Google, have turned to artificial intelligence to improve their cooling systems, adjusting to weather and operating conditions and reducing cooling energy use by almost 40%.
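Facebook has not published Autoscale as code, but the core idea, keeping only enough servers awake to serve the current load plus some headroom, can be sketched in a few lines. The function name, capacity figures, and headroom fraction below are illustrative assumptions, not details from the system:

```python
import math

def servers_needed(current_rps: float, capacity_rps_per_server: float,
                   headroom: float = 0.25, minimum: int = 2) -> int:
    """Estimate how many servers must stay active for the current load.

    headroom: spare capacity kept in reserve for traffic spikes.
    minimum:  floor so the active pool never drops to zero.
    """
    required = current_rps * (1 + headroom) / capacity_rps_per_server
    return max(minimum, math.ceil(required))

# At a quiet 1,200 requests/sec, only a handful of servers need to
# run; the rest can sit in a low-power state instead of burning
# energy at partial load.
print(servers_needed(current_rps=1200, capacity_rps_per_server=500))  # → 3
```

The energy win comes from concentrating traffic: a server running near full utilization does far more work per watt than several servers each ticking over at 10-20% load.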
Another avenue that has grown in popularity is designing server systems that perform well at higher temperatures. Instead of chilling the facility to a fixed setpoint, newer hardware can run warmer without impacting reliability. This requires significantly less cooling, which in turn requires less electricity.
Researchers are also looking into other ways to make power use more efficient. A study by ControlUp found that approximately 77% of the 140,000 servers it examined were overprovisioned with hardware, drawing extra power while sitting largely idle.
To solve this problem, pooled resources can be built into the design, allowing computing resources to be shared across many servers instead of being tied to individual devices.
Another approach is modular, sustainable infrastructure that allows only the components that need replacing to be upgraded. Building servers with independently upgradeable sub-systems lets companies be selective and efficient, preserving hardware that does not have to be replaced. Intel has been introducing disaggregated system designs with its newest generation of CPUs, which has significantly contributed to reducing e-waste.
Toward The Future:
NASA's Center for Environmental Research has been adopting data center solutions in line with green computing. Lesley Ott of NASA's Global Modeling and Assimilation Office said they do not want to create a greenhouse gas pollution problem while researching one. But while organizations like NASA are improving their research practices and addressing data centers' environmental footprint, many companies have yet to come to terms with the environmental impact of their products and services.
At this point, the most important step is to educate companies on how important and beneficial eco-friendly data centers will be. Technologies for solving the growing data center problem are ready and available, offering a double advantage: better performance and less environmental damage. Companies need to know that data centers do not have to harm the environment, if they are willing to take the right steps now.