Expert Advice: Rethinking Electrical Distribution in Datacenters to Reduce Risk of Failure
In 1893, Westinghouse undertook the deployment of the first electrical network in the United States, exploiting the alternating current system advocated by Nikola Tesla. As amazing as it sounds, this method of distributing electricity has not evolved much since then. The same can be said of recent datacenter history: little has changed in this field either. Service providers obviously build redundancy into certain elements of the distribution chain (UPS units, transformers), increasing the availability of their infrastructures. Even so, the general principle remains the same. With datacenters now consuming between 1.5 and 2 percent of the world's electricity, how can we optimize power distribution in them? What are the challenges? How can these changes reduce the risk of failure? Learn the answers to these questions in the following assessment by Germain Masse, director of OVH Canada.
Economic Interest vs. Reduction of the "Failure Zone"
When we design a datacenter, we are confronted with two competing logics. On one side, there is a desire to segment power as much as possible, multiplying the number of "small" transformers and UPS units in order to minimize the impact of any single equipment failure. On the other side, this strategy adds to installation and maintenance costs, which are ultimately passed on to the end customer.
Today, the balancing point is typically achieved by deploying transformers of 1 to 2 megawatts, installed upstream of a UPS with a capacity of 400 kilowatts to 1 megawatt (note that the current generation of UPS units is often built from an assembly of modules). UPS units are critical: they carry the load during the 15 seconds to one minute it takes to switch to the backup electrical supply and/or start the generators after a power failure. Their large capacity, however, forces us to accept relatively large areas of failure, each concerning between 3,000 and 6,000 machines.

This dilemma is a major issue for datacenters. OVH calls this area the "failure zone": in the event of an equipment malfunction, all affected servers are located in the same failure zone. Depending on the architecture being built, some users will prefer to group servers in the same failure zone (for example, when multiple servers must always be able to communicate with each other), while others will choose to disperse machines across different failure zones (as required for a disaster recovery plan). To meet such needs, OVH lets customers locate servers in the datacenter of their choice (Roubaix, Gravelines, or Strasbourg in Europe; Beauharnois in North America). Beyond the choice of datacenter, we wish to offer even finer control by letting customers choose the rack as well. This is a logistical challenge, but it provides real value to users.
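The failure-zone figures above can be reproduced with a back-of-the-envelope calculation. This minimal sketch divides the UPS capacities quoted in the article by an assumed per-server draw; the 150-watt figure is an illustrative assumption, not a number from OVH.

```python
# Back-of-the-envelope sizing of a "failure zone": how many servers share one UPS.
# UPS capacities come from the article; the 150 W per-server draw is an assumption.

def servers_per_failure_zone(ups_capacity_w: float, per_server_w: float = 150.0) -> int:
    """Number of servers one UPS (one failure zone) can feed."""
    return int(ups_capacity_w // per_server_w)

small_zone = servers_per_failure_zone(400_000)    # 400 kW UPS
large_zone = servers_per_failure_zone(1_000_000)  # 1 MW UPS
print(small_zone, large_zone)  # on the order of the 3,000-6,000 machines quoted above
```

Varying the assumed per-server draw shifts the result, but any realistic value lands in the same order of magnitude, which is the point: one UPS failure touches thousands of machines.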
What if the Alternative was Direct Current?
Everywhere in the world, electricity is distributed as AC (alternating current). In contrast, the components that make up a server (processor, hard drive…) are powered by DC (direct current) at 3.3, 5, or 12 volts depending on the type of component. Power therefore arrives at the datacenter as AC and undergoes several conversions before reaching the server's components as DC. When you look closely, this is what happens in a datacenter in the US.
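To see why the number of conversion stages matters, here is a minimal sketch that multiplies per-stage efficiencies along the classic AC distribution chain described above. The stage list and efficiency figures are illustrative assumptions, not measured values from the article.

```python
from functools import reduce

# Classic chain: grid AC -> UPS (rectify, then re-invert) -> server PSU (AC->DC)
# -> on-board regulators (DC->DC down to 12/5/3.3 V).
# All efficiencies below are illustrative assumptions.
classic_chain = {
    "UPS rectifier (AC->DC)": 0.95,
    "UPS inverter (DC->AC)":  0.95,
    "server PSU (AC->DC)":    0.90,
    "on-board DC/DC":         0.92,
}

def chain_efficiency(stages: dict) -> float:
    """Overall efficiency is the product of each stage's efficiency."""
    return reduce(lambda acc, eff: acc * eff, stages.values(), 1.0)

print(f"classic chain: {chain_efficiency(classic_chain):.1%} of grid power reaches the components")
```

Because the per-stage losses multiply, every conversion stage removed raises the fraction of grid power that actually reaches the silicon.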
It is quite logical to want to eliminate the two conversion stages -DC/AC and AC/DC- between the UPS and the server. Unfortunately, the field of electricity has not kept pace with information technology. Let's go even further: why not deliver DC power directly to the datacenters?

Based on the work of Thomas Edison, electricity was originally distributed as direct current. To reduce losses during transport (true for both DC and AC), it is necessary to lower the current by raising the voltage, because Joule losses grow with the square of the current. At the time, however, changing the voltage of direct current wasted a great deal of energy, which is why power companies ultimately sided with Nikola Tesla and his alternating current system. Today the situation has changed: with the help of power electronics (better known as converters), the voltage of direct current can be changed for transport with limited losses. Direct current is already preferred for very long distances and for underground or underwater cables, because only two conductors are needed for transport.

Beyond simplifying electrical distribution, supplying datacenters with direct current is interesting because different energy sources can be coupled together to increase voltage, current, or redundancy. The recent surge in new generation methods, such as photovoltaics or fuel cells, contributes to this development, though it currently faces two setbacks. The first is economic: today, equipment associated with direct current costs more. The second is psychological: ignorance and an unwillingness to change on the part of operators.
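The voltage argument can be made concrete with the Joule-loss formula: P_loss = I² × R, with I = P / V, so for the same power and the same line, multiplying the voltage by ten divides the losses by one hundred. The line resistance and power figures below are illustrative assumptions.

```python
def line_loss_watts(power_w: float, voltage_v: float, resistance_ohm: float) -> float:
    """Joule losses in a transmission line: P_loss = I^2 * R, where I = P / V."""
    current_a = power_w / voltage_v
    return current_a ** 2 * resistance_ohm

# Transporting 1 MW over a line with 5 ohms of resistance (illustrative values):
print(line_loss_watts(1_000_000, 10_000, 5.0))   # at 10 kV:  I = 100 A -> 50,000 W lost
print(line_loss_watts(1_000_000, 100_000, 5.0))  # at 100 kV: I = 10 A  ->    500 W lost
```

This is exactly why high-voltage transport won, and why Edison-era DC, which could not change voltage efficiently, lost to Tesla's AC.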
UPS in the Rack or in the Server: What is the Best Strategy?
Whatever strategy is chosen to move towards DC - AC/DC conversion at the datacenter's power entry, at the UPS, or at the room, rack, or server level - the server's power supply must be replaced.
Microsoft recently disclosed, as part of its contribution to the Open Compute Project, its plans for a server power supply that includes Li-ion batteries (1), an innovation the company has named Local Energy Storage (LES). In reality, this concept is not new: in 2012, Supermicro announced that it would commercialize server power supplies with batteries (2), and in 2009, Google revealed that it used this approach in some of its machines (3).
The approach at OVH is original in that the server's power supply is replaced by a single-stage DC/DC converter. Each rack contains a piece of equipment (code-named Altigo) that converts the incoming AC to DC (24 volts or 48 volts depending on the case).
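One practical consequence of the chosen bus voltage is the current the rack's DC distribution must carry (I = P / V): halving the voltage doubles the current, and thus the required conductor cross-section. The 10 kW rack power below is a hypothetical figure, not an OVH specification.

```python
def bus_current_amps(rack_power_w: float, bus_voltage_v: float) -> float:
    """Current drawn on the rack's DC bus: I = P / V."""
    return rack_power_w / bus_voltage_v

# A hypothetical 10 kW rack on each of the two voltages mentioned above:
print(bus_current_amps(10_000, 24.0))  # ~417 A at 24 V
print(bus_current_amps(10_000, 48.0))  # ~208 A at 48 V
```

This trade-off between conductor size and conversion-stage simplicity is one reason the choice between 24 and 48 volts depends on the case.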
This solution allows us to use the same batteries as standard UPS units, which are mass-produced. Today, this is more cost-effective than installing a battery in each server. However, the situation could change depending on the commercial success of this new type of power supply with an integrated battery.
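The appeal of the rack-level approach can be sketched by comparing conversion chains. This compares the classic four-stage chain with a two-stage chain of the kind described above (rack AC-to-DC converter plus a single DC/DC stage in the server); all per-stage efficiencies are illustrative assumptions, not OVH measurements.

```python
from functools import reduce

def chain_efficiency(effs: list) -> float:
    """Overall efficiency is the product of per-stage efficiencies."""
    return reduce(lambda acc, e: acc * e, effs, 1.0)

# All figures below are illustrative assumptions.
classic   = [0.95, 0.95, 0.90, 0.92]  # UPS rectifier, UPS inverter, server PSU, on-board DC/DC
two_stage = [0.96, 0.92]              # rack-level AC->DC converter, single DC/DC stage in server

print(f"classic:   {chain_efficiency(classic):.1%}")
print(f"two-stage: {chain_efficiency(two_stage):.1%}")
```

Even with modest per-stage figures, dropping two conversion stages recovers a double-digit share of the power otherwise lost as heat, on top of shrinking the failure zone to the rack.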
(1) Microsoft Reinvents Datacenter Power Backup with New Open Compute Project Specification
(2) Supermicro® Servers Offer Industry-First N+N+N Battery Backup Power (BBP™) Module Technology
(3) "Google's big surprise: each server has its own 12-volt battery to supply power if there's a problem with the main source of electricity."