|Datacentre Electricity Consumption|
In 2005 - $7.2bn Worldwide
| 1:54 pm on Feb 23, 2007 (gmt 0)|
|Total power used by servers represented about 0.6% of total U.S. electricity consumption in 2005. When cooling and auxiliary infrastructure are included, that number grows to 1.2%, an amount comparable to that for color televisions. The total power demand in 2005 (including associated infrastructure) is equivalent (in capacity terms) to about five 1000 MW power plants for the U.S. and 14 such plants for the world. The total electricity bill for operating those servers and associated infrastructure in 2005 was about $2.7 B and $7.2 B for the U.S. and the world, respectively. |
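The report's "five 1000 MW plants" framing is easy to sanity-check. A minimal sketch, assuming annual totals of roughly 45 TWh for the U.S. and 123 TWh worldwide (figures in the ballpark of the report; assumed here for illustration) and plants running at full output:

```python
# Back-of-envelope check of the plant-equivalents claim.
# The TWh inputs below are assumptions, not quoted from the report.

def plant_equivalents(annual_twh, plant_mw=1000):
    """How many plants of plant_mw, running flat out all year,
    would supply annual_twh of electricity."""
    hours_per_year = 8760
    plant_twh = plant_mw * hours_per_year / 1e6  # MWh per year -> TWh
    return annual_twh / plant_twh

print(round(plant_equivalents(45), 1))   # ~5 plants for the U.S.
print(round(plant_equivalents(123), 1))  # ~14 plants worldwide
```

One 1000 MW plant produces at most about 8.76 TWh a year, which is why the U.S. and world totals land near 5 and 14 plants respectively.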
Fascinating report by AMD (.pdf):-
| 10:27 am on Feb 24, 2007 (gmt 0)|
I was listening to a speech by Larry Page. He said he is in talks with electricity companies to provide a special supply for computers, and that this would save 10% of the energy used by the computers.
At present the electricity given to us is stepped down to lower voltage levels and then converted again by our computers before use.
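Where a ~10% saving could come from: every conversion stage between the grid and the chip loses some power, and those losses compound multiplicatively. A minimal sketch with assumed per-stage efficiencies (the exact figures vary by equipment):

```python
# Hypothetical efficiencies for each conversion stage; all assumptions.
stages_today = {
    "utility step-down transformer": 0.98,
    "UPS (double conversion)":       0.90,
    "server AC/DC power supply":     0.80,
    "on-board DC/DC regulators":     0.90,
}

def chain_efficiency(stages):
    eff = 1.0
    for e in stages.values():
        eff *= e  # losses compound multiplicatively
    return eff

today = chain_efficiency(stages_today)
# Replace the assumed 80%-efficient per-server AC/DC supply with a
# 95%-efficient centralized conversion (hypothetical improvement):
improved = today / 0.80 * 0.95
print(f"today: {today:.2f}, improved: {improved:.2f}")
print(f"grid energy saved for the same IT load: {1 - today/improved:.0%}")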
| 10:07 pm on Feb 24, 2007 (gmt 0)|
We're moving our servers to a new data center because the facility we're in now simply cannot supply enough electricity. Racks and cages designed in 1999 for 4U servers that draw 1/20th of the power of today's servers now have 4 or 5 1U servers in them. It's strange to walk around the building -- it looks like a ghost town.
Part of the problem is that the town the old facility is in cannot supply more electricity. This one really made me think.
In our new co-lo facility, each of our racks has a dedicated 40 amp circuit but we cannot fill them, even though all our servers have low-voltage CPUs. One of the main selling points of the provider was that the building is on an axis of two entirely independent power grids and gets redundant power from both.
My wife and I are rabid environmentalists. We use CFL bulbs, drive hybrid cars (or ride our bikes), turn the heat down, the whole bit. But meanwhile, our little installation, one of hundreds like it in our data center, is drawing probably 10x the power our whole family uses in our house.
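That "10x" figure is plausible on a back-of-envelope basis. A minimal sketch, with every number assumed for illustration (installation size, per-server draw, cooling overhead, household usage):

```python
# All figures below are assumptions, not measurements.
servers = 20
watts_per_server = 350                     # assumed average draw per server
it_kw = servers * watts_per_server / 1000  # 7.0 kW of IT load
pue = 2.0                                  # assumed cooling/infrastructure overhead
facility_kw = it_kw * pue                  # 14.0 kW drawn from the grid

household_kwh_per_month = 900              # assumed typical U.S. household
household_kw = household_kwh_per_month / (30 * 24)  # ~1.25 kW average draw

print(round(facility_kw / household_kw))   # ~11x the household's average draw
```

A couple dozen servers plus their share of the cooling plant really can dwarf a house's continuous load.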
It makes one think. Massive amounts of electricity are used to power the servers, which generate heat. More massive amounts of power are used to cool the air, generating more heat. The heat is all pushed out into the atmosphere, even when the outdoor temperature is 10 degrees here in Boston.
Wouldn't it be nice if these data centers could find a way to turn some of that heat back into usable energy? Wouldn't it be nice if Intel continued to work to develop lower power-consuming CPUs? Wouldn't it be nice if server-class machines could go into standby mode?
This is an interesting problem indeed, and our industry is probably way, way down on the list of those dealing with this problem.
| 5:25 am on Feb 25, 2007 (gmt 0)|
My new dual-core Xeon server has a smaller power footprint than the high-GHz Pentium 4 it replaced. So as newer computers are used for servers, the electrical use should go down. I too am concerned for the environment, and I hope to see the performance-per-watt ratio improve over time. I host it at a massive datacenter in Dallas; they have dual diesel backup generators and a huge bank of UPS systems and can guarantee zero downtime. How cool is that!
| 9:13 pm on Feb 25, 2007 (gmt 0)|
>>More massive amounts of power are used to cool the air, generating more heat.
Air conditioners take the heat from inside the building and put it outside. They only generate more heat from the standpoint they are not 100% efficient.
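The energy balance here is worth spelling out: an air conditioner moves heat rather than creating it, so the heat rejected outdoors equals the heat removed from the room plus the compressor's own electrical input. A minimal sketch with an assumed coefficient of performance (COP):

```python
# Idealized A/C energy balance; the COP and load are assumptions.
cop = 3.0                # assumed: 3 kW of heat moved per kW of electricity
server_heat_kw = 100.0   # heat the IT load dumps into the room

electrical_input_kw = server_heat_kw / cop        # chiller's own draw
heat_rejected_kw = server_heat_kw + electrical_input_kw

print(round(electrical_input_kw, 1), round(heat_rejected_kw, 1))  # 33.3 133.3
```

So in this sketch, cooling 100 kW of servers adds about a third again to the facility's electrical bill, and 133 kW of heat goes out the roof.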
Server farms are big business for electric utilities. Their load profiles are pretty good from the standpoint they use electricity around the clock.
From a reliability standpoint you'd want a data center to be fed from two circuits. Ideally each circuit would come from a different substation.
If one circuit gets physically cut (from digging, it does happen), then you're fed from the second circuit. And if one substation goes down (extremely rare) then you're still fed from the second circuit.
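If the two feeds really fail independently, their unavailabilities multiply, which is why dual-circuit feeds are so effective. A minimal sketch with an assumed per-circuit availability:

```python
# Assumed figure: each circuit is independently available 99.9% of the time.
u_single = 0.001                 # 0.1% unavailability per circuit
u_both = u_single * u_single     # both down at once, assuming independence

availability = 1 - u_both
minutes_down_per_year = u_both * 8760 * 60

print(f"{availability:.6f}")                      # 0.999999
print(f"{minutes_down_per_year:.2f} min/year")    # ~0.53
```

The caveat is the independence assumption: two circuits from the same substation, or sharing a trench, fail together far more often than this math suggests -- which is exactly why the separate-substation setup above matters.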
I have to say, that with the back up power supply systems of today, I wouldn't worry so much about the reliability of your power company.
| 9:31 pm on Feb 25, 2007 (gmt 0)|
The answer is virtualization, and beyond that, elastic computing.
Most of these machines are running 24/7 at very low average loads.
Now imagine a data center with 10,000 servers. It's 3 AM and only 500 of them are physically turned on, because load is very low at this hour. At 5PM there are 7,000 servers turned on.
Users don't care or know what server (or servers) they are using, as their applications hop seamlessly from server to server. They pay for bandwidth and computing power actually used, and when their needs grow they don't have to install new equipment or migrate to a different server or servers.
If such a scheme were implemented widely, we could probably save 50% or more of this power cost.
Yes, today's servers reduce power usage at low loads, but only to a degree. There's still quite a power requirement just to have the computer sitting there doing nothing. Why not just turn it off completely when there isn't a need?
And while virtualization is used, we need to move beyond virtualization to elastic computing, where applications are not tied to a particular server. That way, each server can be loaded to an optimal level from a power-efficiency basis. Most VMs today probably have (at least) as low an average load level as most dedicated servers.
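The savings argument above can be sketched numerically: compare an always-on fleet (idle servers still drawing power) with an elastic one that powers servers on and off to track demand. Every figure below (fleet size, idle/busy draw, the load curve, the headroom margin) is an assumption for illustration:

```python
# Sketch of always-on vs. elastic fleet energy use; all figures assumed.
FLEET = 10_000
IDLE_W, BUSY_W = 200, 300   # per-server draw when idle vs. fully loaded

# A crude diurnal load curve: fraction of peak demand, one value per hour.
load = [0.05]*6 + [0.3]*4 + [0.6]*4 + [0.7]*6 + [0.3]*4  # 24 hourly values

def always_on_kwh(load):
    """Whole fleet powered 24/7; unneeded servers idle but still drawing."""
    total = 0.0
    for frac in load:
        busy = frac * FLEET
        idle = FLEET - busy
        total += (busy * BUSY_W + idle * IDLE_W) / 1000
    return total

def elastic_kwh(load, headroom=1.2):
    """Power on only the servers needed this hour, plus 20% headroom."""
    total = 0.0
    for frac in load:
        powered = min(FLEET, frac * FLEET * headroom)
        total += powered * BUSY_W / 1000
    return total

a, e = always_on_kwh(load), elastic_kwh(load)
print(f"saved: {1 - e/a:.0%}")
```

Under these assumptions the saving comes out around 40%, in the neighborhood of the 50% suggested above; the result is sensitive to how high the idle draw is relative to the busy draw.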
| 10:42 am on Feb 26, 2007 (gmt 0)|
Interesting idea jtara.
I wonder if there's also a case for running power into racks on 5V and 12V rails - e.g., taking the power-supply duty outside of the datacentre (on the roof, possibly, for better cooling)?
| 5:28 pm on Feb 26, 2007 (gmt 0)|
|I wonder if there's also a case for running power into racks on 5V and 12V rails - e.g., taking the power-supply duty outside of the datacentre (on the roof, possibly, for better cooling)? |
Actually, that is being done in some cases. There are problems distributing DC throughout a facility, though, due to the much higher current requirement at the lower voltages. So, a new alternative that is gaining momentum is rack-level power supplies. One power supply per rack, distributing DC through the rack.
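The current problem is just Ohm's law: for the same power, current scales as 1/V, and conductor size and resistive (I²R) losses scale with it. A minimal sketch with an assumed 5 kW rack:

```python
# Current needed to deliver the same power at different voltages.
# The 5 kW rack figure is an assumption for illustration.
def current_amps(power_w, volts):
    return power_w / volts

rack_power = 5000  # 5 kW rack
for volts in (208, 48, 12):
    print(f"{volts:>4} V -> {current_amps(rack_power, volts):,.0f} A")
```

At 208 V the rack needs about 24 A; at 12 V the same power needs over 400 A, which is why facility-wide low-voltage DC runs need massive busbars, and why converting at the rack instead keeps the low-voltage runs short.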
Cisco has had this as an available option for routers for years, BTW. Better to have two redundant power supplies serving a rack full of routers than to have each router have a single non-redundant power supply.
Now, some server and blade manufacturers are taking the same approach. These setups generally mount the power supply on top of the rack, for better heat exhaust.
This is not a huge win power-wise, but it does make a difference (perhaps 10%). And it can save on equipment cost.
| 9:20 pm on Feb 26, 2007 (gmt 0)|
On the enviro side, there are a number of hosting companies claiming to be wind- or solar-powered. In most cases they're just buying "green" energy or carbon offsets from their utility, but I've heard of at least one that's generating its own power from a PV array on the datacenter premises (somewhere out in the desert, IIRC). Hopefully they have battery, generator, or grid-tied backup -- last time I checked we didn't have redundant suns in this solar system.
Does anyone know of a good resource for estimating power consumption for individual servers based on hardware configuration?
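In the absence of a proper resource, a crude per-component estimator gets you in the right ballpark: sum typical draws per component, then divide by PSU efficiency, since the wall draw includes the supply's own losses. All per-component wattages below are rough assumptions, not vendor figures:

```python
# Crude server power estimator; every wattage here is an assumption.
COMPONENT_WATTS = {
    "cpu":         80,  # per socket, under load
    "ram_stick":    5,  # per DIMM
    "hdd":         10,  # per spinning disk
    "fan":          5,
    "motherboard": 30,
}

def estimate_watts(cpus=2, ram_sticks=4, hdds=2, fans=3, psu_efficiency=0.8):
    dc_load = (cpus * COMPONENT_WATTS["cpu"]
               + ram_sticks * COMPONENT_WATTS["ram_stick"]
               + hdds * COMPONENT_WATTS["hdd"]
               + fans * COMPONENT_WATTS["fan"]
               + COMPONENT_WATTS["motherboard"])
    return dc_load / psu_efficiency  # wall draw includes PSU losses

print(round(estimate_watts()))  # ~306 W at the wall for a typical 1U box
```

Note that nameplate PSU ratings (400 W, 500 W) are maximums, not typical draw; a meter on the actual machine beats any estimate.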
| 4:59 am on Feb 27, 2007 (gmt 0)|
>an amount comparable to that for color televisions.
Simple solution... do away with color televisions, they are a huge waste of resources, mainly consumed with ads for nonsense products!
We should ban all commercial TV stations, only subscriber stations and PBS should remain!
It will be years before this actually happens... but, in the meantime, join my limited revolt and simply refuse to buy any product that advertises on commercial stations!
If I see you advertise I will never purchase from you! If enough follow suit we will end up in a better world :)