Total power used by servers represented about 0.6% of total U.S. electricity consumption in 2005. When cooling and auxiliary infrastructure are included, that number grows to 1.2%, an amount comparable to that for color televisions. The total power demand in 2005 (including associated infrastructure) is equivalent (in capacity terms) to about five 1000 MW power plants for the U.S. and 14 such plants for the world. The total electricity bill for operating those servers and associated infrastructure in 2005 was about $2.7 B and $7.2 B for the U.S. and the world, respectively.
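As a sanity check on the report's "five 1000 MW plants" figure, the arithmetic works out (a rough sketch; the U.S. 2005 total consumption figure of ~3,660 TWh is my outside assumption, not taken from the report):

```python
# Rough sanity check of the report's capacity figure.
# Assumed U.S. 2005 total electricity consumption (~3,660 TWh) is an
# outside estimate, not a number from the report itself.
us_total_twh = 3660
server_share = 0.012          # 1.2% including cooling and infrastructure
hours_per_year = 8760

server_twh = us_total_twh * server_share           # ~44 TWh/year
avg_demand_mw = server_twh * 1e6 / hours_per_year  # TWh -> MWh, spread over the year
plants = avg_demand_mw / 1000                      # in units of 1000 MW plants

print(f"{server_twh:.0f} TWh/yr ~= {avg_demand_mw:.0f} MW ~= {plants:.1f} plants")
```

Which comes out to right about five plants' worth of continuous demand, consistent with the report's claim.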
Fascinating report by AMD (.pdf):
At present the electricity supplied to us is stepped down to lower voltage levels and then used by our computers.
Part of the problem is that the town the old facility is in cannot supply more electricity. This one really made me think.
In our new co-lo facility, each of our racks has a dedicated 40 amp circuit but we cannot fill them, even though all our servers have low-voltage CPUs. One of the main selling points of the provider was that the building is on an axis of two entirely independent power grids and gets redundant power from both.
My wife and I are rabid environmentalists. We use CF bulbs, drive hybrid cars (or ride our bikes), turn the heat down, the whole bit. But meanwhile, our little installation, one of hundreds like it in our data center, is drawing probably 10x the power our whole family uses in our house.
It makes one think. Massive amounts of electricity are used to power the servers, which generate heat. More massive amounts of power are used to cool the air, generating more heat. The heat is all pushed out into the atmosphere, even when the outdoor temperature is 10 degrees here in Boston.
Wouldn't it be nice if these data centers could find a way to turn some of that heat back into usable energy? Wouldn't it be nice if Intel continued to work to develop lower power-consuming CPUs? Wouldn't it be nice if server-class machines could go into standby mode?
This is an interesting problem indeed, and our industry is probably way, way down on the list of those dealing with this problem.
Air conditioners take the heat from inside the building and put it outside. They only generate extra heat insofar as they are not 100% efficient.
Server farms are big business for electric utilities. Their load profiles are pretty good, since they use electricity around the clock.
From a reliability standpoint you'd want a data center to be fed from two circuits. Ideally each circuit would come from a different substation.
If one circuit gets physically cut (from digging, it does happen), then you're fed from the second circuit. And if one substation goes down (extremely rare) then you're still fed from the second circuit.
I have to say that, with the backup power supply systems of today, I wouldn't worry so much about the reliability of your power company.
Most of these machines are running 24/7 at very low average loads.
Now imagine a data center with 10,000 servers. It's 3 AM and only 500 of them are physically turned on, because load is very low at this hour. At 5PM there are 7,000 servers turned on.
Users don't know or care which server (or servers) they are using, as their applications hop seamlessly from server to server. They pay for bandwidth and computing power actually used, and when their needs grow they don't have to install new equipment or migrate to a different server.
If such a scheme were implemented widely, we could probably save 50% or more of this power cost.
Yes, today's servers reduce power usage at low loads, but only to a degree. There's still quite a power requirement just to have the computer sitting there doing nothing. Why not just turn it off completely when there isn't a need?
And while virtualization is used, we need to move beyond virtualization to elastic computing, where applications are not tied to a particular server. That way, each server can be loaded to an optimal level from a power-efficiency standpoint. Most VMs today probably have (at least) as low an average load level as most dedicated servers.
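The scheduling idea above can be sketched in a few lines (function and parameter names are illustrative, not any real orchestration API): given the current load, decide how many servers to keep powered so each one runs near an efficient target utilization, with a little headroom for spikes while the next machines boot.

```python
import math

def servers_needed(current_load, per_server_capacity, target_util=0.7, headroom=1.1):
    """How many servers to keep powered on so each runs near target_util.

    current_load and per_server_capacity can be in any consistent unit
    (requests/sec, CPU-seconds, etc.).  headroom keeps spare capacity
    available for sudden spikes while additional machines power up.
    """
    effective_capacity = per_server_capacity * target_util
    return max(1, math.ceil(current_load * headroom / effective_capacity))

# 3 AM: light load -- most of the 10,000 machines could be off.
print(servers_needed(current_load=50_000, per_server_capacity=150))   # ~524 on

# 5 PM: peak load -- power most of them back up.
print(servers_needed(current_load=700_000, per_server_capacity=150))  # ~7334 on
```

With illustrative numbers like these you land close to the 500-at-night / 7,000-at-peak split described above, and every powered-off machine draws essentially nothing instead of its idle floor.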
I wonder if there's also a case for running power into racks on 5 V and 12 V rails - e.g., taking the power supply duty outside of the datacentre (on the roof, possibly, for better cooling)?
Actually, that is being done in some cases. There are problems distributing DC throughout a facility, though, due to the much higher current requirement at the lower voltages. So, a new alternative that is gaining momentum is rack-level power supplies. One power supply per rack, distributing DC through the rack.
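The current problem is just Ohm's-law arithmetic: for the same power, current scales inversely with voltage, and resistive loss in the cabling scales with the square of the current. A quick illustration (the rack wattage and cable resistance are made-up example values):

```python
def amps(power_w, voltage_v):
    """Current drawn for a given power at a given distribution voltage."""
    return power_w / voltage_v

def cable_loss_w(power_w, voltage_v, cable_resistance_ohm):
    """Resistive loss in the distribution cabling: P_loss = I^2 * R."""
    i = amps(power_w, voltage_v)
    return i * i * cable_resistance_ohm

rack_w = 4000  # watts for one rack of servers (illustrative figure)
r_ohm = 0.01   # ohms of distribution cabling (made-up illustrative value)

for v in (208, 48, 12):
    print(f"{v:>4} V: {amps(rack_w, v):7.1f} A, "
          f"{cable_loss_w(rack_w, v, r_ohm):7.1f} W lost in cabling")
```

At 12 V the same rack pulls over 300 A, and the I²R loss in that example cable run goes from a few watts to over a kilowatt - which is why low-voltage DC distribution only makes sense over the short runs inside a single rack.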
Cisco has had this as an available option for routers for years, BTW. Better to have two redundant power supplies serving a rack full of routers than to have each router have a single non-redundant power supply.
Now, some server and blade manufacturers are taking the same approach. These sets generally mount the power supply on top of the rack, for better heat exhaust.
This is not a huge win power-wise, but it does make a difference (perhaps 10%). And it can save on equipment cost.
Does anyone know of a good resource for estimating power consumption for individual servers based on hardware configuration?
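I don't have a pointer to a definitive resource, but here's the kind of back-of-envelope estimate most online calculators do (all component wattages and the efficiency figure below are illustrative assumptions, not vendor specs; measuring with a clamp meter or a metered PDU is far more accurate, since nameplate and component ratings overstate typical draw):

```python
# Back-of-envelope server power estimate from component ratings.
# Every wattage here is an illustrative assumption, not a vendor spec.
components = {
    "cpu (x2, low-voltage)": 2 * 40,
    "ram (8 DIMMs)":         8 * 5,
    "disks (4 SATA)":        4 * 10,
    "motherboard + fans":    45,
}

dc_load = sum(components.values())
psu_efficiency = 0.80            # assumed typical mid-2000s power supply
wall_draw = dc_load / psu_efficiency

print(f"DC load: {dc_load} W")
print(f"At wall: {wall_draw:.0f} W (assuming {psu_efficiency:.0%} PSU efficiency)")
```

Note the division by PSU efficiency: the wall draw (which is what your 40 A circuit and your cooling have to handle) is always noticeably higher than the sum of the component loads.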
Simple solution.....Do away with color televisions, they are a huge waste of resources.....mainly consumed with Ads for nonsense products!
We should ban all commercial TV stations, only subscriber stations and PBS should remain!
It will be years before this actually happens......but, in the meantime join my limited revolt and simply refuse to buy any product that advertises on commercial stations!
If I see you advertise I will never purchase from you! If enough follow suit we will end up in a better world :)