Google Inc. will build a $600 million data center in western Iowa, Gov. Chet Culver announced Tuesday.
The online search leader has already begun construction on a 55-acre site in Council Bluffs, across the Missouri River from Omaha, Neb. The center is expected to create about 200 jobs with an average salary of about $50,000.
"Google's decision to make Iowa the home of their newest server farm will have a tremendous impact on Council Bluffs, western Iowa and our entire state," Culver said.
Google To Build $600 Million Datacenter in Iowa [businessweek.com]
In an effort aimed at attracting Google, Iowa lawmakers last session passed a bill exempting electricity and capital investment by computer-related businesses from sales tax.
Most of Google's latest DC announcements come with heavy tax breaks and other incentives from local governments. There's no big mystery to unravel here; it's about being cost-effective.
A datacenter in the middle of the country gives you a failover datacenter away from hurricanes, floods, earthquakes, and most tornadoes (only 8 in 50 years there). And the most important question for me would be: what bandwidth uplinks do they have?
Further, I think bandwidth needs, already exploding with online video and online applications, will keep growing at an extremely high speed for the next 15-20 years. From there you can serve the whole Midwest, including every big city west of Cleveland and everything east of Utah. Think of QoS services... phone, video, applications that need fast ping times, where every mile of cable counts!
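The "every mile counts" point is just physics: light in fiber travels at roughly two-thirds of c, about 200 km per millisecond one way. Here's a back-of-the-envelope sketch; the city distances are rough great-circle figures I've assumed for illustration (real fiber routes are longer), so treat the results as lower bounds on RTT.

```python
# Light in glass travels at roughly 2/3 of c: about 200 km per ms one way.
FIBER_KM_PER_MS = 200.0

def min_rtt_ms(distance_km: float) -> float:
    """Best-case round-trip time over fiber, ignoring routing and equipment delay."""
    return 2 * distance_km / FIBER_KM_PER_MS

# Approximate straight-line distances from Council Bluffs, Iowa (illustrative guesses):
cities_km = {"Chicago": 700, "Denver": 780, "Dallas": 910, "New York": 1820}
for city, km in sorted(cities_km.items(), key=lambda kv: kv[1]):
    print(f"{city}: >= {min_rtt_ms(km):.1f} ms RTT")
```

Even to New York, the physics floor is under 20 ms round trip from Iowa; real-world routing and equipment add on top of that.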
Even the datacenter in Zurich was one of the smartest locations in Europe I've seen so far for a new setup. Most would go to the Amsterdam hub for the low prices; not Google, they went for the speed of huge empty fiber capacity, covering the whole continent with two very smart locations...
Whoever is planning the network over at Google knows exactly what he wants and where the big pipes with the shortest packet round-trip times are! I get scared when I look too deeply into it :-) ... Google's network seems to be optimized for speed without compromise!
wasn't the beauty of the internet the notion you didn't have to localize so much?
You don't *have* to, especially if you are a little guy.
The big guys almost all use services such as Akamai - and/or their own internal networks - to move much of their content to the "edges" of the network. This started years ago with simple caching of web pages, but with so much on the web today being database-dependent, it now means deploying full servers to the edge.
I suppose one of the flaws of the web and Internet today is that it is only the transport mechanism that is decentralized. As well, the web doesn't have a defined store-and-forward model, when, really, much of it does lend itself to that. So, big sites and networks have layered-on proxies, transparent proxies, deployment of servers to the edge, etc.
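The layered-on-proxies idea boils down to something simple: an edge node answers from a local cache when it can and pays the long-haul trip to the origin only on a miss. A toy sketch, assuming nothing about any real CDN's API (the names `EdgeCache` and `fetch_origin` are made up here):

```python
import time

class EdgeCache:
    """Toy edge node: serve locally when possible, fall back to the origin."""

    def __init__(self, fetch_origin, ttl_seconds=60):
        self.fetch_origin = fetch_origin   # expensive call back to the central servers
        self.ttl = ttl_seconds
        self.store = {}                    # url -> (content, expiry_time)

    def get(self, url):
        hit = self.store.get(url)
        if hit and hit[1] > time.time():
            return hit[0]                  # served from the edge, no long haul
        content = self.fetch_origin(url)   # cache miss: one trip to the origin
        self.store[url] = (content, time.time() + self.ttl)
        return content

calls = []
def origin(url):
    calls.append(url)
    return f"<html>{url}</html>"

edge = EdgeCache(origin)
edge.get("/index")
edge.get("/index")                         # second request never leaves the edge
print(len(calls))                          # prints 1: origin contacted only once
```

The hard part the post alludes to is exactly what this toy ignores: database-backed pages can't just sit behind a TTL, which is why "the edge" now means full servers rather than page caches.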
Now, fast forward. Shouldn't as much computing as possible be deployed to the nearest/most economical node? (There is, of course, a tradeoff between the two - the nearest may not be the most economical.)
Imagine Amazon's EC2 (Elastic Compute Cloud), but with the servers geographically dispersed (which may already be the case?).
The problems of standardization and fair payment for resource usage seem mind-boggling, however. From a technical standpoint, though, I think this is where things have to go. For a non-public network (say, DOD), though, I'll bet this is a lot closer than anybody thinks.
Anyway, back to today - this makes a lot of sense for Google. They already have datacenters near major population centers. Now they are adding one near the geographic center of the U.S. to pick up the rest and serve as a backup.
They've located in a place where there probably isn't already great connectivity because they can - they are Google!
Some of us (myself and WebmasterWorld, for example) see the wisdom of hosting in east Texas (Dallas/Ft. Worth to be specific). It's our Iowa, for now. Not quite so central, but has existing fat pipes. Keep an eye on Iowa, though, as Google will bring the connectivity that will bring others to host in the area.
The only thing I can think of is new services. Perhaps as YouTube blows up and they roll out other services, they may start competing with local carrier services such as TV, telephone, and WiFi. Locality would then matter for best performance, since they would probably own the "last mile," where they can control QoS, routing, and management (where it makes sense).
No way would "edge" systems in the typical sense cost $600 million, and if you were building what we call "edge" systems, you would build them in or near high-quality peering points to begin with.
As far as power goes, is Iowa really the mecca of cheap/clean power, with multiple power sources?
I may be biased because of my past involvement in the leading edge of reducing latency in stock trading. Physics is physics, there's no "may" about it. There's now a HUGE move toward colocation in the stock-trading industry. (i.e. colocation of broker/dealer servers directly at exchange data centers).
Today you could not do algorithmic trading successfully with a server in Los Angeles, or even Chicago. It was done in the past, but not today. You will need to be in the NYSE and NASDAQ data centers, or at least in NY/NJ with good fiber connectivity to the exchanges and ECNs.
It strikes me that hosted services have the same latency problem as stock trading, though perhaps not to the same degree.
Imagine a hard drive with 100 ms latency. That's what you have, worst case, running a hosted app against a data center on one coast. A data center on each coast gets it down to 50 ms max. Put one in the middle, and it's 25 ms.
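That halving can be made explicit with a crude model: treat the U.S. as a line segment where the worst-case latency is the farthest user's distance to the nearest data center. The 100 ms coast-to-coast figure follows the post, not a measured RTT.

```python
def worst_case_ms(dc_positions, coast_to_coast_ms=100.0, samples=1001):
    """Max over users (points on [0, coast_to_coast_ms]) of latency to the nearest DC."""
    worst = 0.0
    for i in range(samples):
        user = coast_to_coast_ms * i / (samples - 1)
        nearest = min(abs(user - dc) for dc in dc_positions)
        worst = max(worst, nearest)
    return worst

print(worst_case_ms([0]))            # one coastal DC    -> 100.0
print(worst_case_ms([0, 100]))       # one on each coast ->  50.0
print(worst_case_ms([0, 50, 100]))   # add the middle    ->  25.0
```

Each added data center roughly halves the worst case, with the middle location (Iowa, in this thread) picking up the users farthest from either coast.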
Power is a HUGE problem - I have to assume they could not find the power near existing peering points, so they will have to create their own. Just read an article on slashdot about the NSA doing "rolling blackouts" within their data center (groups of servers having regular, scheduled outages), because they can't put their hands on sufficient power.