
Google Aims to Double Web Speed with SPDY Networking Protocol

     
8:02 pm on Nov 15, 2009 (gmt 0)

Senior Member

WebmasterWorld Senior Member tedster is a WebmasterWorld Top Contributor of All Time 10+ Year Member

joined:May 26, 2000
posts:37301
votes: 0


In a Chromium Blog [blog.chromium.org] article last week, Google shared information about SPDY - a research project that aims to make the web a LOT faster, and more secure at the same time.

One technical goal is to make SSL the underlying protocol for all browsing. This would improve security and still offer compatibility with existing network infrastructure. In order to overcome the added latency from SSL, the added SPDY layer reworks the way that concurrent interleaved streams flow over a single TCP connection.
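
To make the "concurrent interleaved streams" idea a bit more concrete, here is a rough Python sketch of the framing concept. The field layout and sizes below are made up purely for illustration - this is not the actual SPDY wire format, which is described in the white paper.

# Toy illustration of stream multiplexing (NOT the real SPDY framing).
# Each frame is tagged with a stream ID, so several requests can share
# one TCP connection instead of opening a connection per resource.
import struct

def pack_frame(stream_id, payload):
    # 4-byte stream ID + 4-byte payload length, then the payload itself
    return struct.pack("!II", stream_id, len(payload)) + payload

def unpack_frames(data):
    frames = []
    offset = 0
    while offset + 8 <= len(data):
        stream_id, length = struct.unpack_from("!II", data, offset)
        offset += 8
        frames.append((stream_id, data[offset:offset + length]))
        offset += length
    return frames

# Three requests interleaved on one "connection":
wire = (pack_frame(1, b"GET /index.html") +
        pack_frame(3, b"GET /style.css") +
        pack_frame(5, b"GET /logo.png"))

for stream_id, payload in unpack_frames(wire):
    print(stream_id, payload)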

Here's the white paper:
[sites.google.com...]

8:19 pm on Nov 15, 2009 (gmt 0)

Administrator from US 

WebmasterWorld Administrator incredibill is a WebmasterWorld Top Contributor of All Time 10+ Year Member Top Contributors Of The Month

joined:Jan 25, 2005
posts:14650
votes: 94


Finally, something truly viable to speed up browsing, unlike that failed "web accelerator" nonsense.

The only problem I see with SPDY is that it'll hammer servers for resources faster than the current browser protocols, so servers already operating near capacity will be easily overloaded and need more hardware.

12:40 am on Nov 16, 2009 (gmt 0)

Full Member

10+ Year Member

joined:Aug 26, 2003
posts:311
votes: 0


The only problem I see with SPDY is that it'll hammer servers for resources faster than the current browser protocols, so servers already operating near capacity will be easily overloaded and need more hardware.

And you think that is a bad thing? Might deter some of the MFA/spammers/cheapie affiliates from ruining the web as we once knew it.

1:21 am on Nov 16, 2009 (gmt 0)

Preferred Member

10+ Year Member

joined:Mar 10, 2004
posts: 398
votes: 13


I don't think hardware resources will be an issue. Given the stable nature of web protocols and their installation on literally millions of systems, I doubt we'll see any browsers without backwards compatibility with existing protocols for a long, long time.

Those with the extra capacity will adopt it (maybe); those without won't.

Also, depending on the nature of the resource usage, something like this can sometimes actually decrease resource consumption. For example, if the limiting factor is RAM and/or the maximum number of available TCP connections, any protocol which transmits the page faster will free up connections sooner and reduce RAM usage.
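
A rough back-of-the-envelope version of that argument, with numbers invented purely for illustration:

# Concurrent connections ~= request rate x connection lifetime.
# All figures here are made up just to show the shape of the tradeoff.
requests_per_second = 200
ram_per_connection_kb = 256

for lifetime_s in (2.0, 1.0):  # page delivered in 2s vs. 1s
    concurrent = requests_per_second * lifetime_s
    ram_mb = concurrent * ram_per_connection_kb / 1024.0
    print("lifetime %.1fs -> ~%d concurrent connections, ~%d MB tied up"
          % (lifetime_s, concurrent, ram_mb))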

A good example is gzip/deflate compression. Technically it uses more resources, but I've had high-load systems which saw a decrease: more connections available/idle during peak times and, as a result, more RAM available for disk caching. Compression made these systems faster, especially since they had many pages/scripts which could easily swamp a small disk cache.
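
For anyone who hasn't measured it, a quick sketch of how much a chunk of repetitive HTML shrinks under gzip (the sample markup is invented; real pages will vary):

# Quick look at gzip's effect on repetitive HTML markup.
import gzip

html = ("<html><head><title>Example</title></head><body>"
        + "<div class='row'><p>Some repetitive markup</p></div>" * 200
        + "</body></html>").encode("utf-8")

compressed = gzip.compress(html, compresslevel=6)
saving = 100.0 * (1 - float(len(compressed)) / len(html))
print("original:   %d bytes" % len(html))
print("compressed: %d bytes (about %.0f%% smaller)" % (len(compressed), saving))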

6:04 pm on Nov 16, 2009 (gmt 0)

Senior Member

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month

joined:Jan 30, 2006
posts:1669
votes: 10


Do we really need fair, open, and honest Google developing internet protocols which move INFORMATION around?

6:29 pm on Nov 16, 2009 (gmt 0)

Preferred Member

10+ Year Member

joined:Mar 10, 2004
posts:398
votes: 13


The protocol and specifications are public. Google is not forcing this on anyone, and people are free to make their own implementation. Sometimes the mud-slinging for the sake of mud-slinging gets to be a little like wearing a tin foil hat.

6:46 pm on Nov 16, 2009 (gmt 0)

Senior Member from US 

WebmasterWorld Senior Member fotiman is a WebmasterWorld Top Contributor of All Time 10+ Year Member Top Contributors Of The Month

joined:Oct 17, 2005
posts:4988
votes: 12


@J_RaD
I agree with mororhaven... mud-slinging for the sake of mud-slinging is not productive.
If company <fill-in-the-blank> is able to develop a better specification/standard than what is available today and that specification is open, then it shouldn't matter what company it is.

6:53 pm on Nov 16, 2009 (gmt 0)

Preferred Member

10+ Year Member

joined:Feb 25, 2003
posts: 418
votes: 0


And you think that is a bad thing? Might deter some of the MFA/spammers/cheapie affiliates from ruining the web as we once knew it.

Wishful thinking.

6:59 pm on Nov 16, 2009 (gmt 0)

Preferred Member

10+ Year Member

joined:Mar 10, 2004
posts:398
votes: 13


Java
.Net
C#
rel="nofollow"
Ethernet
USB
PCI

All are examples of technologies developed by private companies that have become universal standards. (Though non-Microsoft .Net implementations are still weak.)

2:30 am on Nov 17, 2009 (gmt 0)

Senior Member

WebmasterWorld Senior Member 10+ Year Member

joined:Feb 20, 2002
posts:812
votes: 1


Do we really need fair, open, and honest Google developing internet protocols which move INFORMATION around?

Well, Google certainly do have sufficient research funding.

12:43 pm on Nov 18, 2009 (gmt 0)

Senior Member

WebmasterWorld Senior Member zeus is a WebmasterWorld Top Contributor of All Time 10+ Year Member

joined:Apr 28, 2002
posts:3444
votes: 1


Now, I'm no server dude, but will Google have a hand in this new technology when it's done? I mean, will it be something like a Google service, or is it just something new where Google doesn't have any rights to it, like open source?

7:20 pm on Nov 18, 2009 (gmt 0)

Preferred Member

5+ Year Member

joined:Nov 20, 2007
posts:585
votes: 0


gzip has been around for years, delivering the kind of speed gains that SPDY is claiming. HTML is so uncompressed and HTTP is so old that I'm sure we can achieve 200% speed increases quite easily.

My only concern is that any new protocol should be kept really simple, so that small devices like phones can easily process it.
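
For anyone who wants to check whether their own server already serves gzip when the client asks for it, a quick sketch (the URL below is just a placeholder; substitute your own site):

# Ask for gzip and report what Content-Encoding the server actually sends.
# The URL is a placeholder; swap in a page you want to test.
import urllib.request

req = urllib.request.Request("http://www.example.com/",
                             headers={"Accept-Encoding": "gzip, deflate"})
with urllib.request.urlopen(req) as resp:
    print("Content-Encoding:", resp.headers.get("Content-Encoding", "(none)"))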