Home / Forums Index / Google / Google SEO News and Discussion
Forum Library, Charter, Moderators: Robert Charlton & aakk9999 & brotherhood of lan & goodroi

Google SEO News and Discussion Forum

This 713 message thread spans 24 pages; this is page 17.
302 Redirects continues to be an issue

 6:23 pm on Feb 27, 2005 (gmt 0)


It is now 100% certain that any site can destroy a low- to mid-range PageRank site by getting Googlebot to snap up a 302 redirect, served by a script (PHP, ASP, CGI, etc.) and backed by an unseen, randomly generated meta-refresh page pointing at the unsuspecting site. In many cases the encroaching site actually writes your website's URL, behind a 302 redirect, inside its own server. This is a flagrant violation of copyright and a manipulation of search-engine robots, geared to exploit and destroy websites and to artificially inflate the ranking of the offending sites.

Many unethical webmasters and site owners are already creating thousands of templated, ready-to-go "skyscraper" sites fed from the immense databases of affiliate companies. Those companies, which hold your website's details in their databases, feed snippets of your pages, without your permission, to vast numbers of the skyscraper sites. A carefully tuned PHP redirection script then goes to work: it issues a 302 redirect to your site and includes an affiliate click-counter. The really sneaky part is the randomly generated meta-refresh page, which can only be detected with a good header-interrogation tool.
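The "header interrogation" mentioned above just means looking at the raw status line, headers and body that the redirect script serves. A minimal sketch in Python; the sample response below is made up for illustration:

```python
import re

def inspect_response(status_line, headers, body):
    """Classify a raw HTTP response: is it a 302, where does it
    point, and does the body carry the hard-to-spot meta refresh?"""
    status = int(status_line.split()[1])
    meta = re.search(r'<meta[^>]+http-equiv=["\']?refresh["\']?', body,
                     re.IGNORECASE)
    return {
        "is_302": status == 302,
        "location": headers.get("Location"),
        "has_meta_refresh": meta is not None,
    }

# A made-up example of what such a redirect script might return.
report = inspect_response(
    "HTTP/1.1 302 Found",
    {"Location": "http://www.example.com/"},
    '<meta http-equiv="refresh" content="0;url=http://www.example.com/">',
)
print(report)
```

Any tool that shows you the raw status line and Location header will do the same job; the point is simply not to let your client follow the redirect before you have seen it.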

Googlebot and MSNBot follow these PHP scripts either to an internal sub-domain containing the 302 redirect or to a server-side redirect, and bang: down goes your site if its PageRank is below the offending site's. Your index page is crippled, because Googlebot and MSNBot now consider your home page, at best, a supplemental page of the offending site. The offending site's URL, which contains your URL, is indexed as belonging to the offending site. The offender knows that Google does not reveal all links pointing to your site and takes a couple of months to update, so an inurl:yoursite.com search will not help you trace it for a long time. Note that these scripts mostly apply your URL stripped, or without the www, making detection harder. This also causes Googlebot to generate another URL listing for your site, which can be seen as duplicate content. A 301 redirect resolves at least the short-URL problem, saving Google from deciding which of your site's two URLs to index higher (usually the one with the higher-linked PageRank).

Your only hope is that your PageRank is higher than the offending site's. Even that is no guarantee, because the offending site will have targeted many higher-PageRank sites within its system, on the off chance that it strips at least one of them. This is reinforced by hundreds of other hidden 301 (permanent) redirects to PageRank 7 or higher sites, again in the hope of stripping a high-PageRank site, which would then let its scripts hijack more efficiently. Sadly, supposedly ethical big-name affiliates are involved in this scam; they know it is going on, and Google AdWords is probably the main source of revenue. Though I am sure Google does not approve of its AdSense program being used in such a manner.

Many such offending sites have no e-mail contact, a hidden WHOIS record and no telephone number. Even if you were to contact them, you will find in most cases that the owner or webmaster cannot remove your links from their site, because the feeds come from affiliate databases.

There is no point in contacting Google or MSN, because this problem has been around for at least nine months; only now is it escalating at an alarming rate. All sites of PageRank 5 or below are susceptible; if your site is a 3 or 4, be very alarmed. A skyscraper site need only create child-page linking to reach PageRank 4 or 5, without needing to strip other sites.

Caution: trying to exclude these scripts via robots.txt will not help, because they change almost daily.

Trying to remove through Google a link that looks like
new.searc**verywhere.co.uk/goto.php?path=yoursite.com%2F will result in your entire website being removed from Google's index for an indefinite period, at least 90 days, and you cannot get re-indexed within that time.

I am working on an automated "302 rebound" script to trace and counteract an offending site. The script will spider and detect all pages, including sub-domains, within an offending site and blast every one of its pages, including dynamic pages, with a 302 or 301 redirect. Hopefully it will detect the feeding database and blast it with as many 302 redirects as it contains URLs: in essence a program in perpetual motion, creating millions of 302 redirects for as long as it stays on. Since every page is a unique URL, the script should keep generating requests against any site that serves dynamically generated pages through PHP, ASP or CGI redirecting scripts. A skyscraper site that is fed this way can have its server totally occupied by a single efficient spider that requests pages every split second, all day, all week.

If the repeatedly spidered site is depleted of its bandwidth, it may then be possible to remove it via Google's URL removal tool. You only need a few seconds of a 404 or 403 from the offending site for Google's URL console to detect what it needs: either the site or the damaging link.

I hope I have been informative and of help to anybody whose hijacked site has had its natural revenue unfairly treated. Note also that your site may never regain its rank, even after the offending links are removed. Talking to offending site owners usually results in denial: they say they are only counting outbound clicks, and they seem reluctant to remove your links... yeah, pull the other one.

[edited by: Brett_Tabke at 9:49 pm (utc) on Mar. 16, 2005]



 12:00 pm on Mar 14, 2005 (gmt 0)

How about a robots.txt directive for sites that want to allow 302s?

That would be great if you could convince the hijackers to do so.

Wouldn't it be ironic if you could publish sensitive data of another web site simply by bypassing the robots.txt with a 302..?!

Reality or fiction?


 12:58 pm on Mar 14, 2005 (gmt 0)

A robots.txt directive would work - I think I suggested it myself in a previous mega-thread (or perhaps I just thought it).

The trick is that the 302 is ignored unless accepted by a directive in the robots.txt file of the target domain.
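No such directive exists in the robots.txt standard, then or now; purely as a sketch, the proposal might look something like this (directive name and hosts invented for illustration):

```
# Hypothetical -- not part of any robots.txt standard.
User-agent: *
Accept-302: trusted-tracker.example.com   # honour cross-domain 302s from this host only
Accept-302: none                          # or: refuse all cross-domain 302s
```

The hard part would be choosing a safe default for the overwhelming majority of sites that would never add such a line.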



 1:33 pm on Mar 14, 2005 (gmt 0)

It is very possible to find examples involving big name brand domains in the SERPs (where a page on the big name brand domain has the wrong URL listed). Just don't post them here, though; it would be against the TOS (and remember that "the hijacker" probably doesn't even know about this, and certainly doesn't have malicious intent).

>> robots.txt

There's a problem here with vanity domains, parked domains and such. Anyway, the biggest problem is something else: if you don't know about this, you have implicitly said "yes", and afaik the majority out there just don't know about this and never will.


 1:54 pm on Mar 14, 2005 (gmt 0)

The trick is that the 302 is ignored unless accepted by a directive in the robots.txt file of the target domain

kaled, please explain how this would work.
I presume it means Google would have to implement a change in the way their bot gathers information.

To me it looks like Google can't figure out where the target really is, at least in my case.
The redirect string is so long that it gets 'forgotten', and Google simply attributes the content of the target frame to the hijacker's dynamic page.

I posted an example of the script at the beginning of the "Lost in Google" thread, which got edited out (despite the fact that I didn't mention any specific domains); I can sticky you the source code to show you what I mean.


 4:46 pm on Mar 14, 2005 (gmt 0)

Yes, Google would have to implement a change in their system. However, it would allow for the legitimate use of 302 redirects, etc. Ultimately, all other solutions will require such redirects to be ignored entirely (by search engines) or ignored when they cross domains.
Treating redirects as simple links should be OK.

Personally, I can see no problem with indexing the urls that actually deliver the content - to hell with redirects. However, I appreciate others may disagree with this sentiment.



 5:26 pm on Mar 14, 2005 (gmt 0)

Maybe we should distinguish more clearly between cause, effect and method.
Treating the method (the 302) as a black box, defined in a footnote, helps keep the focus.

*Google has a method to avoid duplicate content in the SERPs, in itself a commendable objective.
*If Google has to choose between two apparently identical sites, it chooses the one with the higher PageRank and pushes the other, "duplicate" site out of, or way down, the SERPs.
*PageRank is a Google method for determining the highest relevance in the SERPs.
*Webmasters understand well that they should not submit duplicate content to Google, or face the consequences.

The problem:
Google's duplicate-content filter can make an unfair choice about which of two duplicate sites is the more relevant.
The unfairness arises when someone other than the webmaster, termed a hijacker, uses 302 redirects to the webmaster's website to create duplicate content in Google, which triggers Google to select the hijacker's website and push the webmaster's website down or out of the Google SERPs.

Problem explanation:
The terms "unfair" and "hijacker" are used above because this is completely beyond the webmaster's control, and Google's methods are allowing it to happen.

The legitimate 302 method used by hijackers works like this........bla bla......


 5:54 pm on Mar 14, 2005 (gmt 0)

*If Google has to choose between two apparently identical sites, it chooses the one with the higher PageRank and pushes the other, "duplicate" site out of, or way down, the SERPs.
*PageRank is a Google method for determining the highest relevance in the SERPs.

A page of mine that was hijacked was/is a PR7, while the offending URLs were PR2 through PR6. The PageRank system is NOT without flaw, nor is it used to weed out duplicates (from my observations).



 6:18 pm on Mar 14, 2005 (gmt 0)


Either way, the point is that the duplicate-content filter "lets the best page win", and the hijacker's strategy depends on its page winning.
If only we fully understood how that duplicate filter works... but that's part of the black box...


 6:35 pm on Mar 14, 2005 (gmt 0)

I'm on crobb's side here; it has absolutely nothing to do with PR. If a 302 link to you has been created and Google has treated it as a site (a Google bug), then one site slowly loses its PR to 0 and the other sometimes gets a higher PR. My site got its PR back about a month or two ago, with still NO change in the hijacking situation, and the hijacker is still having a good time.


 6:49 pm on Mar 14, 2005 (gmt 0)

i'm seeing a hijacked site back in google.com

and gone again.

[edited by: stargeek at 6:51 pm (utc) on Mar. 14, 2005]


 6:49 pm on Mar 14, 2005 (gmt 0)

Granted, we all know that high PR does not necessarily mean the highest rank in the SERPs.
Whatever method Google uses to select the winner between duplicates, that is where hijackers succeed.


 7:18 pm on Mar 14, 2005 (gmt 0)

hijacked websites, back in.

seems like an update or something is a brewing.


 7:23 pm on Mar 14, 2005 (gmt 0)

The only reliable way to determine which of two duplicates is the legitimate (or preferred) page is to compare indexation dates. If page a.html was first indexed in Jan 2001, it should be preferred to the page aa.html first indexed in March 2004, and so on.

However, Google does not seem to store this information, and so is in all sorts of trouble as a result.
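The rule is simple enough to state in a few lines of Python; a toy sketch, assuming a stored first-indexed date per URL (which, as noted, Google apparently does not keep):

```python
def preferred_page(first_indexed):
    """Of a set of duplicate URLs, prefer the one indexed earliest.
    'first_indexed' maps URL -> first-indexed date as an ISO-style
    string, which compares correctly in chronological order."""
    return min(first_indexed, key=first_indexed.get)

# The a.html / aa.html example from above.
dupes = {"a.html": "2001-01", "aa.html": "2004-03"}
print(preferred_page(dupes))
```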

For the record, I am less than convinced that duplicate content algos have any relevance to Googlejacking.



 7:26 pm on Mar 14, 2005 (gmt 0)

the point is the duplicate content filter "Lets the best page win"

Again, you are incorrect. The "best page" should always be the original author's. Scraper directory sites and hijacking URLs that come along and use my content should NOT outrank me.

I am not sure which side of the fence you are on. It sounds like you are arguing against the actions of the hijackers, yet you are claiming Google's method of removing "duplicate" content is correct/flawless. Above, you say that Google lets the "best page win". Are you saying the "best page" is the hijacker's URL? Clarify what you are arguing, and stop trying to kiss up to Google while simultaneously condemning the actions of the hijackers.


 7:33 pm on Mar 14, 2005 (gmt 0)

The only reliable way to determine which of two duplicates is the legitimate (or preferred) page is to compare indexation dates. If page a.html was first indexed in Jan 2001, it should be preferred to the page aa.html first indexed in March 2004, etc.

I like the idea others have proposed: the development of a redirect meta tag. If you do not want any other URL to redirect to your site and outrank you because of it, maybe a tag along the lines of

<meta name="redirection" content="noredirect">

would help Google determine which page is the intended original, and NOT allow anything redirecting to that page to be listed in front of it. If the author of the page sets up legitimate 302 redirects, they could set the tag to "redirect", which further authorizes the search engines to use any and all 302s accordingly.
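The check a crawler would have to make is trivial. A sketch in Python, with the caveat that this "redirection" meta tag is only the proposal above, not a real standard any engine supports:

```python
import re

def allows_redirect_listing(html):
    """Return False if the target page opts out of cross-site
    redirects via the proposed (hypothetical) meta tag; True if it
    opts in or carries no tag (today's default behaviour)."""
    m = re.search(
        r'<meta\s+name=["\']redirection["\']\s+content=["\']([a-z]+)["\']',
        html, re.IGNORECASE)
    return not (m and m.group(1).lower() == "noredirect")

print(allows_redirect_listing('<meta name="redirection" content="noredirect">'))
print(allows_redirect_listing("<html><body>no tag at all</body></html>"))
```

The weak spot is the same one raised for the robots.txt idea: sites that never add the tag get the default, so the default has to be chosen for them.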


 8:22 pm on Mar 14, 2005 (gmt 0)

hijacked websites, back in.


Are you sure you've been getting the results from the same DC every time?
Try different Google IPs.


 8:25 pm on Mar 14, 2005 (gmt 0)

Are you sure you've been getting the results always from the same dc.

What do you mean? I've checked lots of DCs and seen this index among others; I'm seeing the good one on google.com (akadns) right now, and it's been stuck for almost an hour.


 8:34 pm on Mar 14, 2005 (gmt 0)

Hello there everyone,

This is my very first post, which makes me a newb to this incredible community!

This post is meant for those of you who read through the first 47 pages of this thread. (Phew, that's as far as I got; sorry if I am repeating anything that has been mentioned within the past three pages.)

At some point in this discussion folks were pretty close to organizing the hijack of a volunteer site.

OK, everybody hold your horses: it has been done already. Even Google's own PR10 has been hijacked, so we definitely know they are aware of it.

[snip: URL of site that mirrors high PR pages by cloaking.]

The days of PR seem to be numbered. It's now worth ALMOST as much as an Alexa rank :)

Time 4 a smoke,


[edited by: ciml at 10:13 am (utc) on Mar. 15, 2005]
[edit reason] No specifics please. [/edit]


 8:57 pm on Mar 14, 2005 (gmt 0)

This hijacking is based on the same error, just not with the same effect as what most people are talking about in this thread.

Here, mostly, we're discussing a phenomenon where a site can be removed from the index because of a 302 hijacking and the duplicate-content filter.

But I hadn't seen that fake PR10; I wonder if they can do anything with it besides sell fake PR.


 9:31 pm on Mar 14, 2005 (gmt 0)


I certainly don't approve of hijackers, nor of Google methods that allow websites to be pushed lower in the SERPs because redirects by non-owners exploit a target website's title and content.
My side of the fence is the complaining webmaster-victim side, to spell it out for you.

It seems to me the critical moment where a hijack works is Google's choice of which of two duplicates wins on SERP rank.
I agree it is an oversimplification to say Google's criterion for the "best page" is PageRank; the criteria are more complex.

The problem of hijacking seems less prevalent with other SEs, which don't use Google's PageRank system.
Does anyone want to venture a theory on how other SEs treat duplicate content?

If Googlejacking has nothing to do with duplicate-content algos, then I seem to have misunderstood the whole issue, and I'd like to hear why hijackers bother with redirects to well-ranked websites ("absorbing" their title and content), and why victims (targeted websites) fall out of or down the SERPs when a hijacker is at work.

My argument is that 302 redirects are normal and useful and not at fault.
They are, however, the method hijackers use to wilfully generate duplicate content at Google, to trigger Google into making a choice advantageous to the hijacker.
I believe Google should revise its duplicate-content algo to counter duplicate content not initiated by the "real" website, which of course is the problem being discussed here from the technical side of the original 302 method used by the hijacker to create that duplicate content.

The problem is Google, not the internet protocols, as has been said quite a few times here.
If Google bots our websites (at our bandwidth cost, but hey, we want to be indexed!) in order to provide a service to internet surfers, it's only fair we get SERPs on merit (Google defines that for us, in its guidelines for webmasters, so we know where we stand).
Google is making the choice about duplicate content when it happens.
If duplicate content carrying OUR website's title and content is initiated not by us, the webmasters, but by someone else, it should NOT be considered, and certainly should not trigger SERP re-ranking as if we were spamming the Google SERPs. That is the issue.


 9:44 pm on Mar 14, 2005 (gmt 0)

Time to put to bed statements like "the best page", "highest PR"... all that stuff matters not at all in hijacking. That's the key part of the problem. If Google had any criterion by which the best page wins, then a single link could not kill all these root URLs.


 10:12 pm on Mar 14, 2005 (gmt 0)

Time to put to bed statements like the best page, highest PR... all that stuff matters not at all to hijacking.

Exactly. I see some high PageRank sites being victimized by the 302 hijacking. It is Google's problem to sort out, because most of us have done about all we can do.



 10:19 pm on Mar 14, 2005 (gmt 0)

the hijacked sites that i saw back in on google.com are out again.


 10:45 pm on Mar 14, 2005 (gmt 0)

--- If Google had any criteria where the best page wins... ---

They do! AdSense solutions will work great for everyone who loses their rankings. Just wait and see how many small business owners will start using other ways to get to the top of the pile. And the next one, and the next one, and the next one...

No FIX - NO CONTENT - very simple.

I am really disappointed.


 11:39 pm on Mar 14, 2005 (gmt 0)

Crobb 305
Post 494

Again, you are incorrect. The "best page" should be the original author, always. Scraper directory sites and hijacking urls that come along and use my content should NOT outrank me.


Crobb, for what it is worth: it doesn't seem to be working that way.

I use many 302s to some of my one-page sites. Here is what I have seen and can prove.

The page that ultimately wins is the domain that has the links going into it.

Example: Domain A has 6 or 8 good links going into it from various places on the net, but is not an active website, just a domain with old links. If I point A to Domain B, which has no links at all, Google indexes the content of B under Domain A and gives A a high rank because of the links.

It has worked that way all the time, ever since just after "Florida".



 11:52 pm on Mar 14, 2005 (gmt 0)

Are they (G) still denying that this problem exists?

I can only conclude that the fix for this problem must make the results worse than they are now, or they would have patched it up already. Either that, or it must be very difficult to make the algo interpret 302s as links.

Given how easy it is to exploit this bug, they need to get a move on (with a fix) or their business will be history within a couple of weeks.


 11:57 pm on Mar 14, 2005 (gmt 0)

Okay guys, here is one more attempt at tackling the page-jacking problem.
There is one downside, which is explained after the code. The script is in PHP.

Include the following code in all of your PHP front-end files.

// Pull the values the original snippet assumed were already set.
$time  = date("Y-m");
$ua    = isset($_SERVER['HTTP_USER_AGENT']) ? strtolower($_SERVER['HTTP_USER_AGENT']) : '';
$date1 = isset($_GET['date1']) ? $_GET['date1'] : '';
$url2  = $_SERVER['PHP_SELF'];

if (strpos($ua, 'googlebot') !== false) {
    if ($date1 != $time) {
        header("HTTP/1.1 302 Found");
        header("Location: http://www.yourdomain.com/redirect.php?url=" . urlencode($url2) . "&date=" . $time);
        header("Connection: close");
        exit;
    }
}
// your normal content goes here

Create redirect.php with this code:

$url  = isset($_GET['url'])  ? $_GET['url']  : '/';
$date = isset($_GET['date']) ? $_GET['date'] : '';
header("HTTP/1.1 301 Moved Permanently");
header("Location: http://www.yourdomain.com" . $url . "?date1=" . $date);
header("Connection: close");
exit;

The first script checks whether a variable called date1 is in the query string; here it holds the year and month. (You can use year, month and day by changing $time = date("Y-m"); to $time = date("Y-m-d");.) If it is present, the content is shown; otherwise the request is temporarily redirected to redirect.php, with the date and the requested URL passed via GET.

redirect.php's only job is to permanently redirect back to the referring page, adding the timestamp (here, year and month) to the URL.

1) Googlebot requests www.yourdomain.com/index.php
2) index.php redirects to redirect.php, like this: www.yourdomain.com/redirect.php?url=/index.php&date=2005-03
3) redirect.php takes the variable and permanently redirects back to your original page, changing its URL to http://www.yourdomain.com/index.php?date1=2005-03
4) Now, when your index.php file executes, $date1 == $time (at least for a month; if you want to change your home page every time Googlebot comes, you can include the day too), so your normal page is shown.

You can include the script in every frontend php file you have.

1) Your URL changes every month, though only in the eyes of Google, and in a manner that passes all your PR on to your new URL. (This is the downside mentioned above.)

2) You can forget about hijackers, as your URL is going to keep changing. Even if some SEO firm is determined to hijack you and sets up links from lots of sites targeting you within the month's gap, you can thwart their effort by changing it daily. And if they are so determined that they up the ante by targeting your site with tons of redirects for every one of your date ranges, it won't work, because it's not as if Googlebot visits them, or you, daily. That is why a month's time is enough. And instead of the timestamp you can use any random variable you fancy.

But are you prepared to have a URL that changes monthly?
And if you are worried about the dynamic URL, you can rewrite it to a static-looking one with mod_rewrite.
Lots of big sites have a URL a page long for their home page.

If your site is already hijacked, or in the process of being hijacked (if you have even noticed), what is the loss in trying?
If anyone can convert the code to another language, to benefit those who don't use PHP, please post it here.
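Taking up that invitation, here is a sketch of the same scheme as a framework-free Python function. The domain and the /redirect.php name mirror the hypothetical ones in the PHP above, and the response is returned as a (status, location) tuple rather than sent as headers:

```python
from datetime import date

def handle_request(path, query, user_agent, today):
    """Port of the PHP date-stamp scheme: Googlebot gets bounced
    through /redirect.php until the URL carries this month's stamp."""
    stamp = today.strftime("%Y-%m")
    if path == "/redirect.php":
        # Permanently redirect back, adding the date stamp (step 3).
        return (301, "http://www.yourdomain.com%s?date1=%s"
                     % (query.get("url", "/"), query.get("date", "")))
    if "googlebot" in user_agent.lower() and query.get("date1") != stamp:
        # Temporary hop to redirect.php with the stamp (step 2).
        return (302, "http://www.yourdomain.com/redirect.php?url=%s&date=%s"
                     % (path, stamp))
    return (200, None)  # serve the normal page (step 4)

# The chain Googlebot would see for /index.php in March 2005:
d = date(2005, 3, 15)
hop1 = handle_request("/index.php", {}, "Googlebot/2.1", d)
hop2 = handle_request("/redirect.php",
                      {"url": "/index.php", "date": "2005-03"},
                      "Googlebot/2.1", d)
hop3 = handle_request("/index.php", {"date1": "2005-03"},
                      "Googlebot/2.1", d)
print(hop1, hop2, hop3, sep="\n")
```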

[EDIT REASON]Forgot to close one of the quotes[/EDIT]


 12:25 am on Mar 15, 2005 (gmt 0)

I think that this is a solution. Not the best, but the best available so far.

1. No matter what script is used, Googlebot can detect server-side directives.

2. Within this environment, be it a 301, 302, 303, 305 (proxy) or 307, the bot must accept that a redirect has indeed been implemented.

3. The bot must ignore the LOCATION field.

4. The bot must take a snapshot of the generated CODE PAGE.

5. The generated CODE PAGE is indexed in Google as the final destination of the redirect.

6. The end user can click on the redirect if the user so wishes.

7. If a META REFRESH exists in the generated CODE PAGE, the bot must ignore it.

Simple, effective and a robust solution.

Japanese, the suggested solution is possible only if Google is willing to ignore lots of big sites which employ this for their home pages.

We all know Amazon. But did you know that amazon.com permanently redirects to here:

HTTP/1.1 301 Moved Permanently
Date: Tue, 15 Mar 2005 00:15:21 GMT
Server: Stronghold/2.4.2 Apache/1.3.6 C2NetEU/2412 (Unix) amarewrite/0.1 mod_fastcgi/2.2.12
Set-Cookie: skin=; domain=.amazon.com; path=/; expires=Wed, 01-Aug-01 12:00:00 GMT
Location: h*tp://www.amazon.com:80/exec/obidos/subst/home/home.html
Connection: close
Content-Type: text/plain

and h*tp://www.amazon.com:80/exec/obidos/subst/home/home.html temporarily redirects to another page, which changes every time the above URL is accessed.

HTTP/1.1 302
Date: Tue, 15 Mar 2005 00:17:51 GMT
Server: Stronghold/2.4.2 Apache/1.3.6 C2NetEU/2412 (Unix) amarewrite/0.1 mod_fastcgi/2.2.12
Set-Cookie: session-id-time=1111478400; path=/; domain=.amazon.com; expires=Tuesday, 22-Mar-2005 08:00:00 GMT
Set-Cookie: session-id=102-9184878-1332124; path=/; domain=.amazon.com; expires=Tuesday, 22-Mar-2005 08:00:00 GMT
Location: [amazon.com...]
Connection: close
Content-Type: text/html

Check out the time difference between the content above and below, and also the redirected location.

HTTP/1.1 302
Date: Tue, 15 Mar 2005 00:18:46 GMT
Server: Stronghold/2.4.2 Apache/1.3.6 C2NetEU/2412 (Unix) amarewrite/0.1 mod_fastcgi/2.2.12
Set-Cookie: session-id-time=1111478400; path=/; domain=.amazon.com; expires=Tuesday, 22-Mar-2005 08:00:00 GMT
Set-Cookie: session-id=103-3137957-4566215; path=/; domain=.amazon.com; expires=Tuesday, 22-Mar-2005 08:00:00 GMT
Location: [amazon.com...]
Connection: close
Content-Type: text/html

So if Google were to follow what you said, nobody would ever find Amazon. This kind of redirecting is very, very common; just check any of the top sites with more than 300k pages. 302 redirecting exists for exactly this reason.

I have also seen an Amazon product link hijacked, appearing in the SERPs with someone's affiliate code. My personal theory is that it is not about the site at all:
if PAGE A redirects to PAGE B, it is a fight between the PAGES and not the SITES, no matter what the page is.


 12:49 am on Mar 15, 2005 (gmt 0)

That am**on example is interesting, because it might explain why larger, well-known sites aren't being hijacked like some others. I had assumed that G had a whitelist of sorts whereby larger sites would be protected: if you didn't make the cut, you were out. I would like to believe that because larger sites like am**on use these internal redirect techniques, they are somehow immunized against the 302 hijack. Which *may* also be a clue as to why the 302 hijack exploit can exist at all: so that these better-known redirecting sites *can* be indexed in the first place. That would explain the difficulty in finding a fix. It would also lend weight to the folks who said they helped themselves by placing some active content on the page. For a site like mine, a hand-coded static site for a brick-and-mortar company with no affiliate plans or any other need for 302 redirects for tracking or active content, just a lot of well-placed text, it might explain a lot.

[edited by: idoc at 1:12 am (utc) on Mar. 15, 2005]


 1:07 am on Mar 15, 2005 (gmt 0)

Any update on publishing news about the hijacking, or on the emails sent?


 1:33 am on Mar 15, 2005 (gmt 0)


Good observation, but you are a bit late to point out the flaw in my suggestion; it was previously pointed out that other problems could arise.

However, are you suggesting that the average site give way to the demands of the big boys like am**on, and that what works for them is the most favoured option?

I think that at least I made a suggestion, and it was based on putting an end to the hijacking.

If you look at my post in detail, the loophole that lets Googlebot make an error is blocked, and the hijacker is stopped in his path.

Then Google can work on easier solutions to accommodate big sites' requirements.

If am**on wants to keep moving and redirecting its pages internally, then surely that is its problem and not the problem of hijacked websites.

Do you hear am**on complaining about the average website being hijacked? Could they care less?

My suggestion is a brick wall against hijacking, and it would work. Yes, it would have implications.

I would rather see implications than the hijacking of sites.

Are you aware of how vulnerable your site is?

PS: in defence of Claus, and I hope he does not mind: he actually meant that the legitimate site is normally worse off, and reading his post again I could not see anything in it resembling the double standard you suggested. I read it as him being on our side of the fence.

OK, I admit you certainly know your stuff, and I raise my hat in honour of that, but our frustration, and this gargantuan thread, is really all about how Google is handling the "302 Found".

Can you let us know if you have read this thread from top to bottom?

WebmasterWorld is a Developer Shed Community owned by Jim Boykin.
© Webmaster World 1996-2014 all rights reserved