Google SEO News and Discussion Forum

Adam Lasnik on Duplicate Content
tedster




msg:3192969
 6:06 am on Dec 19, 2006 (gmt 0)

Google's Adam Lasnik has made a clarifying post about duplicate content on the official Google Webmaster blog [googlewebmastercentral.blogspot.com].

He zeroes in on a few specific areas that may be very helpful for those who suspect they have muddied the waters a bit for Google. Two of them caught my eye as being more clearly expressed than I'd ever seen in a Google communication before: boilerplate repetition, and stubs.

Minimize boilerplate repetition:
For instance, instead of including lengthy copyright text on the bottom of every page, include a very brief summary and then link to a page with more details.

If you think about this a bit, you may find that it applies to other areas of your site well beyond copyright notices. How about legal disclaimers, taglines, standard size/color/etc information about many products, and so on. I can see how "boilerplate repetition" might easily soften the kind of sharp, distinct relevance signals that you might prefer to show about different URLs.
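A trivial sketch of Adam's suggestion (hypothetical names, paths and text; just an illustration, not anyone's production code): keep one canonical page with the full legal text and emit only a one-line footer everywhere else.

# Sketch only: one canonical page carries the full legal text; every other
# page gets a brief summary that links to it. Names and paths are made up.

FULL_LEGAL_PAGE = "/legal/copyright.html"

def footer_html(year=2006):
    """A one-line footer instead of pasting the full notice on every page."""
    return ('<p>&copy; %d Example Widgets. '
            '<a href="%s">Full copyright and disclaimer</a>.</p>'
            % (year, FULL_LEGAL_PAGE))

print(footer_html())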

Avoid publishing stubs:
Users don't like seeing "empty" pages, so avoid placeholders where possible. This means not publishing (or at least blocking) pages with zero reviews, no real estate listings, etc., so users (and bots) aren't subjected to a zillion instances of "Below you'll find a superb list of all the great rental opportunities in [insert cityname]..." with no actual listings.

This is the bane of the large dynamic site, especially one that has frequent updates. I know that as a user, I hate it when I click through to find one of these stub pages. Some cases might take a bit more work than others to fix, but a fix usually can be scripted. The extra work will not only help you show good things to Google, it will also make the web a better place altogether.
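A scripted fix along those lines might look roughly like this (a minimal sketch with made-up names; "listings" stands in for reviews, rentals, products and so on):

def render_city_page(city, listings):
    if not listings:
        # Don't publish the stub at all (or, if the URL must exist, serve it
        # with <meta name="robots" content="noindex,follow"> instead).
        return 404, "Not Found"
    items = "".join("<li>%s</li>" % item for item in listings)
    return 200, "<h1>Rentals in %s</h1><ul>%s</ul>" % (city, items)

print(render_city_page("Springfield", []))                 # stub: kept out
print(render_city_page("Springfield", ["Cozy 2BR flat"]))  # real content: published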

[edited by: tedster at 9:12 am (utc) on Dec. 19, 2006]

 

cabbagehead




msg:3194319
 8:07 am on Dec 20, 2006 (gmt 0)

Adam,

I'm very curious to have you spell it out for us. Let's use a concrete example instead of abstract hypotheticals:

Let's say that I have a page with 1,000 lines of HTML output. 700 of those lines are my template (header + footer + hidden DHTML layers for the navigation bar). From the user's perspective those 300 lines of unique content justify the existence of the page, but at a purely quantifiable level the page is 70% the same. And let's say there are thousands of pages that share these 700 lines of template.

Question: Does this constitute a duplicate page or not? Where is the threshold? Please talk to us about this as *Engineers*, not bloggers just looking for talking points.

Thanks!

whitenight




msg:3194383
 9:44 am on Dec 20, 2006 (gmt 0)

Yes, again, CAN WE GET A CONCRETE NUMBER OR PERCENTAGE?!

Google, rest assured, the spammers will figure it out before any white-hatters will, so how about helping the content providers WHO HELP YOUR BOTTOM-LINE. (You DO want actual content providers to rank higher than spam, right?)

walkman




msg:3194388
 9:50 am on Dec 20, 2006 (gmt 0)

This has the potential to be a major disaster, with many innocent sites getting caught for no reason.
Just off the top of my head: stores selling shoes, car parts, posters, or whatever. Sometimes a few sentences are enough, as the pictures speak for themselves.

Flickr-type sites, blogs with tags / the same post appearing on many pages, similar products on each page, etc. etc.

It will not be pretty -- and I think Goog did this on 12/15 already -- at least initially, till they hone it down.

Adam, short of being on page 20, how else do we know that a page is a "dupe"? At least you should move the page to supplemental so we get a chance to fix it before having everything nuked from the SERPs...

idolw




msg:3194402
 10:00 am on Dec 20, 2006 (gmt 0)

I guess what Adam wants to tell us is "stop overusing internal linking as we have trouble working it out".

As they still base their results on links, it is pretty easy to put an empty page high in the SERPs as long as you have enough links to it. (Tried it and had a good laugh with a big travel keyword.)

Adam, if you want us to put menus into JavaScript so that you do not follow them, just tell us ;-)

whitenight




msg:3194413
 10:11 am on Dec 20, 2006 (gmt 0)

I guess what Adam wants to tell us is "stop overusing internal linking as we have trouble working it out".

Haha, stop reading between the lines Idolw. ;)

The problem with Google's logic on this issue is that this repetition is good for the users (remember that mantra, G?).

I bet every site owner on here can show better sales, click-thrus, etc when repeating navigation elements in SEVERAL places on the page.

Humans are funny like that. So unpredictable in their inability to find what they are looking for, unless you place it 2, 3, 4 times on a page.

Guess how many times customers call our companies asking if we have "blue widgets" even though we have 2 "blue widgets" anchor texts on our "green widgets" and "red widgets" pages.

steveb




msg:3194439
 11:14 am on Dec 20, 2006 (gmt 0)

Unfortunately the real thing to be concerned with here is not Google detecting what they want to detect, but rather how they are currently failing in this mission.

One thing Adam mentions "so in the vast majority of cases, the worst thing that'll befall webmasters is to see the 'less desired' version of a page shown in our index" is an example yet again of Google thinking they are doing a good job when they are doing a bad job.

The most common way I have seen Google completely failing in this is with photo pages. Note to Google: a photo page of George Washington and a photo page of Abraham Lincoln are NOT duplicate content! Duh. Sadly, many photo sections of websites are being deindexed by Google because of yet another half-baked idea that doesn't consider user experience. Photo pages with unique titles, unique descriptions and unique alt text should be indexed, even with no other text but navigation/template stuff. Why? Because the page content being presented to users is entirely unique. These are not duplicates, even more obviously because the photos are unique files like photo1.jpg and photo37.jpg.

And this is basically irresponsible: "Don't fret too much about sites that scrape (misappropriate and republish) your content." Google has killed whole directories of sites because they have been wholesale stolen by garbage sites. This is simply a sad thing to appear in the official Google blog. Webmasters should fret a lot about stolen content. It kills pages, directories and whole domains. Google doesn't fret over it because they are in denial about how inept their search engine is, but when you lose income because Google prefers thieves to original content (because they love thousands of blog comment links to that content rather than a half dozen high PR authority links), you start being concerned about reality instead of fantasies.

Sadly for everybody, it seems Google still is in fantasy/pretend mode, rather than "we need to fix our pitiful search engine" mode.

fishfinger




msg:3194457
 11:49 am on Dec 20, 2006 (gmt 0)

These are not duplicates, even more obviously because the photos are unique files like photo1.jpg and photo37.jpg

Well, yeah, but it's been fairly common knowledge since search engines came out that they don't read images!

So perhaps a unique description, alt & title on the image and some text on the page underneath describing the image would help?

No, I guess you're right that a far better system is for Google to assume that the images are unique because they have different file names :)

mattg3




msg:3194462
 11:58 am on Dec 20, 2006 (gmt 0)

Isn't this why they do this wee image game, where people work for free to enhance Google? In one of the Google Talks, some guy showing his PhD work gave them the idea, it seems.
[video.google.co.uk...]

You can opt in to getting your pictures shown in this game in the Sitemaps interface.

I think it's always worth watching the Google Talks, sometimes you actually learn something. :)

[edited by: mattg3 at 12:25 pm (utc) on Dec. 20, 2006]

steveb




msg:3194464
 12:01 pm on Dec 20, 2006 (gmt 0)

"I guess you're right that a far better system is for Google to assume that the images are unique because they have different file names :)"

Or maybe you could actually read the post first, since I said pages are still dropped despite having unique titles, unique descriptions and unique alt text. Good example of mimicking Google's behavior though!

mattg3




msg:3194471
 12:19 pm on Dec 20, 2006 (gmt 0)

Or maybe you could actually read the post first, since I said pages are still dropped despite having unique titles, unique descriptions and unique alt text. Good example of mimicking Google's behavior though!

The video quality isn't that good, but: Content-Based Image and Video Retrieval
[video.google.co.uk...]

They might be using a certain dataset [dunno if a talk equals likely implementation] that, after a lot of iterations, defines the variables of a mathematical statistical model (the posterior).

This is a statistical model that has an error margin. Sometimes this error is huge. Also if the model is wrong you will be off track.

I wouldn't be surprised if they have several concurrent ways of trying to classify images and check which one is best.

Bayesian stats are "we have no clue what else to do" stats. Sometimes they work excellently, sometimes they tell you nothing.

djmick200




msg:3194477
 12:30 pm on Dec 20, 2006 (gmt 0)

cabbagehead says:

Let's say that I have a page with 1,000 lines of HTML output. 700 of those lines are my template (header + footer + hidden DHTML layers for the navigation bar). From the user's perspective those 300 lines of unique content justify the existence of the page, but at a purely quantifiable level the page is 70% the same. And let's say there are thousands of pages that share these 700 lines of template.

Question: Does this constitute a duplicate page or not? Where is the threshold? Please talk to us about this as *Engineers*, not bloggers just looking for talking points.

I'd be very interested in hearing a comment on this also. To end users the pages are unique; to a bot they have 70% shared code.
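For what it's worth, here is one rough way to see the difference, using nothing more than Python's standard difflib (my own toy illustration, not anything Google has described):

import difflib

# cabbagehead's numbers: 1,000 lines of output, 700 of them shared template.
template = ["<div>nav item %d</div>" % i for i in range(700)]
page_a = template + ["<p>Page A, unique line %d</p>" % i for i in range(300)]
page_b = template + ["<p>Page B, unique line %d</p>" % i for i in range(300)]

whole = difflib.SequenceMatcher(None, page_a, page_b, autojunk=False).ratio()
print("whole-page similarity: %.0f%%" % (whole * 100))   # 70% -- the template dominates

tmpl = set(template)
body_a = [line for line in page_a if line not in tmpl]
body_b = [line for line in page_b if line not in tmpl]
body = difflib.SequenceMatcher(None, body_a, body_b, autojunk=False).ratio()
print("body-only similarity: %.0f%%" % (body * 100))     # 0% -- not duplicates at all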


Use TLDs: To help us serve the most appropriate version of a document, use top level domains whenever possible to handle country-specific content. We're more likely to know that .de indicates Germany-focused content, for instance, than /de or de.example.com.

A further comment on this quote would also be helpful. E.g. for a .co.uk TLD with content that is NOT specifically aimed at the UK, would that site be more likely to rank lower worldwide? Would Google interpret that site as being UK-focused?

BeeDeeDubbleU




msg:3194484
 12:43 pm on Dec 20, 2006 (gmt 0)

You DO want actual content providers to rank higher than spam, right?

So you think there are no spammers in here? I think Adam knows different ;)

jetteroheller




msg:3194487
 12:49 pm on Dec 20, 2006 (gmt 0)

Most of my pages live on large 600x450 pixel pictures.

Up to a 70-character title, a 180-character description and the big picture,
plus some 100 characters of context description to tell in text what is in the photo.

I just looked at the code of a typical page with a picture:

2566 Byte head
1222 Byte main content
3115 Byte part of navigation with search box and AdSense
850 Byte description of content
12040 Byte navigation

The navigation is necessary, and with a total of 57 links it is also well below the 100-link limit.

So what does Google expect now?

To hide all the navigation in JavaScript, so that only people with JavaScript turned on could surf this page?

My site navigation will remain as it is.
Since June 27th I have been changing my site according to whatever Google could find bad.
But here a certain border line is crossed where I have to say NO.

whitenight




msg:3194494
 1:03 pm on Dec 20, 2006 (gmt 0)

So you think there are no spammers in here? I think Adam knows different ;)

This is exactly my point.

Google refuses to give out "precious" information that would help white-hat content providers/e-commerce improve their sites for fear of spammers exploiting that knowledge.

Who do they think they are kidding?!

The spammers are already testing and tweaking at a much faster rate than any Google employee or white-hat site is. If it wasn't so, by definition, there would be no spam.

So instead, the spammers (a smarter bunch than..oh nevermind) who are already on the front lines of "what really works" are always a step ahead of G engineers and at least 2 steps ahead of white-hat sites.

One would assume (this is a VERY BIG assumption) that the algo would give white-hat sites more "quality" points to prevent spam sites from ever ranking, if they had the proper knowledge.

But instead, Google plays this silly game of spraying insecticides to kill off the cockroaches in the garden. "And oh oops, we killed off the nice tomatoes that were growing and the cockroaches always come back after a few days, but hey at least we can say we're trying...."

[edited by: whitenight at 1:06 pm (utc) on Dec. 20, 2006]

tedster




msg:3194496
 1:05 pm on Dec 20, 2006 (gmt 0)

No one from Google has said to hide navigation with javascript. No one from Google has even said that having an identical menu on every page was a potential duplicate content problem. That's just the conjecture of a few people, but it makes no sense to me, either. I'd say you are thinking clearly on this, jetteroheller.

[edited by: tedster at 1:56 pm (utc) on Dec. 20, 2006]

sja65




msg:3194506
 1:20 pm on Dec 20, 2006 (gmt 0)

Tedster - Adam specifically said
Minimize boilerplate repetition: For instance, instead of including lengthy copyright text on the bottom of every page, include a very brief summary and then link to a page with more details.

I would read this as including things like menus and other navigation features.

mattg3




msg:3194513
 1:35 pm on Dec 20, 2006 (gmt 0)

No one from Google has said to hide navigation with javascript. No one from Google has even said that having an identical menu on every page was a potential duplicate content problem.

Don't people try to fill that information vacuum? Whole religions have been based on an information vacuum. I think people would appreciate it if the commitment to information sharing were extended to concrete data.

I think the problem is that the further you go down the communication lane, the more frustrating it possibly gets if the information is not exact or is inefficient.

On the other hand, having had the unfortunate experience of dealing with Yahoo staff that dumped my gf's email, they are worse.

Maybe the criticism mentioned here, certainly by me, kinda includes the hope and the acknowledgement that there might be someone reasonably benevolent and intelligent listening.

On the other hand, Google does criticise any webmaster's web pages, sometimes harshly and for no apparent reason. So a somewhat harsh response is often a reaction and not an action.

Communication will remain difficult with "partners" of differing empirical backgrounds and power over the subject in question.

photopassjapan




msg:3194514
 1:37 pm on Dec 20, 2006 (gmt 0)

That "photo sites" part of steveb's post made my heart skip a beat :)

Even though the rest of the content is but the navigation, album title, album navigation, and the footer...

So far unique titles, descriptions in the meta tag, on the page itself as text ( caption, description, call it what you will ), in the alt text of the photo, and in the alt text of the thumbnail pointing to the page have been enough to communicate the theme to G.

So far :P

...

Am I the only one reading this into Adam's lines, or did he say that (from now on) they'll NOT be issuing any penalties if the duplicate content is within the same domain?

They'll only exclude all but one of the pages that are identical.
Makes perfect sense.

This is the recognition ( of the obvious ) that a dupe page of one's own content WITHIN the same domain is NOT an indication of spamming ( there's no connection there ), and doesn't need a site-wide penalty.

Ride45




msg:3194516
 1:41 pm on Dec 20, 2006 (gmt 0)

sja65
You're reading into it 100% wrong and so are the other people who believe this.
-> Global navigation (headers, footers, etc.) makes for consistent usability. It is likely part of your page template, whether you have 10 or 100,000+ pages.
The "boilerplate" refers to large blocks of text, the elements that you decide to include as part of the template.

For example, webmasters will sometimes write a block of text to include at the bottom of their home page with lots of beefy keywords and on-page relevant language. Then they will get lazy, and instead of having/writing a unique block of text on interior pages, they will just re-use the same block, perhaps on thousands of pages or on all of them, making it part of the global template.

-> Adam says remove this and just have a link to a single page with all of this text.

-> Adam's comment on the JavaScript menu was an assumption that the JavaScript menu didn't work and that it should be fixed by the person who created it!

mattg3




msg:3194522
 1:44 pm on Dec 20, 2006 (gmt 0)

Am i the only one reading this into Adam's lines or did he say that (from now on) they'll NOT be issuing any penalties if the duplicate content is within the same domain?

Why mention it then at all?

It seems to me that when you have unique information, i.e. the short explanation of a medical term, and you need a footer, aka a disclaimer, to prevent someone from suing you, and that disclaimer is longer than the original text, these pages get nuked.

How dangerous the suggested "solution" of putting it on another page is should be apparent. Surely someone who fails to spot the link might do something stupid based on that medical/legal information and then probably sue you. It's then kind of up to the judge in whatever country you are in whether he thinks that medical/legal disclaimer is OK to be on an extra page, with their text-to-braille or speech engines.

Maybe another solution would be to make the disclaimer text an image, although you will then discriminate against and exclude blind people.

photopassjapan




msg:3194536
 1:57 pm on Dec 20, 2006 (gmt 0)

...?

DO they get nuked?
Did you see this happen?
If you did, are you sure it was dupe content?
dupe--->nuke... look up what "nuke" means in japanese ;)

I'm not sure if this is right, but the thresholds whitenight keeps on requesting out of fun ;) ( I kinda agree with everything in your posts... it's scary )... seem to be at a reasonable level.

On the other hand...

You don't suppose G will go through the net, and do string-searches on each and every page, for every block of text, and compare it to each and every page in the database... then start again with the next paragraph? If the title, meta are similar or same, if the filesize is the same, if the... ah whatever... :P

edit: don't forget the alt text on disclaimer.gif :D

mattg3




msg:3194551
 2:16 pm on Dec 20, 2006 (gmt 0)

edit: don't forget the alt text on disclaimer.gif :D

How do you know what Google uses as text in their duplicate content crusade?

<img src="ds.gif" border="0"> is < 40 words.
With <img src="ds.gif" border="0" alt="bla ...">, given that the tags are there, you would end up with something bigger than the original version.

mattg3




msg:3194581
 2:45 pm on Dec 20, 2006 (gmt 0)

You don't suppose G will go through the net, and do string-searches on each and every page, for every block of text, and compare it to each and every page in the database... then start again with the next paragraph? If the title, meta are similar or same, if the filesize is the same, if the... ah whatever... :P

We don't have 2000 Apache tutorials in the index any more, so they must do something like that.

A 100% identical text needs just one check, similar to what you do in a Google search. Then the text will be checked further (a guess, obviously).

Somewhat similar text might be checked in a different way: checksums and so on. A one-word query takes 0.13 seconds, a 6-word query 0.10 seconds in a Google search.

Very rough estimate:

Let's say 1,000,000,000 pages with a 0.1-second search on each = 100,000,000 seconds; split across 1,000 servers = 100,000 seconds = 27.78 hours. Feasible, and if they have more servers that time is reduced. Plus, higher PR gets more attention, and if you have a 9-word search string matched on 40 other pages, that page might get dumped immediately -- i.e. the Apache docs, or all the news items that were replicated on 100 pages in the past. There are ways to reduce what needs to be investigated.
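A very crude sketch of the "one check for 100% identical text" idea (my own illustration with made-up URLs and text, nothing from Google):

import hashlib

def fingerprint(page_text):
    """One checksum per page: identical (normalized) text gives an identical hash."""
    normalized = " ".join(page_text.lower().split())   # collapse case and whitespace
    return hashlib.md5(normalized.encode("utf-8")).hexdigest()

seen = {}
pages = {
    "/apache-tutorial-mirror-1": "How to configure   Apache 1.3 ...",
    "/apache-tutorial-mirror-2": "how to configure Apache 1.3 ...",
    "/original-review":          "My review of the blue widget ...",
}
for url, text in pages.items():
    fp = fingerprint(text)
    if fp in seen:
        print("%s is an exact duplicate of %s" % (url, seen[fp]))
    else:
        seen[fp] = url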

photopassjapan




msg:3194658
 3:36 pm on Dec 20, 2006 (gmt 0)

a 100% identical text needs just one check

Somewhat similar text might be checked in a different way,checksums and so on.

Even if it's technically feasible ( which I still don't think it is )... that's not the point. The point was WHAT to check ( and exclude as duplicate, i.e. not needed for the 1000000th time ), right?

And ways to determine whether it's dupe content, in other words content featured in this length/context/for this reason/use... which is found ELSEWHERE already, or whether it's like your navigation... repeated of course, but within your own domain, and because it's necessary... offset by unique titles, metas and of course ADDITIONAL content. I don't think the boilerplate comment from Adam was a warning, it was more like a request. If boilerplates take up the entire page, that's a different story.

...

The extreme is to check everything. For example a paragraph that has no more than 25 words... your usual meta description. If you check it against all other pages ( i.e. do a search on this string ) it will or will not show that other pages have this text included.

But what if you check only 24 words length? That's two more searches.
23 words...? That's another 3 searches.
22 words... is another 4.
21 words... is another 5 ...etc.

And most paragraphs have plenty more words than this. Most pages have more than one paragraph. Or let's drop the idea of blocks of text and check content with disregard for the page layout... it's far too easy to game, right? Assuming G goes for "similar" as well, they could be making an infinite number of searches, and by the time they finish, a brand new set of pages will appear with text they need to classify the same way. A 25-word string check takes about 0.3 seconds. And I don't think they can distribute this across their servers, for they're not synchronised. They'd need to do this on all servers at the same time, cross-checking the datasets. Also there are way more pages to check ( the last number I remember was around 15,000,000,000 ). Although having high-PR pages checked first would make sense if they were out to get YOU, it doesn't make sense when judging the daily trillions of pages on whether they're SPAM or not, comparing them to the rest of the pages. Meaning that the way you described it, it would be low-PR spam that is kept. Especially scraped spam.
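( For the record, a filter doesn't have to run a search for every possible substring; the standard textbook trick is fixed-size word shingles plus a set comparison. A toy sketch of that general technique, which is purely my assumption and certainly not Google's actual code: )

def shingles(text, w=5):
    """All w-word windows ("shingles") of the text, as a set."""
    words = text.lower().split()
    return set(" ".join(words[i:i + w]) for i in range(max(len(words) - w + 1, 1)))

def jaccard(a, b):
    return len(a & b) / float(len(a | b)) if (a | b) else 0.0

original  = "Below you'll find a superb list of all the great rental opportunities in Springfield"
scraped   = "Below you'll find a superb list of all the great rental opportunities in Shelbyville"
unrelated = "A photo page of George Washington with its own caption and description"

print(jaccard(shingles(original), shingles(scraped)))    # high -> near-duplicate
print(jaccard(shingles(original), shingles(unrelated)))  # near zero -> not a dupe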

The whole point of dupe filters was to get rid of spam.

...

And how do I know they're not checking alt text? I don't KNOW; it's a gut feeling perhaps.

But assuming people at G are sane, which they are, they're just trying to get rid of garbage from their spam-inflated index and SERPs before either users or their hardware calls it quits.

Furthermore, ALT text doesn't have much feedback on web search, while web search does classify image search to a great extent, in which a dupe filter is out of the question. But I was making a joke, and I don't think you should pull down the text disclaimer and add the image just yet...

Dupe content filters aren't there to fuel paranoia but to filter out spam. Besides, the fact that there are no penalties makes it easier to pinpoint, within a legit site, and a legit site only, what would trigger exclusion.

If I were at the 'plex, the first thing I'd check would be scraped content from wiki, dmoz, syndicated content from major ecommerce sites, RSS feeds in GENERAL, and so on... appearing on LOW PR pages.

And not a completely legit disclaimer for medicine. Not static html on age-old high PR non-directory, non-syndicated sites that have some important message that needs to be repeated so the simple minded users too can find their way.

Do you think these disclaimers are NOT included on major companies' pages? If they're not, don't you think if someone wants to sue such a company they'll pick THEM with the big bucks? :)

...

While G may be in the dark for some things, it's not ill intentions that drive the algo engineers or the spam team. If something is as unreasonable as to cause an effect of such disclaimers being marked as dupes, they'll need to correct it.

But again... did you see this happen?
And if so, are you sure it was dupe content that got pages excluded?

Mind you, I'm not an expert on anything.

So let here be a disclaimer ( one that is in fact featured with more or less the same wording below other posts of mine... risking getting WebmasterWorld pages excluded )...

I could be COMPLETELY wrong.

mattg3




msg:3194670
 3:53 pm on Dec 20, 2006 (gmt 0)

And not a completely legit disclaimer for medicine. Not static html on age-old high PR non-directory, non-syndicated sites that have some important message that needs to be repeated so the simple minded users too can find their way.

Well, it was them that mentioned large disclaimers. It seems that legit disclaimers will cause problems. So if you expand on that idea a bit, there are many possible disclaimers that would then be equally problematic.

The whole point of dupe filters was to get rid of spam.

Sure, but as with social security abuse, if you tighten the system for abusers, usually the legit ones get hit.

Then we have this issue: if you type in "definition turkey" and look at the first link, a certain site that takes the mickey out of all these duplicate content issues is publicly listed and makes millions. There you might come to some answers. ;)

walkman




msg:3194711
 4:27 pm on Dec 20, 2006 (gmt 0)

>> No one from Google has said to hide navigation with javascript. No one from Google has even said that having an identical menu on every page was a potential duplicate content problem. That's just the conjecture of a few people, but it makes no sense to me, either.

Sure, Tedster, but menus are "boilerplate repetition" with up to 100 words in many cases, so it's not that far-fetched. I think, and hope, that Goog keeps in mind that not all sites have 1000-word articles on every page.

tedster




msg:3194731
 4:44 pm on Dec 20, 2006 (gmt 0)

Every major search engine, including struggling little old MSN, is working with dividing pages into blocks and then separating the common template elements out for separate analysis. Google's been doing it for years. They can recognize a templated menu appearing across the site for exactly what it is.

This idea is just plain nuts and I want to stop the growth of this new "religion" before it gains any more members. How the heck would people navigate a site without common menus for each page?

May Inigo Montoya II strike me dead if I speak an untruth here.
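To illustrate the block idea with a toy of my own (not a claim about how any engine actually implements it): treat text blocks that repeat across most pages of a site as template and score only what is left.

from collections import Counter

# A tiny pretend site: each page is a list of text blocks (menu, body, footer).
site_pages = {
    "/widgets/blue":  ["Home | Products | Contact", "Blue widget specs and review",  "(c) Example Co"],
    "/widgets/red":   ["Home | Products | Contact", "Red widget specs and review",   "(c) Example Co"],
    "/widgets/green": ["Home | Products | Contact", "Green widget specs and review", "(c) Example Co"],
}

block_counts = Counter(b for blocks in site_pages.values() for b in blocks)
n_pages = len(site_pages)
# Blocks that appear on (nearly) every page are treated as template, not content.
template = set(b for b, n in block_counts.items() if n >= 0.8 * n_pages)

for url, blocks in site_pages.items():
    content = [b for b in blocks if b not in template]
    print(url, "->", content)   # only the per-page body is left for duplicate scoring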

whitenight




msg:3194739
 4:54 pm on Dec 20, 2006 (gmt 0)

Every major search engine, including struggling little old MSN, is working with dividing pages into blocks and then separating the common template elements out for separate analysis. Google's been doing it for years. They can recognize a templated menu appearing across the site for exactly what it is.

No offense, Tedster, but I'd prefer it if Adam answered this question, as it has been asked at least 4 times in this thread.

Because I can find examples either way that support both theories.

jetteroheller




msg:3194741
 4:58 pm on Dec 20, 2006 (gmt 0)

How the heck would people navigate a site without common menus for each page?

Maybe Google wants them to navigate only by the AdSense ads on the page?

pageoneresults




msg:3194763
 5:14 pm on Dec 20, 2006 (gmt 0)

No offense Tedster, but i'd prefer if Adam answered this question as has been asked at least 4 times in this thread.

I really don't think we are going to get a specific answer to this one.

But, think of it this way. Almost every single website out there is designed using a certain percentage of replication across pages. I agree with tedster that the search engines can easily detect what is navigation and what is content. It's the content portion that we should be discussing, not what appears on each page naturally as part of the design.

rohitj




msg:3194796
 5:32 pm on Dec 20, 2006 (gmt 0)

You also have to realize that many of you are assuming Google can't tell the difference between HTML, JavaScript, and meaningful words. If you have the same HTML/JavaScript on each page of a domain, then chances are that's the template of the site -- a very large percentage of sites use templates. They are not going to penalize for that and, if they did, they'd be penalizing a very large portion of their index. That blatantly defeats the very purpose of a penalty.

I'm willing to bet that they implement some learning algorithms that can figure out consistent menus/templates/structural aspects of a site and ignore such aspects when determining the SERPs. It's not a hard thing to do, and they have the computing power necessary to crawl each site in that kind of depth.
