
Google SEO News and Discussion Forum

This 189-message thread spans 7 pages; this is page 4.
Google "Cache" Not Copyright Violation

 3:47 pm on Jan 26, 2006 (gmt 0)

I believe this is the most important legal ruling - maybe in the history of the internet:


A Nevada federal court has ruled that the cached versions of Web pages that Google stores and offers as a part of many search results are not copyright infringement.

Clearly, the court did not understand what real caching is versus what Google calls caching. I do not think Google meets the criteria for caching:

The material described in paragraph (1) is transmitted to the subsequent users described in paragraph (1)(C) without modification to its content from the manner in which the material was transmitted from the person described in paragraph (1)(A) {FN104: 17 U.S.C. 512(b)(2)(A)}

What this does, I think, is effectively neuter all copyright law on the internet today. It is the wild-wild west again.

With all that Google has done that is good - I don't know how we could be so far apart on this one issue.

Blake Field (who brought the suit):



 12:14 am on Jan 27, 2006 (gmt 0)

I see this time and time again (and I only read up to page 3 so far)

>> By putting a simple 'no cache' tag....

How do you put that tag on an image, or on a PDF or Word document; or does this ruling apply to HTML content only?
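One answer that emerged later: the engines added an equivalent HTTP response header, X-Robots-Tag, which a server can attach to any file type, not just HTML. A minimal Apache sketch, assuming mod_headers is enabled and that the engines honor the header for these file types:

```apache
# Send the noarchive directive for file types that can't carry a meta tag
<FilesMatch "\.(pdf|docx?|jpe?g|png|gif)$">
    Header set X-Robots-Tag "noarchive"
</FilesMatch>
```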


 12:19 am on Jan 27, 2006 (gmt 0)

Ahem, just one thought on that:

Google is not taking my content and putting its AdSense above it; others do.

Google is not linking to the cache from static pages with high PR; others do.

I have no problem with Google caching everything I put online. With the others I do!

In many countries around the world, the indemnification you would pay for an unlawful action is the damage you have caused.

That means if you drop your coffee in a fast food restaurant and burn yourself, because the restaurant did not write on the cup that the stuff is really hot, you are entitled to have the cleaning paid for. Maybe the visit to the doctor too, but that would already be a tough fight in court.

If someone caches my pages, does not add their income streams all around them, and does not link to the cached pages from high-PR pages, I consider that OK, because that entity does not earn cash with my content.

The real problem is people stealing content and abusing it for their income. The "caching" legal ruling was supposed to be a good weapon for the hard-working "content creating" webmaster. That weapon just went nil.

But is caching really "scraping"? I doubt it!



 12:47 am on Jan 27, 2006 (gmt 0)

An additional thought that separates router caching, scraper caching and search engine caching.

- Routers cache data in much the same way that a postman delivers a publication inside an envelope. The form remains unchanged from posting to delivery. The analogy breaks down where an ISP keeps a local copy to deliver to all of their users (a bit like the postman carrying spare "shrink wrapped" copies for anyone who asks for one, I suppose). The URL for accessing the content always remains the same as the URL where the content was originally published.

- Scraper caching presents a public copy that can be looked at by humans as well as be reindexed by search engines. It creates an alternative URL for the content, with that alternative URL being indexed by other systems; that alternative URL may become "more well known" than the "real" URL for that content (aka hijacking).

- Search engine caches (Google, Yahoo, etc) present a local copy for their users but they do not allow other systems to [automatically] reindex that content under the search-engine-based URL - and I believe that to be an important point. However, the fact that the cache does have its own URL that could be referenced by third parties is not lost on me.


I guess this ruling makes it even harder to get Google to stop showing a cache from January 2004 for content that was removed from a website back in February 2004 because that data was just plain wrong. Google seem to want to keep a copy of that very old, incorrect data forever, even though the page in question has been updated 15 times since then and the data on the current page is now 100% correct. Google does spider the site every week, and has updated their cache several times each month for several years now.

This bit is important: depending on the search term that you use, you either get to see a modern snippet and cache, or you get to see a supplemental result with an ancient snippet, and that ancient result has a copy of that ancient (and incorrect) data in the cache to go with it.

The fact that archive.org also caches complete websites is not lost on me, but for archive.org they keep every version of the site and they clearly flag the spidering date next to each one, and make it easy to get to all the other versions; it is blindingly obvious that there are earlier and/or later versions than the one that you are looking at. Google does not do that. You get one dated cache for your particular search result, but may get another cache (from a different date) of that page for some other search result. Having no control over that is now becoming a major problem, where some of those versions are for page content that has been dropped because it is incorrect.

I wonder how Google gets on by having a cache copy that mentions the real name of hacker 'Tron' for example; and the fact that Google will keep that cached copy even after WikiPedia might erase the details from that page, or might erase the page completely from their site.

[edited by: g1smd at 1:02 am (utc) on Jan. 27, 2006]


 12:52 am on Jan 27, 2006 (gmt 0)

That means if you drop your coffee in a fast food restaurant and burn yourself, because the restaurant did not write on the cup that the stuff is really hot, you are entitled to have the cleaning paid for. Maybe the visit to the doctor too, but that would already be a tough fight in court.

If the restaurant knows that what they are doing poses a serious risk, then there can be liability, as there was with McDonald's.


If nothing else, and with all due respect to DaveATIFG, at least one SEO myth [webmasterworld.com] should bite the dust with this decision.


 1:05 am on Jan 27, 2006 (gmt 0)

From that thread [webmasterworld.com]:

>> >> and I don't want an old version of my site wandering around in google. << <<

>> Don't worry, neither do they! <<

I refer you to my post immediately above yours, and to all of the very many discussions of ancient supplemental results and so on in the last year or so.


 3:41 am on Jan 27, 2006 (gmt 0)

What this does is effectively neuter all copyright law on the internet today. It is the wild-wild west again. It legalizes content theft.

I agree Brett. This is a sad day for the internet. Hopefully, sometime very, very soon, a judge with an understanding of how the internet "really" works will see the word "cache" and truly understand all its implications.

Until then, I suppose it's every man for himself and look out world! Steal whatever you want from anyone and let the courts sort out the resulting mess. No rules ... no holds barred ... just go for it!

The judge (in my humble opinion) is an educated man who simply doesn't "get" the Internet. Let's hope his first born grandchild will fill him in someday soon!

Until then, we will all have to be ever vigilant of our proprietary rights and ensure that we look after them at all costs.


 6:02 am on Jan 27, 2006 (gmt 0)

In this case "looking after one's proprietary rights" will be no problem: it is as simple as opting out of caching. The judge knew that the plaintiff knew this -- so, in polite legal language, she told the plaintiff, "you bozo, why are you taking up valuable oxygen in my courtroom asking for relief that, even BEFORE all your inane babbling, you could have gotten for yourself for free?"

That is a basic legal principle. "What could the plaintiff have reasonably done to mitigate the alleged damages? Would that action have been simpler and cheaper than hiring lawyers?"

But that wasn't all. This was not a close call. In order to defend its activity, Google had to prevail on only ONE of six different issues -- the judge came down on Google's side on ALL of them (as well as on a couple of others which looked to me like blind alleys). The decision could be reversed on five counts--not likely that it would be--and Google would still win.

[edited by: hutcheson at 6:08 am (utc) on Jan. 27, 2006]


 6:06 am on Jan 27, 2006 (gmt 0)

I don't know why everyone is getting hot and bothered with all this, since there has been a site on the net for the past half a decade which is continuously caching entire sites and KEEPING those caches. We are not talking about a Google cache either; we are talking about almost complete websites.

Can't give the URL out because of the TOS, but run a search for web archive in Google and you'll hit it; it's better known as the "waybackmachine". This site has been caching millions of sites tagged from search engines for years. My own site has almost complete copies of its various looks under our old URLs, dating back as far as 2002, and I'm talking about complete copies, not just the looks. The whole site is there. Sometimes the images are even there, cached on their server, not coming from ours.

Of course the waybackmachine can be blocked by the robots.txt file, but they still have over 40 billion pages according to their site. If anything was in breach of copyright, that site would be. Google isn't caching to the extent of ripping an entire site, but the waybackmachine is.
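For reference, the robots.txt opt-out mentioned above: the Internet Archive's crawler identifies itself as ia_archiver, so a site-wide block looks like this (assuming the archive continues to honor robots.txt):

```txt
User-agent: ia_archiver
Disallow: /
```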


 6:09 am on Jan 27, 2006 (gmt 0)

I think it's ok because although well hidden, there's a tag you can use so they cannot/won't cache you...


[edited by: Woz at 6:22 am (utc) on Jan. 27, 2006]
[edit reason] No SIGs please, see TOS#13 [/edit]


 12:02 pm on Jan 27, 2006 (gmt 0)

Katie, I mentioned archive.org in my post above.

I think that a very important difference between what Google and the waybackmachine do, as compared to scraper sites, is that Google and archive.org do NOT allow their cached copy to be re-indexed by other systems.

Imagine a world where Google indexed the Yahoo cache of the MSN cache of the Google cache of your website; and every other combination of caching the cache had also been done by all of those. Nightmare!


 12:20 pm on Jan 27, 2006 (gmt 0)

I can only speak for myself of course.

I WANT Google, MSN and Yahoo to cache my pages.
I do NOT want other websites to scrape my content or hotlink my images.

I see nothing in this legal decision that permits or legalizes scraping, infringement etc.


 1:22 pm on Jan 27, 2006 (gmt 0)

Worth a read for site owners:


Question: What are the criteria a service provider must satisfy in order to qualify for safe harbor protection under Subsection 512(a) of the Digital Millennium Copyright Act?

Answer: Subsection 512(a) provides a safe harbor for service providers in regard to communications that do not reside on the service provider's system or network, but merely pass "through" the system or network. Any copies of the communications on the system must be temporary, i.e., "intermediate or transient."

A service provider must satisfy the following critical elements in order to qualify for the "safe harbor" or protection from liability provided by subsection 512(a) (note that subsection 512(k)(1)(A) defines "service provider" as used in subsection 512(a)):

(a) The service provider is an entity offering the transmission, routing, or providing of connections for digital online communications [512(k)(1)(A)];
(b) The service provider did not initiate the transmission of the material [512(a)(1)];
(c) The transmission, routing, provision of connections, or storage is carried out by an automatic technical process [512(a)(2)];
(d) The Internet user, not the service provider, must select the origination and destination points of the communication [512(a)(3) and 512(k)(1)(A)];
(e) The service provider must not modify the communication selected by the Internet user [512(a)(5)];
(f) The communication is transmitted "through" the system or network of the service provider [512(a)(2)];
(g) No copy of the communication is maintained on the system or network in a manner ordinarily accessible to anyone other than anticipated recipients [512(a)(4)]; and
(h) No copy is maintained on the system or network in a manner ordinarily accessible to anticipated recipients for a longer period than is reasonably necessary for the transmission, routing, and provision of connections [512(a)(4)].


 1:27 pm on Jan 27, 2006 (gmt 0)

That is a good definition of what an ISP does, and is clearly not what the Google (or any other SE) cache does.


 1:53 pm on Jan 27, 2006 (gmt 0)

> how do I register as "isp"


Online Service Providers
Service Provider Designation of Agent for Notification of Claims of Infringement

The Digital Millennium Copyright Act, signed into law on October 28, 1998, amended the copyright law to provide limitations for service provider liability relating to material online. New subsection 512(c) of the copyright law provides limitations on service provider liability with respect to information residing, at direction of a user, on a system or network that the service provider controls or operates, if the service provider has designated an agent for notification of claimed infringement by providing contact information to the Copyright Office and through the service provider's publicly accessible website.


 2:37 pm on Jan 27, 2006 (gmt 0)

I'm probably being quite dumb here (nothing unusual), but don't we all use cache in different ways every day?

Taking the subject to its most basic end ... if all caching was prohibited, we would have to do away with our most basic tool for instant data recovery ... our memories.

It just crosses my mind that I doubt there is a time limit applied to a cache. So maybe even reading a web page would constitute holding a cache of someone else's work.

Oh I don't know ..... Ever decreasing circles with this one ;-)


 2:41 pm on Jan 27, 2006 (gmt 0)

If I write the 3-page document someone mentioned earlier, and do not put a copyright notice at the bottom, and leave copies in a public place, then it is perfectly legal for anyone and everyone to make their own copies. If I put the copyright notice on it, then from a legal standpoint no one can copy it.

This has been the rule in dealing with "physical" media (paper, recording discs, etc.) since the beginning of copyright laws. What has created the gray areas in recent years is the ability to have "non-physical" media, such as online MP3 files, video files, and in this case web pages. In the entertainment world this has led to various DRM schemes in order to protect the copyright holder, in recognition that the traditional "physical media" copyright notices just do not map onto the electronic technologies. Putting the traditional "(c) 2006 my name" is not enough to protect against electronic copying of a document - but it is good to do because it protects against physical copying of any physical copies made of that page. So why is it so bad to expect someone to use an available facility such as the NO CACHE tag when they are knowingly placing their original work on a medium (the internet) where it is going to be electronically available, and also when they go out of their way to put the traditional copyright notice on their pages? It's not like anyone is (or should be) walking around wondering "how the heck did my web page get on Google?" after they publish it to their website.

It has always been the responsibility of the content creator to display AND protect his copyright rights. If I write something, copyright it, and publish it, and then continually ignore infringements that I become aware of, then I will lose my standing as the copyright holder.


 2:49 pm on Jan 27, 2006 (gmt 0)

With ref to kinhunter's post, as regards current copyright law:
your para 1 is incorrect
your para 2 is incorrect
your para 3 is incorrect

Thanks Brett for an interesting and important subject. Now if only you could find a way to restrict posting ability to those who actually know something of what they are talking about, it would be even better, and less prone to sidetracking to "teach them".


 3:27 pm on Jan 27, 2006 (gmt 0)

> I'm probably being quite dumb here (nothing unusual),
> but don't we all use cache in different ways every day?

Yes, WE do. What the search engines do is not caching. They are republishing the original work with their own advertisement at the top and in the address bar. Search engines do not meet any definition of caching.

Calling a pear an orange does not make it so.


 3:39 pm on Jan 27, 2006 (gmt 0)

Surely somebody, of greater importance than us, thinks that the pear is simply a pear and an orange is an orange.

If the law doesn't pertain to the specific storage process used by search engines, then won't they be prosecuted using a different law if they are still infringing someone's copyrighted work?


 4:00 pm on Jan 27, 2006 (gmt 0)

Someone else will sue in a different Federal court using a more intelligent argument. It isn't over.


 4:05 pm on Jan 27, 2006 (gmt 0)

It is important first to note that this is a federal district court ruling. Might another court consider it when evaluating a similar issue? Possibly. Is it binding precedent anywhere outside of that particular federal district? No.

It is also a very limited ruling. If a different case had been presented, there would be arguments against caching which seem to me to be much stronger. The opinion indicates that Blake Field chose not to present those theories (see page 9, footnote 8), instead proceeding on a claim of direct copyright violation - while stipulating that Google's creation and maintenance of its cached copy was not an infringing use. (See the last paragraph of page 9.) Blake claimed that "Google directly infringed his copyrights when a Google user clicked on a 'Cached' link to the Web pages containing [his] copyrighted works and downloaded a copy of those pages from Google's computers."

Within that context the judge effectively treated Google as if it is a photocopier or a VCR - it can be used to infringe copyright, but that capacity alone does not render the device guilty of copyright violation. A manufacturer of a photocopier is not liable when somebody inserts a page of copyrighted material and presses the copy button.

The theory certainly could be presented that Google is guilty of indirect copyright violation, in the same manner as certain P2P services have been found to have done, by abetting the infringing use - that is, it could be argued that Google has done the equivalent of inserting the copyrighted material into the photocopier, thereby abetting any person who wishes to violate the copyright. But "Field did not contend that Google was liable for indirect infringement (contributory or vicarious liability)."

The judge's further analysis of the defenses is quite fact-dependent. The judge found an implied license on the basis that Field knew Google would cache his site, knew how to prevent it from happening, and made a conscious decision to permit the copying. This analysis would not, for example, apply to a scraper not specifically known to the webmaster, where the webmaster does not have actual knowledge of how the scraper will use the copyrighted works, or where the scraper does not respect industry-standard meta tags. The estoppel argument would apply only in relation to defendants the publisher intended to be misled by his conduct.

A scraper would have a hard time under the court's "fair use" findings - demonstrating, for example, that it serves "different and socially important purposes in offering access to copyrighted works" and "does not merely supersede the objectives of original creations", that it doesn't serve advertising on a cached page, that it uses no more of copyrighted works than necessary, that there is no demonstrable market value in the works it has copied, that site owners would not demand payment for scrapers' use of their works, or that they acted in good faith.

The safe harbor analysis is limited, because the trial court found that Field did not properly present his motion in relation to three of the four safe harbor provisions.


 4:07 pm on Jan 27, 2006 (gmt 0)

Kinhunter, the situation you describe in paragraph 1 is a misrepresentation of pre-1970's U.S. law. Today, material doesn't have to be labelled to be protected by copyright, even in the U.S. (This is a retrograde step in the direction of freedom of expression -- under today's law only very rich people with lawyers can KNOW whether they can legally exercise those rights, whereas ordinary folk are pretty well excluded. But that is the current international copyright treaty recognized by most civilized countries, as well as a number of others.)

And "leaving something in a public place" is not at all the same thing as "publishing". Arguably, placing something on a publicly available website IS tantamount to publishing -- and in this case the judge appeared to consider that the plaintiff's copyrights applied just as if he had published his website "any other way."

DRM has nothing, nothing at all to do with copyright issues. You can see that fact most readily by considering things like DVDs. Anyone can make copies of those without being able to USE either the original or the copy! DRM only restricts legal users who wish to use legal copies in their own way -- such as, for instance, play DVDs on their computer. And the infamous DeCSS did not help people copy DVDs -- it merely made it possible for people to play DVDs on Linux computers.

Again, DRM does not ever protect copyrights, and did not grow out of any kind of copyright notice or copyright protection scheme. DRM is nothing but a way of applying additional unconscionable restrictions that are legally unsupported and unsupportable, on people who have a legal right to use the legal copy they have in their legal possession.

As for what actions you have to take to protect your copyrights, I suspect you're confusing copyright law with trademark law (which DOES have a clause much like what you describe). In fact, the DMCA does not specify any fixed deadline for a takedown notice after you become aware of infringement -- the deadlines to reply or remove apply to the alleged infringer! And, as a practical issue, you may, for instance, allow Google or the Wayback Machine to cache your website for many years, then suddenly change your mind and ask for your site to be removed -- and they will comply.


 4:21 pm on Jan 27, 2006 (gmt 0)

The concept of "Fair use" has always been subject to interpretation. In fact, the law was obviously written as vaguely as it was to allow for interpretation.

In the U.S., the law lists "four factors to be considered in determining whether or not a particular use is fair": the purpose or character of the use (including whether such use is of a commercial nature or is for nonprofit educational purposes), the nature of the work, the "amount and substantiality" of the portion used in proportion to the work as a whole, and the effect of the use upon the potential market for or value of the copyrighted work.

That's why it doesn't make a lot of sense to post statements such as:

What this does is effectively neuter all copyright law on the internet today. It is the wild-wild west again. It legalizes content theft.

Every "fair use" situation is different, and even if the U.S. Supreme Court were to rule tomorrow that Google's caching scheme represents "fair use," that wouldn't mean anyone could legally copy and republish Web pages at will.


 4:41 pm on Jan 27, 2006 (gmt 0)

Does anyone know where Google stands if they argue that they make a considered, applied decision as to the position of a particular website within their SERPs, based on historical data (i.e. yesterday's webpage)?

If Google are to defend this decision to rate a particular site in a particular way, couldn't they then have a case to retain a copy of the historical document and present it when quizzed about their SERP ruling? Matt Cutts, for example, recently quoted the particular spam techniques that led to penalties being applied to a complaining website owner's site.


 5:00 pm on Jan 27, 2006 (gmt 0)

Some people are confused.

1) Don't call it a cache, call it an archive. A cache is a temporary data store intended to speed up processes that frequently need to retrieve (and sometimes save) the same data.
2) It is not the storage of copied data that is being disputed, it is the republishing of that data. In order for a search engine to operate properly, it needs to store copies of the pages it indexes.
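Point 1 can be made concrete with a short sketch (Python, names hypothetical): a genuine cache stores a copy only temporarily, re-fetching from the origin once the copy expires, whereas an archive keeps every copy indefinitely.

```python
import time

class TTLCache:
    """A true cache in the technical sense: entries are temporary and exist
    only to speed up repeated retrieval, unlike a permanent archive."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, expiry_time)

    def get(self, key, fetch):
        """Return the cached value for key, re-fetching once it expires."""
        now = time.monotonic()
        entry = self._store.get(key)
        if entry is not None and entry[1] > now:
            return entry[0]          # fresh copy: serve from the cache
        value = fetch(key)           # stale or missing: go back to the origin
        self._store[key] = (value, now + self.ttl)
        return value

# Hypothetical usage: fetch would retrieve the live document from its origin.
cache = TTLCache(ttl_seconds=60)
page = cache.get("http://example.com/", fetch=lambda url: "<html>...</html>")
```

The point of the sketch is the expiry: a store that never discards or refreshes its copies is an archive, whatever it is called.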



 5:29 pm on Jan 27, 2006 (gmt 0)

Brett, it's really not fair to call the section above the cached page an "advertisement". It actually contains helpful information necessary for people to understand and interpret the page they are viewing.

It would be considerably more confusing and misleading if Google displayed nothing about the page.


 5:51 pm on Jan 27, 2006 (gmt 0)

Don't call it a cache, call it an archive.

Google calls it a cache. Since this thread is about the "Google 'Cache'," it's only reasonable to use Google's term.

What the search engine do, is not caching. They are republishing the original work with their own advertisement at the top and in the address bar.

I agree with jomaxx: Describing Google's top frame as an "advertisement" is a stretch worthy of Spiderman. Now, if you were talking about About.com's ad frame, I might agree with you.


 6:04 pm on Jan 27, 2006 (gmt 0)

1) The rules for an ISP (pear, orange, whatever) are not relevant here. This is not an ISP case, and no ISP is involved.

The ISP is protected from abuses perpetrated by their users.

The search engine entity is a user. Even arguing that Google is an ISP (pear, orange, whatever) doesn't change their position, responsibility or liability in this case.

They were charged and evaluated as a user, and they won this phase as a user.

2) Scrapers are users, too.

And while adding a branding element certainly meets the criteria for an advertisement, it is not in any way a form of anti-competitive behaviour when used as it is by Google. It would be anti-competitive if content were 'cached' by a scraper and used to trigger affiliate ads.

Scrapers and search engines are different types of user entities.

3) Objections to the concept of 'caching' (or 'storing for use in a search engine result algorithm') are objections to the very technology that makes this unique form of research tool viable. Without any caching, every search would involve a massive crawl and every page request would place an unnecessary drain on any server's resources. Everything would take a lot longer to accomplish, and 'joe schmoe' would not be online, driving this engine.

If caching is outlawed ... only outlaws will have caches. And that does not bode well for the future of the web. Narrow decisions like the one being considered by this thread are GOOD for the Internet. They help us define what is acceptable and what is not.

Google = OK
Scrapers = Still Bad


 7:13 pm on Jan 27, 2006 (gmt 0)

Google calls it a cache. Since this thread is about the "Google 'Cache'," it's only reasonable to use Google's term

Google's term for this feature is wrong. It may well have been chosen some years ago in the hope of confusing people and making it sound more legitimate - boy, were they right to choose that term or what?

The reason why I have made this point is that some people seem to think that the Google archive is no different from the cache on their ISP's server or even on their own computer. Clearly, the reason they believe this is the name Google have chosen for the feature.

It's also worth pointing out that the robots meta tag to disable this feature is noarchive.
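For anyone looking for the exact syntax, the directive goes in the page's head; both forms below are documented by the engines:

```html
<!-- Disables the "Cached" link in all compliant search engines -->
<meta name="robots" content="noarchive">

<!-- Or target Google's crawler specifically -->
<meta name="googlebot" content="noarchive">
```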



 7:15 pm on Jan 27, 2006 (gmt 0)


"Describing Google's top frame as an "advertisement" is a stretch worthy of Spiderman."

I so disagree with you and totally agree with BRETT!

It is an advertisement. Even if the logo does not sell anything specific, it is an advertisement.

A coke can displayed in a popular TV show is an advertisement.

I could pay to have my logo across many websites. It may not sell anything but still it is an advertisement.

I produce articles with by-lines that have my company name. Those are advertisements.

People linking to my site are advertisements.

Just saying my name is an advertisement.

Anything you do to describe, praise, or give public notice to is in fact an advertisement. It does not have to sell anything. It does not have to be paid for.


 7:29 pm on Jan 27, 2006 (gmt 0)

"It's also worth pointing out that the robots meta tag to disable this feature is noarchive."

Yeah, it is also worth pointing out that the feature should be disabled AUTOMATICALLY due to a bigger code called COPYRIGHT. A copyright automatically implies that the creator/owner has the sole right to the distribution/presentation of their material REGARDLESS of whether distribution information is present.

Somewhere people/entities have created this BACKWARDS thinking. Something like this:

"IF it is in public view that means I can do what I want on MY terms unless the owner opts out using my special code/request."

Hey, stupid! I, the creator of my material, MAKE THE RULES on how I want my stuff distributed. If no rules are presented, then the rules of COPYRIGHT are implied. Why is this so hard to understand?

Maybe they get away with it because they have better lawyers. Maybe they get away with it because the majority of website owners just don't care, or don't care to fight it.

Maybe these people/entities score points in the public eye (or with judges) for having "good" intentions. Maybe they score points because they don't seek direct monetary gain. You know what?! It does not matter what their intentions are or whether they seek monetary gain. It isn't their stuff unless deemed otherwise by fair use or permission.

Also one other note:

In the cached versions on Google, Yahoo, and MSN, certain advertisements do not show up or are mistargeted (AdSense). People can read my material without my having the opportunity for financial gain through those ad placements.

[edited by: arubicus at 7:42 pm (utc) on Jan. 27, 2006]
