Phrase Based Multiple Indexing and Keyword Co-Occurrence
Is this all themes with a new suit of clothes on?
Marcia
msg:3336437
11:39 pm on May 10, 2007 (gmt 0)

Using a timeline as a starting point: in November 2003 there was a quiet introduction of Google's use of stemming, whereas before they had stated that they didn't use it. That was also the month of the Florida Debacle, the update that shook up the SERPs, and talk about LSI (Latent Semantic Indexing) started. In short, according to what I've read, LSI isn't feasible for a large-scale search engine: first, it's a patented technology, and second, it's very resource-intensive.

LSI also uses single words - terms. However, 8 months later there was a series of patent applications filed by Google that dealt with Phrase Based Indexing. Phrases, not words. Not all those apps have been published yet, but 6 have been. Six, not five.

An information retrieval system uses phrases to index, retrieve, organize and describe documents. Phrases are identified that predict the presence of other phrases in documents. Documents are the indexed according to their included phrases. Related phrases and phrase extensions are also identified. Phrases in a query are identified and used to retrieve and rank documents. Phrases are also used to cluster documents in the search results, create document descriptions, and eliminate duplicate documents from the search results, and from the index.

In logical sequence:

Phrases are identified:

Phrase identification in an information retrieval system [appft1.uspto.gov]
Application filed: July, 2004
Published: January, 2006

Documents are indexed according to their included phrases:

Phrase Based Indexing in an Information Retrieval System [appft1.uspto.gov]
Application filed: July, 2004
Published: January, 2006

Users search to find sites relevant to what they're looking for:

Phrase-based searching in an information retrieval system [appft1.uspto.gov]
Application filed: July, 2004
Published: February, 2006

The system returns results based on phrases, including the functions that generate the document snippets:

Phrase-based generation of document descriptions [appft1.uspto.gov]
Application filed: July, 2004
Published: January, 2006

Use of a partitioned multiple index system to conserve resources and space:

Multiple index based information retrieval system [appft1.uspto.gov]
Application filed: January, 2005
Published: May, 2006

Phrases are used to detect spam documents:

Detecting spam documents in a phrase based information retrieval system [appft1.uspto.gov]
Application filed: June, 2006
Published: December, 2006

What's interesting about the publication date of that last one is that it attracted widespread attention, and it was only weeks later that an "unofficial" request was put out about reporting paid links. Matt is not only adorable, he's very smart and has an impeccable sense of timing. ;)

There's been plenty of discussion on that aspect of those patent apps, and I'm sure we'd all be delighted if someone wants to start another thread on the topic, including spam, recips, money keywords and Adwords conspiracy theories. But while there have been write-ups about the patents, and recaps simplifying what's in the documents, I haven't seen in-depth discussion of some of the IR principles that the system embodies. So I think we can all get a better grasp if we discuss those and try to get closer insight into the system.

Keyword Co-Occurrence
For starters, there are repeated references throughout all those documents to the term co-occurrence. In fact, in just a few short paragraphs in one of them, the word is used ten times. That seems to be the underlying principle that makes the whole system tick.

What it basically means, at the simplest level, is words or phrases that appear together. The patents go into detail about what's looked for, and statistics on co-occurrence patterns are used to relate clusters of terms/phrases into coherent "themes" and make predictions based on those statistics.
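To make that concrete, here's a minimal sketch - toy documents and a hand-picked phrase list, nothing from the actual patents - of counting document-level phrase co-occurrence:

```python
# Minimal sketch: first-order phrase co-occurrence over a toy corpus.
# The phrase list and documents are invented for illustration.
from collections import Counter
from itertools import combinations

documents = [
    "violin bow rosin for sale at our string instrument shop",
    "how to tie a hair bow with ribbon",
    "archery targets plus a bow and arrow for beginners",
]
phrases = ["violin bow", "hair bow", "bow and arrow", "rosin", "ribbon", "archery"]

cooccur = Counter()
for doc in documents:
    present = sorted(p for p in phrases if p in doc)
    for a, b in combinations(present, 2):
        cooccur[(a, b)] += 1        # the pair appeared in the same document

for pair, count in cooccur.most_common():
    print(pair, count)
```

Scale that up to billions of pages, keep statistics on which pairs beat chance, and you've got the raw material for the predictions the apps talk about.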

Word Sense Disambiguation
There's always been a problem with contextual relevancy: grasping the intended meaning of a search, or of a document, when the words involved have several different meanings. That's polysemy. Polysemous words are spelled the same but can have more than one meaning. Example: is bow referring to a hair bow, a bow and arrow, or a violin bow?

Really, the only way to tell with ambiguous words is to look at the other words (or phrases) that co-occur with - appear alongside - the word; or, in the case of a phrase-based system, phrases that co-occur often enough across the whole corpus of documents to discern the meaning or "theme" of a given page when it uses the word.
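As a toy illustration - these sense profiles are hand-made stand-ins for what a real system would derive from corpus statistics - the discrimination can be as simple as overlap counting:

```python
# Sketch: pick the sense of "bow" whose typical companion words
# overlap most with the words on the page. Profiles are invented.
sense_profiles = {
    "violin bow":    {"violin", "strings", "rosin", "orchestra"},
    "hair bow":      {"hair", "ribbon", "dress", "barrette"},
    "bow and arrow": {"arrow", "archery", "target", "quiver"},
}

page_words = set("lessons on rosin and strings for your violin bow".split())

best = max(sense_profiles, key=lambda sense: len(sense_profiles[sense] & page_words))
print(best)   # -> violin bow
```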

There's a difference between word sense disambiguation and word sense discrimination, and it's explained very well here (clear browser cache, it's a PPT presentation):

Powerpoint Demo on Word Sense Disambiguation [d.umn.edu]

The main difference is that one starts out with a pre-defined lexicon (like LSI). Also, I've got copies of the original Applied Semantics patent and white papers (there were 2), and it seems that was also lexicon-based. With phrase-based indexing, it seems that it starts out with a blank check and creates a taxonomy on the fly, discovering co-occurrences in order to construct the co-occurrence matrix.

So given that the terminology is used in profusion throughout, it's my feeling that we can benefit by discussing it among ourselves, as well as looking at how the "multiple index" system is set up. Those aspects might well clear up some of the mysteries for us.

Anyone game?

annej
msg:3336578
3:53 am on May 11, 2007 (gmt 0)

starts out with a blank check and creates a taxonomy on the fly

It's probably just my limited way of thinking, but in the phrase-based patent on spam, for example, I had the impression that the word/phrase patterns typically found in spam pages were already gathered. So even though the final application of the filter may happen on the fly, the data on the words and phrases to watch for already exists.

Also, it occurred to me that for the spam aspect Google would not need data on the whole language to draw from, as it only really needs to look at the phrases that would typify a spam document.

So in a way it is theming but it would be selective theming.

I'm just trying to wrap my brain around this concept so don't hesitate to correct me if I'm off base on this.

tedster
msg:3336589
4:18 am on May 11, 2007 (gmt 0)

I love the comic relief in that PPT file - the examples of the trouble that word sense ambiguity can cause in real newspaper headlines:

Drunk Gets Nine Years In Violin Case
Farmer Bill Dies In House
Prostitutes Appeal To Pope
Stolen Painting Found By Tree
Red Tape Holds Up New Bridge
Deer Kill 300,000
Residents Can Drop Off Trees
Include Children When Baking Cookies
Miners Refuse To Work After Death

And after the chuckles stop, I look at my web page copy and see if I am doing anything like this. I know from trying to have precise technical discussions, both here and with clients, that words can be very slippery.

I'm thinking that the much-maligned computer translations that are available online could be a friend in this area. Run a troubled page through a couple of translations: en > fr > en or whatever -- and notice which words get badly trashed in the final output.
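A bare-bones way to script the check - the round-tripped text here is hard-coded, you'd paste in whatever your translator of choice hands back:

```python
# Sketch: diff the original copy against the en > fr > en output and
# list the words that didn't survive the round trip. Sample strings only.
original   = "red tape holds up the new bridge"
round_trip = "red ribbon supports the new bridge"   # stand-in translator output

lost = set(original.split()) - set(round_trip.split())
print("words trashed in translation:", lost)        # {'tape', 'holds', 'up'}
```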

tedster
msg:3336594
4:30 am on May 11, 2007 (gmt 0)

Hm... looking at those examples from the PPT that I posted above, it just struck me how often the ambiguity of the phrase turns on something other than a noun or verb. By force of keyword habit, nouns and then verbs are where I focus.

Marcia
msg:3336616
4:50 am on May 11, 2007 (gmt 0)

for example in the phrased based patent on spam

Exactly. Everyone's been so focused on the spam/penalty aspect that there could be 19k posts and we'd still be going around in circles with "this is spam" and "no, it isn't" and never get to the bottom of what this whole thing is really all about and how it works.

That's why I started a whole new thread with the focus on co-occurrence, because it's mentioned so many times in so many contexts across all those apps that it's almost like they're telling exactly what they're doing and we have to trip over it with our eyes closed to miss it.

It's about a whole indexing system, and let's face it: they didn't put in a whole new infrastructure (Big Daddy) as a conspiracy to bilk more Adwords dollars out of webmasters by using Adwords data against them, or as a beautification project for the Plex in between remodeling the lunchroom and restrooms with new decor and fixtures. :)

In this one it says very specifically:
"Phrase identification in an information retrieval system"

[0090]After the last stage of the indexing process is completed, the good phrase list 208 will contain a large number of good phrases that have been discovered in the corpus. Each of these good phrases will predict at least one other phrase that is not a phrase extension of it. That is, each good phrase is used with sufficient frequency and independence to represent meaningful concepts or ideas expressed in the corpus. Unlike existing systems which use predetermined or hand selected phrases, the good phrase list reflects phrases that actual are being used in the corpus. Further, since the above process of crawling and indexing is repeated periodically as new documents are added to the document collection, the indexing system 110 automatically detects new phrases as they enter the lexicon.

So no, they're not looking for sites going after "money phrases" gleaned from Adwords data, they're generating the taxonomy of phrases very specifically by analyzing the data on pages they fetch by crawling, and creating the posting lists of possible and good phrases by using data on the phrases encountered and the co-occurrence statistics. They're very clear and very specific on that point and even give details. That's why they call their collection the co-occurrence matrix.
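As I read the apps, the predictiveness test boils down to comparing a pair's actual co-occurrence rate against the rate you'd expect if the two phrases were independent. A rough sketch - the counts and the cutoff below are invented, not Google's:

```python
# Rough sketch of the "good phrase" test: a phrase earns its keep if it
# predicts at least one other phrase, i.e. the pair co-occurs notably more
# often than independence would suggest. All numbers here are made up.
N = 1000                                # documents in the toy corpus
doc_freq  = {"dog walkers": 40, "new york": 120, "ceramic tile": 90}
pair_freq = {("dog walkers", "new york"): 25,
             ("dog walkers", "ceramic tile"): 3}

THRESHOLD = 1.5                         # assumed cutoff for "predicts"

def gain(a, b):
    expected = (doc_freq[a] / N) * (doc_freq[b] / N) * N
    return pair_freq[(a, b)] / expected

for pair in pair_freq:
    g = gain(*pair)
    print(pair, round(g, 2), "predictive" if g > THRESHOLD else "noise")
```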

I think if we can put the spam part aside (which may be where collateral damage is accidentally happening) and put conspiracy theories aside, we can probably get to the bottom of at least a good part of what's going on, by looking at some details of what those papers are actually saying.

JoeSinkwitz
msg:3336624
5:07 am on May 11, 2007 (gmt 0)

One quick way to measure co-occurrence and its prominence would be to simply scrape the first 2 pages of rankings, the 20th 2, the 40th 2, the 60th 2, etc. until you hit the end, taking the last 2 full pages of results.

Run some covariance testing to see how the usage of text actually plays out in real terms. Once you have the data, you should be able to feed each document into some publicly available tools to see how everything is related to the chosen phrase. Each set of 20 docs should say something, especially the first 2 and the last 2 -- we did this in regards to localset inbounds and saw some interesting trends that looked like an inverted bell curve (meaning that the end-of-SERPs [EOS] results were as rich in localset anchor text as their 1st-page brethren).

The problem isn't just covariance, though, in the event that the same inverted bell curve shows up on the expected phrases; it is the disambiguation that you mentioned above. That is much harder to determine, due to the near-random types of other localset results returned that may be fudging the expected thresholds and results.
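In script form, with the scraping stubbed out as stand-in page text (the fetching and HTML-stripping is the easy part), the band-by-band comparison might look like:

```python
# Sketch: compare how a term's usage rate varies across rank bands
# sampled from the SERPs. Page texts are stand-ins for scraped copy.
import statistics

bands = {
    "pages 1-2":   ["cheap widget store widget reviews", "widget store coupons"],
    "pages 20-21": ["widget forum discussion", "my widget hobby blog"],
    "last 2":      ["widget widget widget buy now", "widget store widget deals widget"],
}

def usage_rate(texts, term):
    words = " ".join(texts).split()
    return words.count(term) / len(words)

rates = {band: usage_rate(texts, "widget") for band, texts in bands.items()}
print(rates)
print("variance across bands:", statistics.variance(rates.values()))
```

If the top band and the end-of-SERPs band come out looking alike, with the middle sagging, that's the inverted bell curve.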

Thanks for starting this thread Marcia.

Cygnus

Marcia
msg:3336663
6:15 am on May 11, 2007 (gmt 0)

One quick way to measure co-occurrence and its prominence would be to simply scrape the first 2 pages of rankings, the 20th 2, the 40th 2, the 60th 2, etc. until you hit the end, taking the last 2 full pages of results.

Cygnus, that would show you first-order co-occurrence, but would there be enough data in that kind of limited set to be able to include how the results have been influenced by second-order co-occurrence?

This paper is about LSI (which uses terms, not phrases), but second- or higher-order co-occurrence is a concept I've had a problem grasping, and this is about the clearest explanation of it I've seen:

Analysis of the values in the LSI Term-Term Matrix [webpages.ursinus.edu]

Google has such a HUGE amount of data, billions of pages, and basically what they're looking for is the ability to predict other good phrases. That would have to be based on a whole boatload of statistical data, including data on phrases that don't occur together, but each of which occurs with other phrases that, a few hops back, occur with each other.
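Here's a tiny numeric sketch of the "hops" idea, using an invented term-document matrix: Tokyo and sushi never share a document, but both co-occur with Japan, so the two-hop (second-order) count is non-zero:

```python
# Sketch: first- vs second-order co-occurrence from a term-document
# matrix (rows = terms, columns = documents). Data is invented.
import numpy as np

terms = ["tokyo", "japan", "sushi", "sashimi"]
A = np.array([[1, 0, 0],    # tokyo   appears in doc0
              [1, 1, 0],    # japan   appears in doc0 and doc1
              [0, 1, 1],    # sushi   appears in doc1 and doc2
              [0, 0, 1]])   # sashimi appears in doc2

first  = A @ A.T            # direct co-occurrence counts
second = first @ first      # two-hop paths through shared neighbours

i, j = terms.index("tokyo"), terms.index("sushi")
print(first[i, j])          # 0 -- never in the same document
print(second[i, j])         # 1 -- linked through "japan"
```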

annej
msg:3336664
6:15 am on May 11, 2007 (gmt 0)

Marcia, you are right. I just mentioned the spam patent as that is the one where we can see the results. What indications can be seen in the other applications?

JoeSinkwitz
msg:3337000
2:00 pm on May 11, 2007 (gmt 0)

Cygnus, that would show you first-order co-occurrence, but would there be enough data in that kind of limited set to be able to include how the results have been influenced by second-order co-occurrence?

No, it probably wouldn't be enough to explain everything that we're seeing thus far, but it'd probably help to develop a primer for one's niche. In terms of the second order, third order, etc., keep running it periodically, making note of new sites that pop into each section and how the co-occurrence of the previous sites changes -- that is obviously going to be a bit more tricky, and I haven't really thought about ways to script it just yet. However, if you can build enough data that shows what the first-order co-occurrence is, and see how sites move into that EOS issue as the co-occurrence changes, the changes themselves might be enough to highlight how the themes move (and possibly collide).

The amount of data required to do it right would be massive, but a hack-and-slash method of tracking the moving targets of how the co-occurrence evolves for themes, using limited data, is like using landmarks as a navigation system... good enough in most cases.
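The periodic part could start as nothing fancier than diffing snapshots - the co-occurrence figures below are invented:

```python
# Sketch: diff two co-occurrence snapshots for a phrase and flag what
# moved, as a crude theme-drift detector. Numbers are made up.
old = {"widget reviews": 0.31, "widget store": 0.22, "widget repair": 0.05}
new = {"widget reviews": 0.18, "widget store": 0.25, "widget rental": 0.09}

DRIFT = 0.05                            # arbitrary "worth a look" threshold

for phrase in sorted(set(old) | set(new)):
    delta = new.get(phrase, 0) - old.get(phrase, 0)
    if abs(delta) > DRIFT:
        print(f"{phrase}: {delta:+.2f}")
```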

Hopefully someone else has a more elegant solution than what I'm deriving with my Diet Coke-fueled mind.

Cygnus

annej
msg:3337760
3:37 am on May 12, 2007 (gmt 0)

One concern that I have is that some very niche topics will be so unique that there won't be sufficient data to find the validating co-occurring phrases.

In other words, if a page is about a common theme, Google will have a good picture of what the typical co-occurring phrases would be; but if there is an article about something like a unique historical occurrence, Google won't have the data necessary to evaluate the theme. Would theming be a problem, then, in terms of getting out information that is new to the Internet?

But to go anywhere with this possibility we need to know how to look for theming beyond penalties. Is there a way to do this? If so, we have to go by what we see in penalties and extrapolate from there.

Robert Charlton
msg:3337774
4:19 am on May 12, 2007 (gmt 0)

One concern that I have is that some very niche topics will be so unique that there won't be sufficient data to find the validating co-occurring phrases.

annej - I suspect that refinements to the basic algo, including semantic refinements (if they're being used), TrustRank, Local Rank, etc, don't kick in when the topic is very niche and there is only a small amount of data.

As the web grows larger and Google finds a level of competition sufficient to warrant more layers in the algorithm for previously sparse niches, then it's likely that the sample size will have also grown large enough to provide more dependable data.

I'm sure that Google evaluates margins of error in any given set of samples, though they may not always get things right. It may be that some of the cyclical SERP variations we've been seeing are Google constantly refining the data... and perhaps deciding when to apply new algo layers to a given niche.

[edited by: Robert_Charlton at 4:28 am (utc) on May 12, 2007]

annej
msg:3337787
4:33 am on May 12, 2007 (gmt 0)

Robert, That makes sense.

webastronaut
msg:3337802
4:40 am on May 12, 2007 (gmt 0)

Great post! Instead of posting any more nonsense myself, it's really time I read these patents more carefully and compare them to forum posts... I think Big Daddy was a lot bigger than what I thought was going down.

stever
msg:3337881
6:34 am on May 12, 2007 (gmt 0)

Nice post, Marcia, and as usual not enough attention paid to it...

I started to write about authorities, ontologies and Florida, and their relationship to phrase-based indexing, but got too far off topic.

So I would just say that for anyone interested in this area and who has a certain command of another language, a good place to keep an eye on is in the travel area of a country which uses the language that you have. Keep an eye on the local language and English language results.

Why?

1) Travel uses certain popular phrases (Location + X) which are essentially the same over different languages
2) Changes in search engines are often rolled out first in English before moving on (if ever) to other languages

Marcia
msg:3338000
11:28 am on May 12, 2007 (gmt 0)

stever, the response is nothing more or nothing less than expected, but nevertheless it's here for the type of folks who like to dig into this kinda stuff and who sleep with it under their pillows. ;)

authorities, ontologies and Florida

Good time for a couple of definitions:

Ontology:
Definition from Stanford:

What is an ontology? [www-ksl.stanford.edu]

Authorities:
It doesn't mean [n]K pages of "content" out-sourced off-shore and cranked out on a topic. The term was originated and defined by Jon Kleinberg in 1998 - he's the guy who invented it. SEOs and webmasters may or may not agree with (or like) IR scientists' and search engine engineers' view of what an "authority" site really is, but this is the original definition, cited as a reference in papers from the beginning right up to today:

Authoritative Sources in a Hyperlinked Environment [cs.cornell.edu] (PDF, loads slowly)

Fascinating point brought up, stever. The main symptoms that were howled about during the initial stages of the Florida update affected local-oriented sites - like dog walkers New York. Those were phrases, weren't they?

Marcia
msg:3338899
2:15 am on May 14, 2007 (gmt 0)

Incidentally, here's some interesting food for thought. Try doing a few searches and checking this one out:

In the other thread (the 950+ penalty thread) someone mentioned having a problem with the phrase "technical specifications sheet." Upon checking out the suspicions about that, this is what seems to be the case:

technical specifications - is not a problem
technical specifications sheet - is a problem

Think about it. ;)

JoeSinkwitz
msg:3338909
2:39 am on May 14, 2007 (gmt 0)

technical specifications - is not a problem
technical specifications sheet - is a problem

Interesting. In terms of natural language, perhaps it simply doesn't understand where "sheet" fits in regards to technical specs. I put a couple of combinations through the external Adwords tool to see what happened:
technical specifications
technical sheet
specifications sheet
technical specifications sheet

Unfortunately, Google thinks these are different things; a collided theme issue perhaps. Aside from taking out the "sheet" keyword, or maybe adding some specific anchor text for that phrase in conjunction with adding some synonyms of the sheet keyword, I see little that can be done by the webmaster. Training the filters to grow themes appropriately seems like a whole different topic.

Ganceann
msg:3338991
6:46 am on May 14, 2007 (gmt 0)

Sorry to bring it down to layman's terms, but there are some things that seem to create a paradox - the battle between relevance, SEO, spam and authoritative websites.

Relevance problems will always exist because of how language evolves: the meaning of words can (and does) change over time. Obviously not every word changes, but additional meanings can be added to words - this co-occurrence approach could improve relevance only if it takes the phrase in the correct context initially. Otherwise results will be nowhere near relevant.

This is likely why, on some search phrases/words, you may get 2-3 different groups of sections displaying on the first page - presenting the user with an option to improve the relevance and to return results with the correct context.

Spam - no sense mentioning it really, but as far as co-occurrence is concerned, this could affect more legitimate websites than actual black-hat ones - simply due to people trying to ensure they meet their keyword-density objective for phrases within each page.

Authoritative sites and SEO are a paradox in themselves, as well as going against the intention of delivering 'natural organic results'. Both will produce higher SERP positions than would be achieved through 'natural organic results' where no authoritative site exists.

Instead, any single page could be authoritative on a topic, rather than a whole site being authoritative for broader topics. With authoritative pages, SEO would play greatly into determining the SERP result. Again, it may be an organic result, but not a natural organic result.

Overall, I can see how Google have to maintain the integrity of their results and at the same time improve the appeal of Adwords. They just need to achieve the balance where sufficient recognised sites appear for terms. Obviously not all commercial sites can receive high positions naturally under any algorithm... many highly relevant non-commercial pages get buried due to sitewide ranking and are normally replaced with less relevant pages from commercial sites that are only broadly relevant, due to their 'perceived authority' status.

tedster
msg:3339005
7:13 am on May 14, 2007 (gmt 0)

I think that, in practice, there may well be a need for dictionary/thesaurus supplementation to properly account for idiomatic phrases -- especially newly emerging expressions.

Theoretically, it seems like a regular rebuilding of the predictive phrase tables should handle new language shifts just fine, at least for basic relevance scoring. But it seems to me that when penalties enter the picture, a phrase-based co-occurrence approach that only looks at character strings is going to have some potential troubles.

I appreciate that supplementing with dictionary look-ups is not all that elegant, but I can't see my way past it right now. But then again, I'm not looking at the actual implementation, just my mental picture of the theory - which is still evolving.

I would also feel more comfortable if these Google patents had fewer typos ;)

mattg3
msg:3339034
8:27 am on May 14, 2007 (gmt 0)

Besides newspaper headlines, surely poems and lyrics should be hit by this? Not my subject, just interested.

And the more bloomy you write the worse you will do. It's like algorithmically throttling the soul out of the web ...

The seven of nine, Data, Spock update ... ;)

Maybe they should show some Star Trek episodes at them Google Tech Talks ...

Marcia
msg:3339054
9:09 am on May 14, 2007 (gmt 0)

And the more bloomy you write the worse you will do

Sure, but will the bloomy lyrics or prose really represent a tangible topic? Using a word from the above example of technical specification sheets, what if we write a sentence on a page like this:

"Imagine a thick blanket of snow falling on the field, so that when you wake up in the morning it looks like the pasture has been covered by a freshly laundered white sheet."

We know what thick blankets are, what covers are (stemming: cover, covers, covered or covering), and we know what white sheets are; but what if it's written on a page about cross-country skiing that's selling equipment?

"Think about how it will feel to put on your Brand Name cross-country skis and have the brisk morning air caressing your face and stir your soul as you sweep across the field like a turbo-charged sailboat."

Where's the information gain for thick blanket or white sheet there? And yet we can envision someone with an "authority" ecom site whose page has a PR4 and isn't in the Supplemental Index for those phrases; if it ends up ranking #967 they could start screaming that they've been handed a 950+ penalty for thick blankets and white sheets because they're money phrases.

It's just a very colloquial use of language and there probably wouldn't be enough data to substantiate the page having genuine relevance for those phrases. I know it's VERY extreme, but some people do write like that.

inbound
msg:3339068
10:12 am on May 14, 2007 (gmt 0)

For those that may not know, Google released lots of n-gram data last year:

[googleresearch.blogspot.com...]

Have fun with it...
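The files are plain text, one n-gram per line with a tab-separated count (the format the corpus is distributed in, as far as I know), so a few lines of scripting goes a long way. A sketch, with invented sample lines in that format:

```python
# Sketch: stream a 3-gram count file and tally what most often follows
# a seed phrase. The sample lines below are invented stand-ins.
from collections import Counter

sample = """ceramics collectables collectibles\t55
ceramics collectables fine\t130
ceramics collected by\t52"""

seed = "ceramics collectables"
continuations = Counter()
for line in sample.splitlines():
    ngram, count = line.rsplit("\t", 1)
    *context, last = ngram.split()
    if " ".join(context) == seed:
        continuations[last] += int(count)

print(continuations.most_common())   # [('fine', 130), ('collectibles', 55)]
```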

julinho
msg:3339093
11:43 am on May 14, 2007 (gmt 0)

With phrase-based indexing, it seems that it starts out with a blank check and creates a taxonomy on the fly, discovering co-occurrences in order to construct the co-occurrence matrix

My (admittedly non-expert) thoughts:
I think that it would make sense if Google used their users to create and refine the taxonomy.

How do Google know that, for instance, the words Tokyo and Japan make a valuable co-occurrence, whereas Tokyo and Gabon do not? Because the former combination is searched thousands of times a day, and the latter is rarely searched. The users are telling Google which combinations of words they find relevant.

At the very first line of this patent [appft1.uspto.gov], Google claim:
1. A computer implemented method of organizing a collection of documents by employing usage information, comprising: receiving a search query; identifying a plurality of documents responsive to the search query; assigning a score to each document based on at least the usage information; and organizing the documents based on the assigned scores.

Every time a single search is conducted, scores and rankings change.
Accumulate millions of daily searches across the world and over time, give all the data to the PhDs, and they can probably find a use for it.
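Reduced to a sketch, the claim is something like the following; the click-through figures and the weighting are pure assumption, since the claim doesn't say how the signals combine:

```python
# Sketch: blend a base relevance score with usage information and
# reorder the results. All numbers and the weight are assumptions.
docs = [
    {"url": "a.example", "relevance": 0.80, "ctr": 0.02},
    {"url": "b.example", "relevance": 0.75, "ctr": 0.12},
]

USAGE_WEIGHT = 2.0

for d in docs:
    d["score"] = d["relevance"] + USAGE_WEIGHT * d["ctr"]

for d in sorted(docs, key=lambda d: d["score"], reverse=True):
    print(d["url"], round(d["score"], 2))    # b.example jumps ahead
```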

henry0
msg:3339104
12:01 pm on May 14, 2007 (gmt 0)

This sounds like (if I understand it…) a guide to delivering content that fits G's indexing patterns.

Are we back to neglecting to deliver a site that aims at users, and aiming at pleasing almighty G instead?

Also, when building a new site, most of the content is usually “web owner” supplied; will that existing content need to be rewritten?

Further, in light of such complexity, this should lead to a new specialization in content development.

julinho
msg:3339106
12:03 pm on May 14, 2007 (gmt 0)

So I would just say that for anyone interested in this area and who has a certain command of another language, a good place to keep an eye on is in the travel area of a country which uses the language that you have. Keep an eye on the local language and English language results.

That happens to be my case.

I've clearly seen the effects that seasonality (people from different countries searching for different cities in different seasons of the year) has on the SERPs.

glengara
msg:3339111
12:08 pm on May 14, 2007 (gmt 0)

In practical terms, should we not start seeing multi-word search queries returning fewer pages where the words only appear individually?

howiejs
msg:3339126
12:52 pm on May 14, 2007 (gmt 0)

anyone actually using this:
[googleresearch.blogspot.com...]

n-gram dump

I have heard people talk about it - but is anyone doing anything with it?

justageek
msg:3339168
1:47 pm on May 14, 2007 (gmt 0)

I have heard people talk about it - but is anyone doing anything with it?

Nope. It's outdated, incomplete and a bit expensive.

In 1999 I started using phrase-based indexing for a POC product I was working on. What I did then, and still do now just for fun, was simply use the major search engines to get the information to play with.

All you have to do is use your favorite scripting language, scrape the SERP for the phrase you are looking for, and do what you wish with it. If you look at the first listings in the 3-gram data you have to buy, it says:

ceramics collectables collectibles - 55
ceramics collectables fine - 130

But if you look at the current SERPS you get:

Google:
ceramics collectables collectibles - 129
ceramics collectables fine - 111

MSN:
ceramics collectables collectibles - 23
ceramics collectables fine - 537

Yahoo!:
ceramics collectables collectibles - 75
ceramics collectables fine - 124

Just going by the most recent Google data, the order between the phrases has changed compared to what you would buy on the DVDs. And looking at the other engines, 'ceramics collectables fine' can be found almost 4x as often as what Google currently says is out there, and almost 10x what the old data says is out there!

So, use scripts against current data instead of the old data you have to buy. One thing to watch out for, though, is that sometimes you don't get a SERP if you hit the engine too fast or too often. Your IP may get banned, so just switch to a new IP and continue on.
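Here's the skeleton of the comparison with the fetching stubbed out using the numbers above - a real version would parse each engine's reported result count off the SERP:

```python
# Sketch: compare live phrase counts per engine against the purchased
# n-gram count. The stub returns the figures quoted above.
def result_count(engine, phrase):
    counts = {("google", "ceramics collectables fine"): 111,
              ("msn",    "ceramics collectables fine"): 537,
              ("yahoo",  "ceramics collectables fine"): 124}
    return counts[(engine, phrase)]

DVD_COUNT = {"ceramics collectables fine": 130}   # the old 3-gram figure

phrase = "ceramics collectables fine"
for engine in ("google", "msn", "yahoo"):
    live = result_count(engine, phrase)
    ratio = live / DVD_COUNT[phrase]
    print(f"{engine}: {live} ({ratio:.1f}x the DVD count)")
```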

JAG

Marcia
msg:3339172
1:57 pm on May 14, 2007 (gmt 0)

How do you expand the phrases and get the co-occurrence data for those individual terms?

europeforvisitors
msg:3339173
1:59 pm on May 14, 2007 (gmt 0)

This sounds like (if I understand it…) a guide to delivering content that fits G's indexing patterns. Are we back to neglecting to deliver a site that aims at users, and aiming at pleasing almighty G instead?

Maybe Google's objective is to make the cost of optimizing for Google higher than the cost of simply writing or buying good, useful organic content (especially when "optimized content" is being optimized to hit a moving target).
