The database forms part of a unique Java-driven historical translator for widget collectors, which gives the date of design registration and the maker of all Victorian widgets.
The database consists of 1,300 entries, each holding the Victorian codes for day, month and year, plus the name of the maker and the town. The user simply enters the four different alphanumeric codes found within the registration mark on the antique widget, and if they are valid the answer is given as to when it was registered and who made it.
It has inbound links from quite a few (clean) sites plus major ones such as a government's patents office and a National library.
Would the data within this database now have become a target of the hidden-text algo, or is there some other possible reason for this one page dropping to PR0?
I fully agree that it would probably be great for response times etc., and would be worth implementing at some point, BUT the point is the PR0 the page now has: is it a penalty?
I've just checked and the page is now PR0 on six of the datacentres with only EX and IN remaining at PR5. The sole point of the page is the translator and an inline database should not receive any penalties as it contains the unique content for the translator.
All these things can contribute to a PR0, which doesn't necessarily mean the PageRank is exactly zero; it may just be less than 1. Anyway, there may be a page penalty, but without looking at the code there is no way of telling for sure.
Of those, 2 are PR6, 1 is PR5, 7 are PR4, 1 is PR3, 4 are PR2, 2 are PR0, and 1 is gray.
The PR0s are not reciprocal links (there is no link anywhere to their sites on mine), and they seem to be large links pages. The gray one is an eBay 'ME' page.
My site's pages have all been PR5 for the past six months. Is it likely that the two unsolicited links to my page from the PR0 sites are causing a penalty? If so, how on earth can this be prevented?
A typical entry in the database would be:
It's convenient to keep it in the HTML because, once the page is loaded, collectors can do multiple wildcard searches without any further file access. It has worked fine for six months, with good feedback from users regarding its ease of use and speed in giving a result.
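For readers unfamiliar with the inline-data approach being discussed, a minimal sketch of such a lookup table and wildcard search might look like the following. All entry fields, example data, and function names here are invented for illustration (and simplified to three of the four codes); the actual page's script is not shown in this thread.

```javascript
// Hypothetical sketch of an inline lookup table kept in the page's <head>.
// Each entry: [dayCode, monthCode, yearCode, maker, town] — invented data.
var widgetDB = [
  ["12", "C", "X", "J. Smith & Sons", "Birmingham"],
  ["01", "A", "P", "Acme Widget Co.", "Sheffield"]
];

// A "*" entered by the collector matches any value for that code.
function matches(code, value) {
  return code === "*" || code === value;
}

// Return every entry whose codes match the three user-entered codes.
function lookup(day, month, year) {
  var results = [];
  for (var i = 0; i < widgetDB.length; i++) {
    var e = widgetDB[i];
    if (matches(day, e[0]) && matches(month, e[1]) && matches(year, e[2])) {
      results.push(e);
    }
  }
  return results;
}
```

Because the whole array ships with the HTML, every search after the initial page load runs purely in the browser, which is presumably what gives the translator its speed.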
I'm sorry, but I'm still not understanding why Googlebot should have a problem with this, as it's all within the <head></head> tags, and I understood that Google disregarded anything within script tags.
If Google is in fact picking this data array up as spam, then surely a lot of similar translators/calculators that use the inline-data method will also suffer?
Or is some factor affecting this page's PR adversely?