Forum Moderators: open
Help please
suggestions?
All of a sudden, Google started crawling my old IP.
___
Yes, I have seen that also. Three times!
It occurred just before ALL THREE of my sites disappeared. In each case the old IPs were hit within one week of the new site's disappearance.
I'm convinced beyond a doubt that Google is somehow using whois records, or spidering old out-of-date domain listing sites, and that this is causing the situation.
Sure hope it doesn't happen to you. It is really frustrating.
I am almost paranoid about IP changes that I really need to make on a few sites.
This happened to one of my slow-death sites too.
I still have two pages in the index without www, tagged as Supplemental Results.
Trawler,
Yeah, Google is 100% having a problem with its DNS cache.
Search for 301 in WW.
[webmasterworld.com...]
[webmasterworld.com...]
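The usual fix for the www/non-www split those threads discuss is a server-side 301 redirect, so that only one hostname ever gets indexed. As a minimal sketch (the hostname here is a placeholder, not anyone's real site), this is the URL mapping such a redirect implements:

```python
from urllib.parse import urlsplit, urlunsplit

def canonical_url(url, canonical_host="www.example.com"):
    """Rewrite any host variant (e.g. example.com) to the one canonical host.

    This is the mapping a server-side 301 redirect performs, so Google
    only ever sees a single version of each page.
    """
    parts = urlsplit(url)
    return urlunsplit((parts.scheme, canonical_host,
                       parts.path, parts.query, parts.fragment))

print(canonical_url("http://example.com/widgets?color=blue"))
# http://www.example.com/widgets?color=blue
```

In practice you would configure this in the web server itself (e.g. a rewrite rule returning a 301 status) rather than in application code, but the mapping is the same.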
The strange thing was that some pages from a site had a description and others had none, though they were more or less identical (e.g. if one page was for big blue gadgets and the other for small blue gadgets, both with a description for blue gadgets, then only one showed a description on Google). All similar pages were listed, but most of them with no description. That is where the thought about a penalty comes from.
Has this been disproved or confirmed?
The reason I ask is: will a company name listed in the title of every page, and then again in the body of those pages, be considered duplicate content by the algo?
I think if the title is the same on all pages, it is definitely not a good thing. Another thing I have noticed is that putting the company name last seems to be worthwhile: put the most relevant unique words first, then the more common words, and the duplicated words last.
For example:
round big blue widgets - Company name
square big blue widgets - Company name
round small blue widgets - Company name
square small blue widgets - Company name
and so on.
The company name as title on all pages is certainly not a good idea but it will not give you a duplicate penalty.
What we found was that pages (on the same site) that were similar to each other (different product variations but the same description for all products) got a near-duplicate status. One of the pages was indexed fully, and its title and description are shown.
For all the other pages with variations of that product, no description shows up.
All similar pages are listed if I use site:domain.com +"www.domain.com" with the duplicate filter off (filter=0), but apart from one of them, all are listed with no description.
Searching for it in any shape or form, it doesn't exist, and all backlinks are gone (this happened recently, 1-2 weeks ago), but it still has a PR of 5.
How can it have a PR if it's not in the index?
It's a waiting game, I'm afraid.
FYI, I have over 1000 good-quality links, over 800 good content pages, and at least 7 PR8 sites linking one-way, not reciprocal.
Did anyone ever get a reply from Googleguy on this topic?
The last PR update, or whatever it's called, showed no PR change though. :(
It may be that the links are only just now coming in, and it may be that the site isn't yet completely spidered. But why are the links changing when Yahoo shows no change in links yet? This is getting weird. All pages that have been spidered and included in the index are showing a PR of 0, or a white bar.
Seems the "Oxford English Dictionary" site [oed.com...] is suffering the same fate, even with its PR of 9. It only shows 43 pages, and almost all the pages left have just the URL and a Similar Pages link.
The robots.txt of [oed.com...] looks like the one below; is that the reason why only 43 pages are indexed?
User-agent: *
Disallow: /accesslogs/
Disallow: /ads/
Disallow: /apps/
Disallow: /backtocs/
Disallow: /browse/
Disallow: /cgi/
Disallow: /conf/
Disallow: /content/
Disallow: /content
Disallow: /feature/
Disallow: /future/
Disallow: /guides/
Disallow: /home/
Disallow: /help/
Disallow: /icons/
Disallow: /include/
Disallow: /math/
Disallow: /subscribe/
Disallow: /news/
Disallow: /about/
Disallow: /services/
Disallow: /framesok/
Disallow: /general/
Disallow: /pdfs/
Disallow: /readers/
Disallow: /tour/
Disallow: /archive/
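Whether those Disallow rules actually cover a given URL can be sanity-checked with Python's standard-library robots.txt parser. A quick sketch against a shortened copy of the rules above (the /cgi/entry/123 path is made up for illustration):

```python
from urllib.robotparser import RobotFileParser

# Shortened copy of the oed.com rules shown above.
robots_txt = """\
User-agent: *
Disallow: /cgi/
Disallow: /content/
Disallow: /content
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# The homepage "/" is not matched by any Disallow rule, so it is crawlable:
print(rp.can_fetch("Googlebot", "http://www.oed.com/"))               # True

# Anything under a disallowed prefix is blocked:
print(rp.can_fetch("Googlebot", "http://www.oed.com/cgi/entry/123"))  # False
```

Which is exactly the puzzle here: the rules block most of the site, but nothing in them explains the homepage itself going missing.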
Their robots.txt was pointed out to me this morning.
There is however a MAJOR problem.
Where is their homepage in the SERPs? It is not blocked by the robots.txt. Also, only 4 pages show a cache link in the SERPs, and they are ALL "Page Not Found".
Come on Google, I may not know much about this crazy search engine world but I can see this is not right.
Wait, I just looked at their homepage:
<meta name="robots" content="noarchive">
Maybe they have just decided to ban all robots, but why? And with the tag above, why is Google still looking at some of their pages?
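For what it's worth, noarchive only asks engines not to show a cached copy of the page; on its own it does not block crawling or indexing (that would take noindex), which would explain why Google still lists some pages. A quick sketch of pulling the robots directives out of a page, using a made-up snippet standing in for their homepage:

```python
from html.parser import HTMLParser

class RobotsMetaScanner(HTMLParser):
    """Collect the directives from any <meta name="robots"> tags on a page."""
    def __init__(self):
        super().__init__()
        self.directives = set()

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and (a.get("name") or "").lower() == "robots":
            self.directives.update(
                d.strip().lower() for d in (a.get("content") or "").split(","))

# Made-up snippet standing in for the oed.com homepage:
page = '<html><head><meta name="robots" content="noarchive"></head></html>'
scanner = RobotsMetaScanner()
scanner.feed(page)

print(scanner.directives)               # {'noarchive'}
print("noindex" in scanner.directives)  # False: the page may still be indexed
```

So the tag explains the missing cache links, but not the pages dropping out of the index.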
The pages are vanishing as we speak, now showing only 34, and once again no homepage.
Check again the results for your search.
site:oed.com -qwerrew is now showing 1470
Now seeing "Supplemental Result" on some of the DC's.
site:www.oed.com -qwerrew is now showing 230
Can I really be the only one who can see this?