|Google never forgets a page|
| 11:11 am on Jan 14, 2014 (gmt 0)|
I just realized Google never forgets a page. Let's say I remove my website today, delete everything, and then reinstall it 10 years from now: in a matter of hours all my links will reappear and my ranking will also reappear…
This is both good and bad, I think, because what happens if you have an issue one day with your website?
Does Google remember your duplicate-content pages forever, for example, even after you fix the issue with rel-canonical, meta-noindex, or a 410?
Or does it store those pages in some index that doesn't count toward ranking?
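For reference, the first two fixes mentioned above are markup placed in the page itself, roughly like this (illustrative snippets only, with a placeholder URL):

```html
<!-- rel-canonical: points search engines at the preferred version
     of a page that exists under several URLs -->
<link rel="canonical" href="https://www.example.com/preferred-page/">

<!-- meta-noindex: asks search engines not to index this page,
     while still allowing them to crawl it -->
<meta name="robots" content="noindex">
```

A 410 is different in kind: it is an HTTP status code returned by the server for a removed URL, not something in the page's markup.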
| 3:35 pm on Jan 14, 2014 (gmt 0)|
|Let's say I remove my website today and delete my website and then reinstall it 10 years from now in a matter of hours all my links will reappear and my ranking will also reappear… |
I doubt that the new site in a matter of hours would take over the same search rankings as the old site had. For one thing, most of the old backlinks would no longer exist. Also, the competition would have changed during those 10 years. Also, Google's ranking algorithm would have changed.
But I do wonder about one thing: namely, if you completely delete a site, could Google's archive of its content have an effect on the rankings of other sites that are still live on the web?
I recall one case in particular where someone said that they intended to delete a penalized site and then re-create it on a new domain, but without any redirects since the old site would no longer exist. In other words, the new site would have the same structure and content as the deleted site, but it would be on a different domain. The question was whether the new site would be given a totally fresh start with an unblemished record, or whether its rankings would instead be affected in some way by Google's archive of the old site. I'm wondering if anyone has ever tried this, and if so, what was the result.
| 4:06 pm on Jan 14, 2014 (gmt 0)|
I did a test over 2 years ago, and the site came back with the same exact rankings and the same exact links. (10 years down the road I can't say, but let's not forget that Google's wealth is its index, so why would it forget about pages? ;-)
Concerning the change in algorithm, I agree with you that Google adds new things here and there that it counts or doesn't count in its ranking. But let's not forget that the base has been the same for over 15 years and will still be there in 15 more. For sure there will be improvements to give better results to the users, but the base of the algorithm is there; it is patented and cannot be changed.
The last thing you mentioned, regarding a new domain name with the content of the old website, is interesting. I would also like to hear from anyone out there who has tried it; it would be interesting to know the results.
| 7:55 pm on Jan 14, 2014 (gmt 0)|
Search engines must have a way of comparing content. A couple of years back, wmt briefly said that such-and-such site linked to several of my pages "via this intermediate link". Obviously nothing of the sort was going on. (I checked to make sure, but it really was obvious.) The only way it could have happened is that the same (public-domain) content existed on my site and theirs, and the computer got confused about who points to whom.
I'm currently experiencing a more amusing version of the same thing, where wmt claims that scores of pages on my old site link to pages on my new site, and vice versa. (That is: not the identical pages, but they're claiming to see many, many links in both directions.) Again, the only way this can happen is if the content and the URL are stored in different parts of the computer. So the computer reads it as "page that says ABC links to page that says DEF" rather than "URL 123 links to URL 456".
Edit: I just went by gwt to check. I'd expect it to be leveling off by now. Instead, they're reporting far more links than there are pages on the site!
| 5:25 am on Jan 16, 2014 (gmt 0)|
It is not that it forgets or remembers. Googlebot is a robot that does all the crawling and information-gathering work, and even pages that are hidden, set as private, or buried in inner folders and directories can be accessible to Googlebot. With a noindex tag or robots.txt we just inform the bots what to do and what not to do, but if you read about these directives, it is mentioned that it is up to the search engines whether to follow your request.
| 8:05 am on Jan 16, 2014 (gmt 0)|
A search engine has to crawl a page in order to see its noindex tag. But even google would get into trouble if it made a habit of crawling roboted-out pages.
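That interaction is worth spelling out: a robots.txt Disallow stops the crawl, so a noindex tag on a blocked page is never seen, and the URL can stay indexed from external links alone. A toy model in Python (illustrative only, not Googlebot's actual logic; the paths and rules are made up):

```python
# Sketch of why robots.txt and noindex can work against each other.
# Hypothetical paths and rules, for illustration only.

def is_blocked_by_robots(path, disallow_rules):
    """Return True if any Disallow rule is a prefix of the path."""
    return any(path.startswith(rule) for rule in disallow_rules)

def crawl_decision(path, disallow_rules, page_has_noindex):
    if is_blocked_by_robots(path, disallow_rules):
        # The crawler never fetches the page, so it cannot see noindex;
        # the URL can still be indexed from external links alone.
        return "not crawled; noindex invisible; URL may stay indexed"
    if page_has_noindex:
        return "crawled; noindex seen; page dropped from index"
    return "crawled and indexed"

print(crawl_decision("/private/page.html", ["/private/"], page_has_noindex=True))
# → not crawled; noindex invisible; URL may stay indexed
```

The upshot matches the post above: to get noindex honored, the page must stay crawlable.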
| 5:39 pm on Jan 20, 2014 (gmt 0)|
I went through the page-removal process. It took months, using robots.txt and 301s as Google recommended for permanent removal...
Got about 4,000 pages removed, or so I thought. They were really not pages at all to my way of thinking (reply to post #453, old permalink structure, or wap2, pdf...), but I figured they were gone from the index.
So the other day I thought I'd remove all the do-nots and the 301s, and here they come again... same old pages, same old growing 404 list...
If you guys don't want to discuss this (cleaning up my Google index and all the 404 reports), please direct me to the latest thread on this subject.
Or, if anyone knows for certain how 404s affect a site, I'd like to hear what you have to say... Thanks
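On the 404 question above, one distinction that may matter: a 404 means "not found" and can be read as temporary, while a 410 means "gone" and signals deliberate, permanent removal, which search engines are generally said to act on faster. A minimal .htaccess sketch, assuming an Apache server (the paths are placeholders, not the actual URLs from this thread):

```apache
# Any missing URL returns 404 by default. To say "removed on purpose,
# permanently", return 410 Gone for the retired paths instead:
Redirect gone /old-permalinks/
Redirect gone /retired-page.html
```

Other servers have equivalents (e.g. `return 410;` in an nginx location block).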