A URL-only listing occurs when Google is aware of a URL, but has not (recently) fetched it. Now that Google have supplemental listings that last for a year or more, we can read "not recently" as "not for a rather long time".
So, it is possible to get a URL-only listing (usually after the URL had a supplemental listing) because the links to it have been removed. It is also possible to get a URL-only listing because the URL is excluded by /robots.txt.
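For anyone unfamiliar with that second case, an exclusion of this kind is just an ordinary Disallow rule. A minimal sketch (the path here is purely illustrative):

```
User-agent: *
Disallow: /private/
```

Google can still list URLs under /private/ that it discovers through links, but only as URL-only entries, since it is not permitted to fetch the page content to build a title and snippet.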
For large sites, the usual reason is that Googlebot has crawled deep enough to find the URL from a linking page, but not deep enough to fetch the URL.
How long might it take for a URL to come back, either from being url-only, or from going url-only and then vanishing completely a day or two later, once whatever was wrong has been fixed?
In my case I've had pages fully listed for years and within 15 days 60% of pages went url only and now are vanishing completely. No pages were ever supplemental and the pages that still exist still rank ok.
The only patterns I can see for my pages going url-only are:
1) too much boilerplate on pages with only a few paragraphs of content
2) too much unnecessary internal linking to high-profit pages
sailorjwd, basically if a URL is unfetched then you can encourage Google to fetch it if you link to it from another URL that Google fetches often.
Usually when a lot of URLs from a domain disappear from Google, it's because of a temporary hosting issue and they come back quite quickly.
If Google have decided not to list pages on your domain for quality reasons, then I don't think I could give you any kind of timescale.
My hosting company/website was down for about 18 hours on the 5th of May. But it seems way more pages are being affected than would have been spidered during that time period. Hopefully that is the cause, and since this event has given me the nudge to clean up inefficient html and remove some unneeded links, maybe pages will come back better than ever.
All of the pages in one of the subcategories which showed their URL only are now back to normal (title, description and URL), with caches dated May 18 on some of the datacenters.
Well, it looks like we'll have to wait and see if some of the URL listings revert to normal listings as a result of Bourbon. Some strange points I found going through the logs. Almost all of my url only listings have been crawled since 4/1. Interestingly, those spidered on 4/1 have normal listings while those spidered since then fall into three categories.
1. Normal listing - these are rare, typically one per sub-category.
2. URL only listings.
3. No listing in Google at all.
Anywho, my fix is something that should help my situation regardless of what Bourbon does.
I can only speak for myself. I have a site with 600+ pages indexed; last week 30% went url-only after the whole site had been indexed and showing properly. I specifically kept track of each and every page, then set up a filter to send me an email when each one of those pages was visited by G. Sure enough, each url-only page was visited by G, and within 2 days I had fresh tags and full proper listings. This is the first time I have seen this; it almost appears that a url-only listing without supplemental is a cue for G to visit the page.
any thoughts on this?
I had five well established sites fall out of Google’s index and into the supplemental results back in January; these sites are now listed by URL only.
My sites were all affected at the same time. I attribute my situation to a domain name server problem with my host that prevented Google spidering my sites for over 2 weeks.
However what I don’t understand is that my sites have been in supplemental results for over 5 months. They get spidered only about one or two times each month now. Why is it taking Google so long to get them back into the index? Each site has over 1000 incoming links.
40 pages just came back from url-only or totally missing, plus a new page created Thursday, all with May 20 cache dates.
maybe G was just toying with my mind :)
At least the site is now totally white hat.
I specifically disallowed Googlebot from crawling dynamic pages, yet they show 20,000 of them as "URL only" listings. Makes for a very ugly site: command, but I don't think it affects much.
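For anyone wanting to do the same, a pattern-based rule like the one described might look like this (a sketch with illustrative paths; Googlebot does honour the * wildcard in robots.txt):

```
User-agent: Googlebot
Disallow: /*?
```

This blocks Googlebot from fetching any URL containing a query string, but as noted above, the blocked URLs can still show up as "URL only" listings if Google discovers them through links.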
|URL-only listing occurs when Google is aware of a URL, but has not (recently) fetched it |
The endless repetition of this statement fools the unwary into thinking that it is true. Current reality is different.
17 of the first 20 SERPs for a site:mysite.com are url-only. Only 2 of the 17 have not been recently parsed by G, and the evidence from my logs is that a URL has to be parsed **3 TIMES** before it will get a title + snippet.
See [webmasterworld.com...] (msg60) for more detail.
Do any of your URL-only pages have only one word or phrase changing (for example a city name), with the remainder of the on-page text the same as on other URL-only pages?
|Do any of your URL-only pages have only one word or phrase changing |
Currently, 89% of the first 100 SERPs are URL-only. I can therefore confidently tell you that each page differs radically from the others. There are, of course, common elements on each page.
Searching site: will show all my pages without title & description, but if I search allinurl: they are there!
I had a load of URL-only listings when the server went down and Googlebot obviously got blocked. They came back to normal as soon as G returned a few weeks later and crawled the whole site again.