Does anyone know of a search command to determine how many pages Google lists as URL-only?
Then do this: site:www.mydomain.com "special term"
and this: site:www.mydomain.com -"special term"
The first will show you all the properly-indexed pages.
The second will show you the URL-only pages.
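Note that this trick assumes the quoted phrase really does appear on every properly indexed page (a footer line, say). If you run the check often, something like the following minimal Python sketch can build the two query URLs for you; the domain, the phrase, and the helper name are placeholders, and it only prints URLs for you to paste into a browser:

```python
from urllib.parse import quote_plus

def google_query_urls(domain, phrase):
    """Hypothetical helper: build the two site: queries described above.

    The first finds the properly indexed pages (those containing the
    phrase); the second finds the URL-only listings (those without it),
    assuming the phrase appears on every fully indexed page.
    """
    base = "http://www.google.com/search?q="
    indexed = base + quote_plus('site:%s "%s"' % (domain, phrase))
    url_only = base + quote_plus('site:%s -"%s"' % (domain, phrase))
    return indexed, url_only

full, bare = google_query_urls("www.mydomain.com", "special term")
print("Properly indexed:", full)
print("URL-only:        ", bare)
```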
The nice thing about the "site:" command is that Google is not in a position to sabotage it the way they sabotaged the "link:" command. That's because there are thousands of sites out there that have placed a Google search box on their page, and this box has a "search this site only" option on it that utilizes the "site:" command. If Google tried to sabotage the "site:" command, all of these webmasters would start screaming that Google isn't covering their sites very well.
I have also observed this problem for many months. I read somewhere here that if Google hasn't re-indexed a page for a certain period of time, no title will show.
That seems reasonable, since I get daily visits from Googlebot but have not seen a deep crawl since the summer. Googlebot previously deep-crawled my site monthly, getting almost all of the 17,000 pages, but now it fetches only a few hundred a week. All of the pages show in the index, but only around 800 have titles.
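If you want hard numbers on how deep Googlebot is actually crawling, you can tally its requests straight from your access log. A rough Python sketch, assuming an Apache-style combined log named access.log (both the filename and the date layout are assumptions about your setup):

```python
import re
from collections import Counter
from datetime import datetime

# Count Googlebot requests per day from an Apache-style combined log.
DATE = re.compile(r'\[(\d{2}/\w{3}/\d{4})')

daily = Counter()
with open("access.log") as log:
    for line in log:
        if "Googlebot" in line:
            match = DATE.search(line)
            if match:
                daily[match.group(1)] += 1

# Print the tallies in chronological order.
for day in sorted(daily, key=lambda d: datetime.strptime(d, "%d/%b/%Y")):
    print(day, daily[day])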
I don't have any of the site problems mentioned here, yet my site is still affected. I wish I knew what to do about it.
Based on these results, I would say that one of the reasons for URL-only indexed pages is some form of duplication.
Not sure it isn't related to dup content either (I do not have duplicate content, but there is a copyright notice on each page, and it seems GG may be picking that up as dup content?).
Some of my sites show URL only, some show title with no description and no cache.
Some related threads for you guys:
Then I removed the noarchive tag from all the pages, and once Google cached them all, the problem went away. My theory is that if Google attempts to read a page but can't load it for whatever reason, and the page has no cache information, the current snippet/title get lost.
But that's just my wild theory.
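For anyone who wants to test this theory on their own site, a quick first step is to sweep your pages for a leftover noarchive directive. A throwaway Python sketch (the URL list is hypothetical, and a real audit should parse the HTML rather than do a substring check):

```python
import urllib.request

# Flag pages that still serve a noarchive directive anywhere in the HTML.
pages = [
    "http://www.example.com/",
    "http://www.example.com/products.html",
]

for url in pages:
    try:
        html = urllib.request.urlopen(url).read().decode("utf-8", "replace")
    except Exception as exc:
        print(url, "-> fetch failed:", exc)
        continue
    if "noarchive" in html.lower():
        print(url, "-> still carries a noarchive directive")
    else:
        print(url, "-> clean")
```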
How do you offer 35,000 products for sale without having templates for the pages? The main category and subcategory pages are still ranking well, except for a few that may have the same problem as the pages that are produced by the database.
These are all static HTML pages. In the past, all of the pages were indexed, showed descriptions, and ranked well.
What bothers me is that many of my competitors use the same structure, and a few now have identical pages showing, one as a supplemental result, with the only difference being a capital letter in the subdomain. These are exactly the same pages, HTML and content, yet they rank #3 and #4 in the search results.
Yes, it is also showing up for the site:domain query, and the page description is taken from our homepage even though it's only a link to us. It is supplemental, though.
Been sunk in depths of Google since March 2004.
Interesting - two of my sites went this way in March 2004 as well, and the cache still shows March 2004. Does anyone know of any filters etc. that were applied at that time?
If I search Google on site:www.mydomain.com -"some specific info from my site"
all I see is:
and more of the same
so does this mean these sites are stealing my site's rankings?
Google indexes the above URL without including the sub-URL (the download URL), like this:
I feel this is what is happening:
1) If your site does not have enough authority (i.e. links), then the GG bot may decide not to index your site regularly. That gives the effect that after a while your pages in the index "grow old" and therefore get listed by TITLE only (TITLE and URL still showing, but no description) -> solution: add more links, and possibly re-submit to GG via Add URL (if the site has been indexed previously)
2) Another situation is where only the URL is showing, with no TITLE and no DESCRIPTION. This happens when "similar content" has been found; you will often see it on very similar pages -> solution: remove the duplicate content (and look at the forum posts relating to the www / non-www issue - see the sketch just below)
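On the www / non-www point in 2): you can check in a few seconds whether your bare domain properly 301-redirects to the www host (or vice versa). A minimal Python sketch, with example.com standing in for your domain:

```python
import urllib.request
import urllib.error

# Surface the redirect (if any) instead of silently following it.
class NoRedirect(urllib.request.HTTPRedirectHandler):
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None

opener = urllib.request.build_opener(NoRedirect)
try:
    resp = opener.open("http://example.com/")
    print("No redirect: status", resp.status,
          "- both hosts serve the page, a duplicate-content risk")
except urllib.error.HTTPError as err:
    if err.code in (301, 302):
        print("Redirects to", err.headers.get("Location"), "- good")
    else:
        print("Got HTTP", err.code)
```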
Also make sure that your robots.txt and the like are in order and that your site is spider-friendly at all times.
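Checking the robots file doesn't have to be guesswork, either: Python's standard library can evaluate your robots.txt the same way a well-behaved spider would. A small sketch (the domain and paths are placeholders):

```python
import urllib.robotparser

# Confirm Googlebot is allowed to fetch a few key URLs per robots.txt.
rp = urllib.robotparser.RobotFileParser()
rp.set_url("http://www.example.com/robots.txt")
rp.read()

for path in ("/", "/products/widget.html", "/sitemap.html"):
    ok = rp.can_fetch("Googlebot", "http://www.example.com" + path)
    print(path, "->", "allowed" if ok else "BLOCKED")
```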
The main reason I came to this conclusion is a directory site of mine: when I started it, the pages showed as URL only (i.e. not enough links). Then, as I added links, more and more of the normal content pages (i.e. the directory categories) became fully visible (title, description, URL), but the pages that are similar (the "add URL" page for every category, which is nearly identical across the site) still show as URL only.
So in any case, it is always good to add links and to get rid of similar pages. It is my belief, however, that with enough authority links it does not matter whether you have similar pages, as your authority has been established. Obviously, not a lot of authority sites will link to you if you have duplicate pages...
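If you want to hunt down the "too similar" pages before Google does, a crude pairwise comparison of your static HTML files will surface the worst offenders. A sketch using difflib (the folder, the glob pattern, and the 0.90 threshold are arbitrary choices, and the pairwise loop is slow on large sites):

```python
import difflib
import glob

# Read every HTML file under site/ into memory.
files = sorted(glob.glob("site/*.html"))
docs = {}
for name in files:
    with open(name, encoding="utf-8", errors="replace") as fh:
        docs[name] = fh.read()

# Report any pair whose raw text is at least 90% identical.
THRESHOLD = 0.90
for i, a in enumerate(files):
    for b in files[i + 1:]:
        ratio = difflib.SequenceMatcher(None, docs[a], docs[b]).ratio()
        if ratio >= THRESHOLD:
            print("%.2f  %s  %s" % (ratio, a, b))
```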
All the successful one-domain large businesses on the internet do not rely on Google alone - i.e. eBay, Amazon, Microsoft. They all rely on their offline promotion, TV promotion, user recommendations, etc. People like us who do rely on Google alone will have to start thinking outside the box BIG TIME. There is no way I can continue my inconsistent business on the web through Google (unless of course I continue to manage multiple domains, which has proved successful... but that is harder work than working at Wal-Mart for 5 bucks an hour, methinks).
If you look, for example, at all the porn sites and crack sites out there: most (all) of them are not one-domain websites. They are promoted by hundreds of domains, and they separate their content out into hundreds of domains. They break all sorts of rules on Google, and they are the most successful on Google (crack sites come up higher than Microsoft many times). Does that tell you anything? So if one domain goes bonkers on Google, or heck, if 30 domains go bonkers on Google, it doesn't matter, because they still have all that reliability of multiple domains.
Check out this person's quote - it explains exactly the problem I described above, about thinking outside the box and not just relying on Google.
"Between May and September we had a lot of success with a new site. The pages were slowly taken and cached. A comprehensive site map helped out. Visitors were doubling month on month and sales were high.
However, since September the reverse has become true. Pages steadily fell out of the index, our position slipped, and customers vanished. We had changed nothing and were totally white hat in everything we did.
It seems that sometime around September our site map stopped being cached fully (too big?) and the spider was too lazy to follow other links outside the site map.
We changed the site map and restructured our site to no avail, as the bot just does not seem to want to know. It randomly visits a handful of pages and randomly caches a fraction of those.
This is so frustrating, because we are a start-up; Google is our sole marketing tool and source of income. Sales are virtually nil now, and it seems that all we can do is wait. What are the rules? What changed?"
There is no way a business should rely on Google alone. I've recognized this but haven't taken enough action. You can't put all your eggs in one basket in this situation (Google is worse than Microsoft here, yet Google runs on Linux - what a clash that is).
I know why people still use TV and offline marketing, but there have to be other online solutions in addition to Google. No wonder there are so many dot-com roller coasters. I'd say one of the main reasons is that people like myself and the fellow above just focus on Google marketing and pretend that "waiting 3 months" is a viable solution. Google marketing is the only "light we see", and boy do we have to start thinking outside of Google - or OUTSIDE the glass box that Google holds us in.