Removing URLs from Google Search Results

     
3:32 pm on Aug 18, 2013 (gmt 0)

New User

joined:Aug 18, 2013
posts: 2
votes: 0


Hello guys, I need help with removing URLs from Google search. I run a blog at www.example.com, and in Google search results I am getting duplicate results like "site:www.example.com/tag" <snip>. I have tried methods like a Google removal request and robots.txt, but I am still facing the same issue. Please help me with this.

[edited by: aakk9999 at 4:17 pm (utc) on Aug 18, 2013]
[edit reason] ToS, exemplified domain name [/edit]

4:31 pm on Aug 18, 2013 (gmt 0)

Moderator This Forum from GB 

WebmasterWorld Administrator 5+ Year Member Top Contributors Of The Month

joined:Apr 30, 2008
posts:2619
votes: 186


Welcome to WebmasterWorld!

From reading your post I presume you have added:

Disallow: /tag

to your robots.txt, and then gone to Google Webmaster Tools and removed the folder /tag using the Remove URLs feature.
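
For reference, a minimal robots.txt along those lines would look like the following (assuming you want the rule to apply to all crawlers; a User-agent line is required for the Disallow to take effect):

User-agent: *
Disallow: /tag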

How long ago was this done? I am asking because it can take some time for Google to process your request. Also note that your removal request will be active for only 90 days, so if you want to keep these URLs out of Google's index, you will need to repeat the request periodically.

The alternative to the above would be to add a meta robots noindex to the /tag pages. In this case you need to ALLOW Google to crawl the /tag URLs so that Google can see the noindex directive (i.e. if you have disallowed crawling of /tag URLs, you will need to remove Disallow: /tag from your robots.txt).
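
As a sketch, this is the standard meta robots markup that would go in the <head> of each /tag page (the name "robots" targets all crawlers; you could use "googlebot" instead to target Google only):

<meta name="robots" content="noindex">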

In this case too it may take some time for Google to process these pages and remove them from its index, because Google will need to request each individual /tag URL in order to see the noindex directive.

6:58 pm on Aug 18, 2013 (gmt 0)

Senior Member

WebmasterWorld Senior Member Top Contributors Of The Month

joined:July 19, 2013
posts:1097
votes: 0


The alternative to the above would be to add a meta robots noindex to the /tag pages. In this case you need to ALLOW Google to crawl the /tag URLs so that Google can see the noindex directive (i.e. if you have disallowed crawling of /tag URLs, you will need to remove Disallow: /tag from your robots.txt).

This is the best way I know of, because if a URL is disallowed in robots.txt and there are links pointing to it, Google is known to index the URL anyway, based on the information in the links and the surrounding text.

Another way of accomplishing noindex, if it's impractical to put a noindex meta tag on the page itself, is to send an X-Robots-Tag response header with a value of noindex via .htaccess or httpd.conf.
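
For example, on Apache with mod_setenvif and mod_headers enabled, something like this in .htaccess should do it (a sketch only: TAG_PAGE is an arbitrary variable name, and the pattern assumes your tag pages share a /tag prefix):

SetEnvIf Request_URI "^/tag" TAG_PAGE
Header set X-Robots-Tag "noindex" env=TAG_PAGE

As with the meta tag, the URLs must stay crawlable so Google can fetch them and see the header.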
12:12 am on Aug 19, 2013 (gmt 0)

Administrator

WebmasterWorld Administrator phranque is a WebmasterWorld Top Contributor of All Time 10+ Year Member Top Contributors Of The Month

joined:Aug 10, 2004
posts:10563
votes: 16


Welcome to WebmasterWorld, techcloud7!


The noindex solution is the best method, as it's a long-term fix and it works across all search engines.
In order for this to work, you must allow crawling of the /tag URLs.
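
A quick way to sanity-check both conditions from the command line (hypothetical URLs; assumes curl is available):

curl -s http://www.example.com/robots.txt
curl -s http://www.example.com/tag/some-tag | grep -i noindex

The first request should show no Disallow rule blocking the tag pages, and the second should find the noindex directive in the served HTML.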

One thing that isn't clear from your post is whether your URLs literally contain "/tag/", like www.example.com/tag/tag-name, or whether you mean generic tag URLs like www.example.com/tag-name.
How many tags are there, and do you want all of them noindexed or just some?
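
The answer matters for a rule-based approach like the X-Robots-Tag header mentioned above. A sketch of the difference, in the same Apache syntax (the patterns and slugs are hypothetical):

# literal /tag/ prefix: one pattern covers every tag page
SetEnvIf Request_URI "^/tag/" TAG_PAGE
# generic /tag-name URLs share no prefix, so the slugs would
# have to be listed explicitly (tag-one and tag-two are made up)
SetEnvIf Request_URI "^/(tag-one|tag-two)$" TAG_PAGE
Header set X-Robots-Tag "noindex" env=TAG_PAGE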