
Forum Moderators: goodroi


Disallow pdf

     
10:00 am on Sep 9, 2008 (gmt 0)

Full Member

10+ Year Member

joined:Aug 9, 2005
posts:240
votes: 0


We upload PDFs to our server, and they are linked in the following way:

[download.ourwebsite.com...]

(download. is a subdomain and /pdf/ is a folder within it).

The /pdf/ directory itself is closed; you just see a forbidden error. If you go to [download.ourwebsite.com...] it redirects you to our main site.

For some reason Google has picked up one of our customers' PDF files. The only explanation I can think of (as these links are only shared through email) is that the customer has posted the link somewhere public on the web.

Is it possible in robots.txt to stop Google from picking this up? If so, where do I place robots.txt: in the root of the subdomain or within the /pdf/ folder? And what do I put in robots.txt?

Thanks in advance :-)

1:05 pm on Sept 9, 2008 (gmt 0)

Administrator from US 

WebmasterWorld Administrator goodroi is a WebmasterWorld Top Contributor of All Time 10+ Year Member Top Contributors Of The Month

joined:June 21, 2004
posts:3317
votes: 243


hi Fire,

Google could have found that URL if someone posted it online or if someone visited it and had the Google Toolbar installed. It is hard to prevent Google from knowing about any URL. Knowing about a URL is different from actually accessing it.

To prevent Google from accessing the PDFs you should upload a robots.txt file to the root of your subdomain. You can use a wildcard to prevent Google from accessing any PDF file, or you can use the standard folder exclusion.

Whichever you choose, make sure to validate it so you know it is doing the right thing. To be extra safe, check it a few times over the first 2-3 weeks just to make sure everything is how you want it.
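For example (the hostname is just illustrative), a robots.txt served from the root of the download subdomain could use the standard folder exclusion:

User-agent: *
Disallow: /pdf/

or a wildcard rule that blocks every PDF wherever it sits (the * and $ wildcards are extensions honoured by Google, not part of the original robots.txt standard):

User-agent: *
Disallow: /*.pdf$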

1:29 pm on Sept 9, 2008 (gmt 0)

Senior Member

WebmasterWorld Senior Member 5+ Year Member

joined:May 31, 2008
posts:661
votes: 0


Is it possible in robots.txt to stop Google from picking this up? If so, where do I place robots.txt: in the root of the subdomain or within the /pdf/ folder? And what do I put in robots.txt?

put the robots.txt in the root of the subdomain (it won't be read in a subdirectory)
and put
User-agent: *
Disallow: /pdf/

in there.

3:00 am on Sept 23, 2008 (gmt 0)

New User

5+ Year Member

joined:Aug 4, 2008
posts:2
votes: 0


You can also use the X-Robots-Tag to keep these documents out of the index (robots.txt alone only controls crawling). The tag is sent in the HTTP response header. You can find info at NoArchive.net.

You are also going to want to request that the URLs be removed from the index.
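For illustration, on an Apache server with mod_headers enabled (an assumption; other servers have equivalent directives), the header could be added for every PDF like this:

<FilesMatch "\.pdf$">
Header set X-Robots-Tag "noindex, noarchive"
</FilesMatch>

Note that Googlebot has to be able to fetch a file to see this header, so a robots.txt disallow on the same URLs would stop the tag from ever being read.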

2:31 am on Dec 6, 2008 (gmt 0)

Senior Member

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month

joined:Mar 7, 2003
posts: 1079
votes: 9


OK, this helps me part of the way.

how does one ask to have many files removed from the index?

I know I can do this through Webmaster Tools, but are there other ways to do it besides naming each file individually?

12:02 am on Dec 7, 2008 (gmt 0)

Senior Member from US 

WebmasterWorld Senior Member 10+ Year Member

joined:Nov 11, 2007
posts:774
votes: 3


Through Google's Webmaster Tools you can remove individual URLs, pages in a directory and all of its subdirectories, or an entire site.

4:29 am on Dec 21, 2008 (gmt 0)

Senior Member

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month

joined:Mar 7, 2003
posts:1079
votes: 9


OK, so I remove the PDF files using robots.txt and/or GWMT. (Current tests show this is not yet working and it is a couple of weeks along, but hey, I can wait.)

Let's say there are dozens, maybe hundreds of these PDF files that are linked to from other sites.

Does disallowing PDF files from being spidered prevent the off-site links from passing link juice to my site?

9:25 am on Dec 21, 2008 (gmt 0)

Senior Member

WebmasterWorld Senior Member 10+ Year Member

joined:Mar 10, 2006
posts:665
votes: 0


I've been using

 User-agent: *

 Disallow: /*.PDF$ 

 User-agent: Googlebot

 Disallow: /*.PDF$ 

This is supposed to work. Unfortunately, Google and Yahoo appear to be ignoring it, which is a shame. MSN obeys it, though.

9:39 am on Dec 21, 2008 (gmt 0)

Senior Member

WebmasterWorld Senior Member 10+ Year Member

joined:Apr 30, 2007
posts:1394
votes: 0


Unfortunately Google and Yahoo appear to be ignoring this, which is a shame. MSN obeys it though.

robots.txt is more of a "guidelines" thing. If you want to properly block them, you should set up a script that sends the PDF to the client instead of allowing a direct download, and in it check whatever is necessary (e.g. session, headers, customer permissions).
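As a rough sketch of that idea (assuming a Python/Flask setup; the directory, session key and permission table are all hypothetical), the download URL would hit a script rather than the file itself:

from flask import Flask, abort, send_from_directory, session

app = Flask(__name__)
app.secret_key = "change-me"  # required for session support

# hypothetical: keep the PDFs outside the web root
PDF_DIR = "/srv/files/pdf"

# hypothetical permission table; replace with a real database lookup
ALLOWED = {"cust-123": {"report.pdf"}}

@app.route("/download/<path:filename>")
def download_pdf(filename):
    # refuse anyone who is not logged in or not entitled to this file
    customer = session.get("customer_id")
    if not customer or filename not in ALLOWED.get(customer, set()):
        abort(403)
    return send_from_directory(PDF_DIR, filename, as_attachment=True)

Because the files no longer sit at a plain URL, a leaked link is useless to a crawler or to anyone without a valid session.
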
9:50 am on Dec 21, 2008 (gmt 0)

Senior Member

WebmasterWorld Senior Member 10+ Year Member

joined:Mar 10, 2006
posts:665
votes: 0


Well, in my example, the rule is intended to disallow the main three search engines from indexing PDF files, which 'should' also help the opening poster.

robots.txt may indeed be guidelines related, but it is a guideline that Google, at least, claims to obey.

12:20 pm on Dec 21, 2008 (gmt 0)

Administrator

WebmasterWorld Administrator phranque is a WebmasterWorld Top Contributor of All Time 10+ Year Member Top Contributors Of The Month

joined:Aug 10, 2004
posts:11080
votes: 106


@bb
Could it be a file name case issue?

In general you should list the disallows for the specific bots before the wildcard user agent is specified.
In your case those disallows appear redundant.
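For what it's worth, Googlebot reads only the group addressed to it and ignores the wildcard group, so a separate Googlebot section is only worth having when its rules actually differ. A hypothetical example:

User-agent: *
Disallow: /

User-agent: Googlebot
Disallow: /pdf/

Here Googlebot is blocked only from /pdf/ while every other compliant bot is blocked from the whole site; identical rules in both groups simply duplicate each other.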

5:02 pm on Dec 21, 2008 (gmt 0)

Senior Member

WebmasterWorld Senior Member jimbeetle is a WebmasterWorld Top Contributor of All Time 10+ Year Member Top Contributors Of The Month

joined:Oct 26, 2002
posts:3295
votes: 9


Google and Yahoo appear to be ignoring this...

If your actual robots.txt is written as you did above, they well might. Blank lines in robots.txt indicate "end of record," so some bots might choke on the blank line between the user-agent line and the directive. All you should need is...

User-agent: *
Disallow: /*.PDF$

...with a blank line following it. But keep in mind that many bots don't follow the non-standard "standards" that the majors have implemented, so your PDFs might get into the wild anyway.

11:34 am on Dec 22, 2008 (gmt 0)

Senior Member

WebmasterWorld Senior Member 10+ Year Member

joined:Mar 10, 2006
posts:665
votes: 0


could it be a file name case issue?

Do you mean .PDF or .pdf? I didn't know this was an issue. Otherwise, I don't understand.

If your actual robots.txt is written as you did above, they well might

No, that's just my BB code formatting going awry. ;) The full robots file is as follows.

User-agent: *
Disallow: /*.PDF$
Disallow: /*.DOC$

User-agent: Googlebot
Disallow: /*.PDF$
Disallow: /*.DOC$

keep in mind that many bots don't follow the non-standard "standards" that the majors have implemented

I'm only concerned with Google, and as I understand it the above disallows should be obeyed by Googlebot.
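On phranque's earlier point about case: robots.txt pattern matching is case-sensitive, so these rules only cover files whose names end in uppercase .PDF and .DOC. If some files use lowercase extensions (an assumption about the naming), both spellings would need to be listed:

User-agent: *
Disallow: /*.pdf$
Disallow: /*.PDF$
Disallow: /*.doc$
Disallow: /*.DOC$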

 
