Tip #1: Use If-Modified-Since (IMS). IMS lets your web server tell Googlebot whether a page has changed since the last time it was fetched. If the page hasn't changed, we can re-use the content from the previous fetch. That in turn lets the bot download more pages and save bandwidth. I highly recommend that you check whether your server is configured to support If-Modified-Since. It's an easy win for static pages, and sometimes even pages with parameters can benefit from IMS.
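To make the mechanics concrete, here's a minimal sketch (in Python, with hypothetical function names) of the conditional-GET check a server performs. It's only illustrative; servers like Apache already do this for static files out of the box.

import os
from email.utils import formatdate, parsedate_to_datetime

def serve_static(path, if_modified_since=None):
    # Hypothetical helper: returns (status, headers, body) for one static file.
    mtime = int(os.path.getmtime(path))  # last change, whole seconds
    headers = {"Last-Modified": formatdate(mtime, usegmt=True)}
    if if_modified_since:
        try:
            ims = parsedate_to_datetime(if_modified_since).timestamp()
        except (TypeError, ValueError):
            ims = None
        if ims is not None and mtime <= ims:
            # Nothing changed since the bot's last fetch: send 304 and no body
            return "304 Not Modified", headers, b""
    with open(path, "rb") as f:
        return "200 OK", headers, f.read()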
Tip #2: You can use wildcards in robots.txt, and patterns can end in '$' to indicate the end of a name. So if you don't want Googlebot to fetch any PDF files, for example, you could say
Disallow: /*.pdf$
Don't forget that in the robots.txt file, every URL pattern needs to start with a "/" to be valid. Leaving it off is a pretty common webmaster error (maybe the most common robots.txt mistake), so keep it in mind and save yourself some angst. :)
Tip #3: Googlebot also supports an "Allow" directive in robots.txt. This lets you specifically flag areas that are okay to crawl. When two directives could apply, we follow the longest (i.e., most specific) one. See
[google.com...]
for an example.
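For instance (hypothetical paths, just to show the longest-match rule in action):

User-agent: Googlebot
Disallow: /folder/
Allow: /folder/public.html

Here the Allow line is longer (more specific) than the Disallow line, so Googlebot can fetch /folder/public.html while the rest of /folder/ stays off-limits.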
Tip #4: Avoid session ID's. If you can, use fewer dynamic parameters and stay away from the parameter "id=" in urls--Googlebot tries to stay away from things that might be session ID's.
Tip #5: Make sure that you can reach every page on your site with a text browser like lynx. That's the best way to make sure that a spider can follow links to all of your pages. Site maps can be a really good way to help users and spiders get down into different parts of your site.
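For example, to get a quick plain-text view of a page plus the list of links a crawler would see (example.com is just a placeholder), something like:

lynx -dump -listonly http://www.example.com/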
Some of these tips work mainly with Googlebot, but I hope that they help. Anybody else with nuts and bolts tips for site architecture, crawling, or robots.txt--throw 'em in! :)
Tip #2: You can use wildcards in robots.txt, and patterns can end in '$' to indicate the end of a name. So if you don't want Googlebot to fetch any PDF files, for example, you could say
Disallow: /*.pdf$
Well whack me in the head...SO...
Disallow: /*.pdf
will disallow *.pdf*, that is, anything whose extension starts with pdf is disallowed, right? (filename.pdf and filename.pdfx would both be disallowed)
and
Disallow: /*.pdf$
will disallow *.pdf only (filename.pdf is disallowed and filename.pdfx would be allowed)
I would also like to add a link to a thread about if-modified-since: [webmasterworld.com...]
And lynx, that's a great point, this way you even support some screen readers as well as the bots :)
/claus
That in turn lets the bot download more pages and save bandwidth. I highly recommend that you check to see if your server is configured to support If-Modified-Since. It's an easy win for static pages, and sometimes even pages with parameters can benefit from IMS.
Would it be possible to elaborate on how Googlebot handles first-time requests with respect to If-Modified-Since and "Last-Modified"?
The RFCs imply that a client should only use this header if it holds an actual "Last-Modified" value received in response to a previous request - this is to avoid problems of time synchronisation between hosts.
I've heard wind that Googlebot makes up an If-Modified-Since value that is ages ago and simply "hopes for the best", in which case I'm not sure I want to risk it.
I'm developing a forum system that has a "mod_rewritten" archive section that could certainly benefit both parties from using this header. The URI looks something like:
/archive/2003/07/21/1.html
and it would be a trivial matter to serve 304 ("Not Modified") if a request for this page offers an If-Modified-Since date later than the date to which the page refers. Not sure I want to risk it, though...
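Purely as a sketch of that idea (the URL layout is the one above; the helper name and the "frozen once the day is over" assumption are mine):

import re
from datetime import datetime, timedelta, timezone
from email.utils import parsedate_to_datetime

def archive_is_unchanged(request_path, if_modified_since):
    # True when a frozen archive page can safely be answered with 304.
    m = re.match(r"^/archive/(\d{4})/(\d{2})/(\d{2})/", request_path)
    if not m or not if_modified_since:
        return False
    year, month, day = (int(g) for g in m.groups())
    # Assume the page stops changing once that day has ended.
    frozen_after = datetime(year, month, day, tzinfo=timezone.utc) + timedelta(days=1)
    try:
        ims = parsedate_to_datetime(if_modified_since)
    except (TypeError, ValueError):
        return False
    if ims.tzinfo is None:
        ims = ims.replace(tzinfo=timezone.utc)
    return ims >= frozen_after

If it returns True, answer with 304 and no body; otherwise serve the page normally with a Last-Modified header so the next conditional request has a real value to echo back.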
Will Googlebot take the cookie and use the session id throughout its session?
Thanks
Although I'm not GoogleGuy (honestly! ;) I can confirm that Google doesn't take/read/store cookies at all.
athinktank, most session-based software packages (forums, CMSs, etc.) put the session ID into the URL if cookies are disabled on the client side. Therefore GoogleGuy said: turn off (did he say cloak off?) session IDs when Googlebot visits your site. Often this is done with a single "exclude user agent" line within your session creation code.
The reason is pretty easy to understand: session IDs are unique numbers, and therefore a page that uses session IDs in the request URI will get duplicated again and again every time Googlebot initiates a new visit to a session-based site.
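What that "exclude user agent" line might look like in practice (a hypothetical sketch; the real hook depends on your forum/CMS package):

CRAWLER_TOKENS = ("googlebot", "slurp", "msnbot")

def use_url_session_id(user_agent, client_sent_cookie):
    # Hypothetical check run before appending ?sid=... to links:
    # crawlers always get clean URLs; humans only get the URL fallback
    # when their browser rejected the session cookie.
    ua = (user_agent or "").lower()
    if any(token in ua for token in CRAWLER_TOKENS):
        return False
    return not client_sent_cookie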
Thanks, and excellent post.
I have 2 versions of a page:
[domain.com...]
and
[domain.com...]
How do I tell Google to read /blah, but not crawl /blah?something=*
That should be covered with an entry
Disallow: /blah?something
in your robots.txt.
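Spelled out as a complete record (a bare Disallow line needs a User-agent line above it to apply):

User-agent: Googlebot
Disallow: /blah?something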
Nevertheless, I just noticed that I fell off the 1st page for my most popular keyword. So I have been losing sleep at night thinking that I pissed off Googlebot when all I was trying to do was help it get through the site and not get tangled in a web of SIDs. As a result I removed the "cloaked" page and now I am left with what I am sure will be a SID mess. Any suggestions? I am very new to the world of SEO, so any advice will be greatly appreciated.
huhuh, wake up! ;)
Here's an ancient statement from GG (Dec 4, 2002):
Google and Session Killing [webmasterworld.com]
Everybody knows I'm pretty anti-cloaking, but WebGuerilla has already made strong points why it's okay to drop a session id for Google. <snip snap>
This is just my personal take, but allowing Googlebot to crawl without requiring session id's should not run afoul of Google's policy against cloaking. I encourage webmasters to drop session id's when they can. I would consider it safe. Fair enough?
Hope that helps,
googleguy