I'm having trouble with server-side includes for a drop-down menu on my site. When a search-engine robot crawls a page it reads everything in the drop-down menu, and since the menu appears on every page, it lowers the relevancy of my search results.
If I add the line: Disallow: /include.inc
to my robots.txt file, will that block the robot from looking at the included file on the page, or will it still read the include after the code has been processed?
If that does not work, can anyone suggest a way to stop the robot reading a drop-down menu in an include, other than using client-side scripting?
No, that won't block it. Your server includes the file into the page (say index.shtml) and sends the assembled page back to the spider, which then indexes the full file (index.shtml itself has not been excluded by robots.txt). The spider knows nothing about the includes, as your server processed them prior to despatch.
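To illustrate the point (a minimal sketch; the filenames and menu markup are hypothetical): the SSI directive exists only on the server. By the time the response leaves your server, the directive has been replaced with the contents of include.inc, so there is nothing left for robots.txt to match against.

```html
<!-- index.shtml as stored on your server -->
<html>
  <body>
    <!--#include file="include.inc" -->
    <p>Page content here.</p>
  </body>
</html>
```

```html
<!-- What the spider actually receives: the directive is gone,
     replaced by the menu markup from include.inc -->
<html>
  <body>
    <ul><li><a href="/page1.shtml">Page 1</a></li></ul>
    <p>Page content here.</p>
  </body>
</html>
```

The spider requested index.shtml, not include.inc, so the Disallow rule for /include.inc is never even consulted.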
Client-side scripting is one of the few possibilities, but that amounts to cloaking. Google do not like cloaking and can (and will) drop you from their index for this tactic.
Some people seem to think it helps to use CSS to position the menu where you want it, whilst placing the drop-down menu markup at the end of your HTML file. I haven't tried this yet, so I could not comment on whether it is true. It rests on the assumption that search engines give more weight to words that appear earlier in the HTML, and that they only index a fixed amount of the code, ignoring anything past that point.
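The technique above can be sketched roughly like this (the ids and styles are hypothetical, and whether it actually influences rankings is unverified): the content sits first in the source, while the menu markup comes last but is pulled to the top of the rendered page with absolute positioning.

```html
<html>
  <head>
    <style>
      body  { margin-top: 3em; }       /* leave visual room for the menu */
      #menu { position: absolute;      /* render at the top of the page  */
              top: 0; left: 0; width: 100%; }
    </style>
  </head>
  <body>
    <!-- Keyword-rich copy comes first in the source order -->
    <div id="content">Important page content here.</div>
    <!-- Menu markup is last in the file but displays at the top -->
    <div id="menu"><a href="/">Home</a> | <a href="/about.shtml">About</a></div>
  </body>
</html>
```

A crawler that weights early content, or truncates long pages, would then see the copy before the repeated menu.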