| 5:02 pm on Jul 30, 2003 (gmt 0)|
I would use form buttons instead of hyperlinks
| 1:09 pm on Aug 1, 2003 (gmt 0)|
Good suggestion; however, for this particular purpose text links look much better, and afaik they don't go well with a form...
| 1:51 pm on Aug 1, 2003 (gmt 0)|
why don't you block the spider from following the link in the robots.txt?
| 3:48 pm on Aug 1, 2003 (gmt 0)|
Well, as far as I know I can only block the spider from following the links on the page altogether (the robots nofollow meta tag) or from accessing the files in a certain directory (robots.txt). In this case, however, those are text links looking like /mypage.php?vote25&para=12 ...
Maybe this is a case for cloaking specialists?
| 12:29 am on Aug 6, 2003 (gmt 0)|
Some more input on this issue would be greatly appreciated. Thanks!
| 4:00 am on Aug 6, 2003 (gmt 0)|
Maybe you could code it in manually by IP address. Make a small database of all the search engine IPs that are bothering you and put them in there.
Then, when it's time for people to vote, check their IP address against the IP database, and if one row matches, do not allow them to vote. If you would like more help doing it this way, which imho is the best way, post back or sticky me.
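The check described above might be sketched like this (the addresses and names here are placeholders for illustration, not real spider IPs; a real site would load the list from its own database):

```python
# Illustrative sketch of the IP-blocklist idea: keep a set of crawler
# addresses and refuse to count votes coming from them.
# The entries below are documentation-range placeholders, not real spider IPs.
SPIDER_IPS = {
    "192.0.2.10",   # e.g. a crawler you caught voting
    "192.0.2.11",
}

def may_vote(remote_ip):
    """Allow the vote only if the visitor's IP is not a known spider."""
    return remote_ip not in SPIDER_IPS

# On each vote request, check the client address before counting the vote:
if may_vote("192.0.2.10"):
    print("count the vote")
else:
    print("ignore the vote")
```

The drawback the later posts point out still applies: the list has to be kept up to date by hand as new crawlers show up.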
| 9:07 am on Aug 6, 2003 (gmt 0)|
Use a form button with an image as the button:
<INPUT TYPE="IMAGE" SRC="images/mybutton.gif">
You can then make an image which looks exactly like the current text link
| 10:00 am on Aug 6, 2003 (gmt 0)|
Hm... as I said, for this particular purpose text links look much better... I guess it's not possible using a form with a text link?
The IP addresses seem like the best approach right now; on the other hand, it would be something I would need to check on a regular basis. Also, I'm not quite sure which spiders are the culprits - I caught Googlebot in the act, but I'm not sure who or how many the others might be.
| 12:30 pm on Aug 7, 2003 (gmt 0)|
You could make it look exactly the same as the link text, there would be no visual difference at all.
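One way to get that effect (a sketch, not from the thread: instead of an image button, it styles an ordinary submit button with inline CSS so it renders like the surrounding text links; the action and field names are made up for the example):

```html
<form action="mypage.php" method="post">
  <input type="hidden" name="vote" value="25">
  <!-- Strip the button chrome so it looks like a plain text link -->
  <input type="submit" value="Vote for this"
         style="border: none; background: none; padding: 0;
                color: blue; text-decoration: underline; cursor: pointer;">
</form>
```

Because the vote is now a form POST rather than a plain href, well-behaved spiders won't trigger it just by crawling the page.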
| 12:34 pm on Aug 7, 2003 (gmt 0)|
You could maybe do some trickery with robots.txt to block the pages in question. Have a look at [webmasterworld.com...]
You can use pattern matching in robots.txt as follows:
Patterns must begin with / because robots.txt patterns always match absolute URLs.
* matches zero or more of any character.
$ at the end of a pattern matches the end of the URL; elsewhere $ matches itself.
* at the end of a pattern is redundant, because robots.txt patterns always match any URL which begins with the pattern
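The matching rules listed above can be sketched as a small function (illustrative only - actual crawler support for `*` and `$` wildcards varies by engine, and the example patterns are made up):

```python
import re

def robots_pattern_matches(pattern, url_path):
    """Match a URL path against a robots.txt pattern using the rules
    above: '*' matches zero or more of any character, a trailing '$'
    anchors the end of the URL, and otherwise it is a prefix match."""
    anchored = pattern.endswith("$")
    if anchored:
        pattern = pattern[:-1]
    # Escape regex metacharacters ('$' elsewhere matches itself),
    # then turn each '*' back into '.*'
    regex = ".*".join(re.escape(part) for part in pattern.split("*"))
    if anchored:
        regex += "$"
    # re.match already anchors at the start, giving the prefix behaviour
    return re.match(regex, url_path) is not None

# e.g. a Disallow pattern aimed at vote URLs with a query string:
print(robots_pattern_matches("/mypage.php?vote*", "/mypage.php?vote=25"))  # True
print(robots_pattern_matches("/mypage.php$", "/mypage.php?vote=25"))       # False
```

A `Disallow: /mypage.php?vote*` line would then keep wildcard-aware crawlers off the vote links while leaving /mypage.php itself indexable.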
| 10:03 pm on Aug 7, 2003 (gmt 0)|
Mmm, I'd rather not try any trickery with the robots.txt file when it comes to excluding files only because of different parameter strings. Last time I did this, hundreds of my pages accidentally flew out of the index.
Krapulator, that _would_ be perfect, but I have no idea how to do it. Any concrete advice?
| 3:07 pm on Aug 8, 2003 (gmt 0)|
Sticky me the urls that the current text links point to when you vote, and I will send you a concrete example.