This morning I added the following code to my httpd.conf file for Apache:
RewriteCond %{REQUEST_FILENAME} myfilename
RewriteCond %{QUERY_STRING} id=([^&;]*)
RewriteRule ^myfilename\.cgi$ myFileName.cgi?id=%1 [R=301,L]
The purpose was to redirect lower-case links to my mixed-case file name. (I know, I know... I never should have used mixed case to begin with, but it has been this way for 9 years now... =)
My question: does anyone know whether Google frowns upon a redirect like this? From their perspective, might they ignore the case difference and treat it as if I were redirecting a page to itself? If so, that could look seedy or shady... and I would hate to be penalized for a redirect whose only purpose is to fix a case problem.
Any thoughts on this?
Thanks everyone.
It appears that your first RewriteCond is redundant -- are you sure you need it when you're already testing the URL-path in the RewriteRule? Now, the URL-path and REQUEST_FILENAME are *not* exactly the same thing, but unless you're using another RewriteRule that affects either of them, the first RewriteCond here should not be needed at all.
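Untested, but if you drop that first condition, these two lines alone should handle the redirect you described -- the %1 in the substitution still picks up the id captured by the remaining RewriteCond:

RewriteCond %{QUERY_STRING} id=([^&;]*)
RewriteRule ^myfilename\.cgi$ myFileName.cgi?id=%1 [R=301,L]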
Jim
My original thought behind the first condition was to avoid evaluating the rule on every request unless it was necessary. Since the second condition holds true for all requests, the rule would end up being evaluated every time. I was hoping that adding the first condition might save some of the cost of running the rule.
Not sure about that theory... but that was my theory... ;-)
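That said, from what I can tell re-reading the mod_rewrite docs (and I may be wrong), the rule's pattern is tested before its conditions, so the extra condition probably wasn't saving any work anyway:

# My (possibly mistaken) reading of how mod_rewrite handles each request:
# 1. The RewriteRule pattern (^myfilename\.cgi$) is tested against the URL-path first.
# 2. Only if that pattern matches are the RewriteCond lines evaluated, in order.
# 3. If every condition passes, the substitution and the [R=301] redirect are applied.
RewriteCond %{QUERY_STRING} id=([^&;]*)
RewriteRule ^myfilename\.cgi$ myFileName.cgi?id=%1 [R=301,L]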
Thanks again for the feedback. Right now I'm torn about implementing this, because my logs show tons of scraper programs from various IPs trying to access my content via the lower-case version of the file name, so at the moment they are getting a page-not-found error.
So do I open the door for them while fixing a few bad inbound links, or do I just leave it as it is?
I'm leaning towards fixing the problem and letting those scrapers in. I figure what is a few more... if I already have tons scraping me... ;-)
I used to have tons of scrapers, and wasted far too much time threatening DMCA take-downs. But some well-selected IP range blocks, a user-agent whitelist, and some bad-bot-blocking scripts posted here on WebmasterWorld have slowed the scrapers almost to a halt. "A few more scrapers" is far too many...
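For what it's worth, the blocking itself doesn't have to be fancy. Something along these lines works in plain mod_rewrite -- the address range and user-agent strings below are only placeholders (pull the real offenders out of your own logs), and this is just the deny side; a proper user-agent whitelist takes more care:

# Block a known-bad address range (placeholder range -- substitute your own)
RewriteCond %{REMOTE_ADDR} ^192\.0\.2\. [OR]
# Block a few obvious scripted user-agents (placeholders, not a real whitelist)
RewriteCond %{HTTP_USER_AGENT} (libwww-perl|lwp-trivial) [NC]
RewriteRule .* - [F]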
Jim