Looks like you checked the wrong variable in your second RewriteCond. You likely want to check $1 itself or %{REQUEST_URI} for loop-prevention, so that spider_delegate.php is not rewritten to itself recursively, as it is now. The two methods are identical in this case, except for the presence of a leading slash on the value of %{REQUEST_URI}.
You may wish to validate Googlebots first (as shown in the second code snippet below) before doing this rewrite.
# Delegate all Googlebot requests to spider_delegate.php, excluding
# requests for spider_delegate.php itself to prevent a rewrite loop
RewriteCond %{HTTP_USER_AGENT} Googlebot/
RewriteCond $1 !^spider_delegate\.php$
RewriteRule ^(.*)$ /spider_delegate.php?url=$1 [L]
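For comparison, here is the same loop-prevention test using %{REQUEST_URI}; the only difference is the leading slash in the test pattern:

# Equivalent rule, testing %{REQUEST_URI} instead of $1
RewriteCond %{HTTP_USER_AGENT} Googlebot/
RewriteCond %{REQUEST_URI} !^/spider_delegate\.php$
RewriteRule ^(.*)$ /spider_delegate.php?url=$1 [L]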
Note that it is not necessary to put ".*" at the start of a pattern if that pattern is not start-anchored with a "^", and it is similarly unnecessary to put ".*" at the end of a pattern if that pattern is not end-anchored with a "$". See the regular-expressions tutorial cited in our Apache Forum Charter.
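For example, the following two patterns match exactly the same requests (denying .gif files here is purely an illustrative action); the leading "^.*" in the first adds nothing:

# These two rules are functionally equivalent
RewriteRule ^.*\.gif$ - [F]
RewriteRule \.gif$ - [F]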
Literal periods (and other reserved characters) in regex patterns should be escaped with a "\" as shown.
Neither RewriteRule patterns nor RewriteConds examining %{REQUEST_URI} will "see" query strings; these are data appended to the URL, and are not considered part of the URL itself when handled by mod_rewrite.
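If you do need to test the query string, examine the %{QUERY_STRING} variable in a RewriteCond instead. A minimal sketch, using a made-up parameter name "badparam" for illustration:

# Deny any request whose query string contains a "badparam" parameter
RewriteCond %{QUERY_STRING} (^|&)badparam=
RewriteRule ^(.*)$ - [F]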
Optional: Block spoofed Googlebots with simple validation before doing the rewrite:
# Return 403-Forbidden to GoogleBot spoofers except for requests
# for robots.txt and the custom 403 error response page itself.
RewriteCond %{HTTP_USER_AGENT} Googlebot/
RewriteCond %{HTTP:From} !^googlebot\(at\)googlebot\.com$
RewriteCond $1 !^robots\.txt$
RewriteCond $1 !^path-to-custom-403-error-page\.html$
RewriteRule ^(.*)$ - [F]
If you don't have a custom 403 error document declared, you won't need the exclusion for its URL-path.
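For reference, such a document is declared with the ErrorDocument directive, and the exclusion above must name the same URL-path (the filename here is only a placeholder):

ErrorDocument 403 /path-to-custom-403-error-page.html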
Jim