No need to capture if you won't be reusing the match. A simple .? (unanchored) for the pattern is all you need.
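For illustration, a rule using that shorter pattern might look like the sketch below (assuming the usual .htaccess context; the target here is just a placeholder):

```apache
# Hypothetical sketch: unanchored pattern, no capture group.
# ".?" (zero or one of any character) matches every request, so
# there is no need for "(.*)" when the match is never reused
# in the substitution.
RewriteRule .? /index.php [L]
```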
So I've tried redirecting such requests:
RewriteRule ^/?$ /index\.php [L]
That's not a redirect. It's a rewrite. And it's only doing what mod_dir would already be doing on its own: serving up the named page "index.php" when there is a request for the root.
All the RewriteRules in the world won't stop requests; they only affect what happens to requests after they've been received. At the next stage, all the lockouts in the world won't reduce log size. (Firewall, maybe. If it reaches the server, it's logged. You can change the logging level of error logs, but don't try to mess with overall access logs. That's information you need.)
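As a minimal sketch of the error-log knob mentioned above (Apache 2.4 syntax assumed), this changes only error-log verbosity and leaves the access log alone:

```apache
# Log only warnings and worse to the error log.
# This does NOT affect the access log - every request that
# reaches the server is still recorded there.
LogLevel warn
```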
Besides, you can't simply block requests for the root. Humans ask for it too -- and you can't identify humans ahead of time, because requests for supporting files come in after requests for the page they belong to. And what about search engines?
Or did you mean something else when you said "empty"? Requests without a User-Agent, maybe?
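If "empty" did mean a missing User-Agent header, one common approach is a RewriteCond test -- shown here only as a sketch, with the caveat that some legitimate clients omit the header too:

```apache
# Hypothetical sketch: refuse requests that send no User-Agent.
# ^$ matches an empty (or absent) User-Agent value.
RewriteCond %{HTTP_USER_AGENT} ^$
# "-" means no substitution; [F] returns 403 Forbidden.
RewriteRule .? - [F]
```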
Lucy24 is right - don't do it. If you access your own site from a bookmark in your browser, look at the logs and you'll see that you would be blocking yourself if you block those empty "GET / HTTP/1.1" requests.
Edited to say what I meant - 1.1, not 1.0. Block all the 1.0 requests you want, so long as it is not a WP site (WordPress uses HTTP/1.0 to run its cron updates).
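A sketch of what blocking HTTP/1.0 requests could look like, assuming the same .htaccess context -- and again, not something to deploy on a WordPress site for the cron reason above:

```apache
# Sketch: deny any request made over HTTP/1.0.
# THE_REQUEST holds the raw request line, e.g. "GET / HTTP/1.1",
# so matching the trailing protocol token catches 1.0 clients.
RewriteCond %{THE_REQUEST} HTTP/1\.0$
RewriteRule .? - [F]
```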
[edited by: not2easy at 3:07 pm (utc) on Apr 22, 2014]