TheMadScientist - 5:09 pm on Nov 11, 2012 (gmt 0) [edited by: TheMadScientist at 5:37 pm (utc) on Nov 11, 2012]
The biggest technical difference between 301s and 302s, even according to the protocol, is where the page should initially be requested from.
A user agent should continue to request a 302 from the original location, since the redirect is 'temporary' or 'undefined' (or insert another word of your choice for 'not permanent'), and then follow the redirect if it's still present.
A 301 should be requested from the new location directly by user agents, since the redirect is permanent. (<-- Google does not necessarily do this ... if they did, they would never know when a redirect had been removed or pointed somewhere else, so they don't even handle 301s according to strict protocol. The people who tout protocol as the 'be all end all' conveniently 'forget' to tell you that part for some reason.)
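For illustration (the URLs here are made up), the only on-the-wire difference between the two responses is the status line; what's supposed to differ is how the user agent remembers them:

```
HTTP/1.1 301 Moved Permanently
Location: https://example.com/new-page

HTTP/1.1 302 Found
Location: https://example.com/temp-page
```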
Since the 302 'bug' years ago, the handling (transfer of link weight, age, and other associated data) is, to the best of my knowledge, very similar to a 301. (Actually, afaik it's exactly the same over time, but there are so many hair splitters here I don't want to say that directly, because I don't feel like arguing with them any more than I have to already, so I always 'qualify' my statements about the handling to appease them.)
I would guess you'll see a longer 'lag' with a 302 between the redirect being discovered and the receiving URL being 'fully trusted' and replacing the redirected URL in the results, but eventually it will happen. I suspect what you're seeing as 'duplicate content' in the results is simply where the URL receiving the redirect (meaning yours) has already replaced the URL being redirected.
Even with a 301 there's a 'lag' of up to three weeks before the redirect is fully trusted, and the URLs for differing results are not always replaced at the same time. I've had cases where a 301 is in place and a search for 'blue widgets' will show the URL I've redirected, while a search for 'widgets that are blue' will show the new URL. That's not 'duplicate content'; it's simply that the change hasn't made it all the way through the system yet.
As far as tools go, ignore the HTTP/1.0 results and go with HTTP/1.1. GBot (and every modern browser) is an HTTP/1.1 client, and as noted earlier in this thread, an HTTP/1.0 request sent to a server is not the same as an HTTP/1.1 request. So the difference could be the server settings on the redirecting end, the server on your end, or simply that the tool you're using still makes antique HTTP/1.0 requests that are beyond obsolete, because no one uses them these days.
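To see why an HTTP/1.0 checker can get a different answer than GBot, compare the two requests (example.com is just a placeholder). HTTP/1.0 doesn't require a Host header, so on a server doing name-based virtual hosting the HTTP/1.0 request may land on a different (default) site entirely and return a different redirect, or none at all:

```
GET /page HTTP/1.0

GET /page HTTP/1.1
Host: www.example.com
```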
The 'short version' of checking is:
If your browser is redirected, so is GoogleBot. If you want to know the exact status code of the redirect, just use the header check in the control panel here, or something like the Firefox Live HTTP Headers extension if you want to make sure there's not a chain. They're both reliable, and nothing else is needed.
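If you'd rather check from the command line, here's a minimal sketch of a chain-aware header check using nothing but Python's standard library. The URL you pass in is whatever you want to test; everything else (the `NoRedirect` handler, the hop limit) is just this sketch's own scaffolding, not anything from a tool mentioned above:

```python
# Sketch of a redirect-chain checker using only the Python 3 standard library.
import urllib.error
import urllib.parse
import urllib.request

class NoRedirect(urllib.request.HTTPRedirectHandler):
    """Stop urllib from silently following redirects so every hop is visible."""
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None  # returning None makes the 3xx surface as an HTTPError

def redirect_chain(url, max_hops=10):
    """Return a list of (status_code, url) hops, ending at the final response."""
    opener = urllib.request.build_opener(NoRedirect)
    hops = []
    for _ in range(max_hops):
        try:
            resp = opener.open(url)
            hops.append((resp.status, url))
            return hops  # non-redirect response: we're done
        except urllib.error.HTTPError as err:
            hops.append((err.code, url))
            location = err.headers.get('Location')
            if err.code in (301, 302, 303, 307, 308) and location:
                url = urllib.parse.urljoin(url, location)  # follow to next hop
            else:
                return hops  # a real error status, not a redirect
    return hops  # hop limit hit: probably a redirect loop
```

This follows each hop by hand instead of trusting a tool's defaults, so a chain like 302 -> 301 -> 200 shows up as three entries rather than being collapsed into one.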