Robert_Charlton - 8:34 pm on Feb 24, 2013 (gmt 0)
Just when I thought there was nothing left on the internet for Microsoft to break, along comes another example.
LOL. A classic comment, worthy of framing. ;)
That said, in this particular case, is there any reason that a crawler like Google needs to be aware of the header status at all once you've blocked the crawler? It's not like a 404, where the "not found" status is useful information... and where a 200, or even the absence of a 404 response, would create problems.
Google might make use of a 408... but basically you don't want Google to index whatever url/page it is that the shopping cart returns after a session time-out... and chances are your budget isn't large enough to re-engineer the cart. So I'd say that what you've done makes sense.
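Just to make the noindex side of it concrete... if the time-out page is generated by the cart script, the same signal the meta tag sends from the head can also be sent as an X-Robots-Tag response header. Here's a rough sketch, assuming a Python/Flask stack purely for illustration (your cart is obviously something else, and the /cart route and cart_expired.html template are made-up names):

from flask import Flask, render_template, session

app = Flask(__name__)
app.secret_key = "change-me"  # required for Flask sessions

@app.route("/cart")
def cart():
    if "cart" not in session:
        # Session timed out: serve the "start again" page, but tell
        # crawlers not to index it. Whatever status code the cart
        # already returns here can stay as it is.
        resp = app.make_response(render_template("cart_expired.html"))
        resp.headers["X-Robots-Tag"] = "noindex"
        return resp
    return render_template("cart.html", items=session["cart"])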
...we have stopped this script being accessible to Google via robots.txt and as a backup added noindex in the head section.
Using both robots.txt and noindex, as I know you're aware, negates part of what noindex does, which is to keep references to the url out of the serps... a crawler blocked by robots.txt never fetches the page, so it never sees the noindex. But chances are that with a cart you're not going to have that problem anyway. As a backup, in case robots.txt fails, "noindex" is probably not a bad idea. If you see any url-only results in the serps, then you should drop the robots.txt so Google can crawl the page and pick up the noindex.
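For anyone following along, the two mechanisms being discussed look something like this (treat /cart/ as a placeholder for whatever the script's actual url path is)...

In robots.txt:

User-agent: *
Disallow: /cart/

...and in the head section of the time-out page:

<meta name="robots" content="noindex">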
This is needed since after 20 minutes the cart the user has been creating has "expired", so that the user is aware they need to start to create the cart again.
I'm assuming here that somehow the user's data is saved. If not, and "start to create" means re-entering product choices, I think I would look into re-engineering the cart, or, at the least, extending the 20 minutes. I can imagine that shoppers whose data is lost after a 20-minute distraction would often decide to go elsewhere.
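If lengthening the window turns out to be the path of least resistance, it's often a one-line change in the session config. Again a Flask-flavored sketch, purely hypothetical since we don't know what the cart actually runs on:

from datetime import timedelta
from flask import Flask

app = Flask(__name__)
app.secret_key = "change-me"

# Give shoppers an hour instead of 20 minutes before the cart expires.
# Note this only applies to sessions marked permanent (session.permanent = True).
app.permanent_session_lifetime = timedelta(minutes=60)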