The new URI is not a substitute reference for the originally requested resource.
Why are you coding an actual response code? I'd direct the user to a normal "session timed out" page.
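To illustrate the suggestion above: instead of emitting an unusual status code when the cart session lapses, the server can answer with an ordinary 200 page telling the shopper to start over. A minimal sketch, with a hypothetical in-memory session store (names and messages are illustrative, not the poster's actual code):

```python
import time

SESSION_TTL = 20 * 60  # the 20-minute cart lifetime described in the thread

# hypothetical session store: session_id -> last-activity timestamp
sessions = {}

def handle_cart_request(session_id):
    """Return (status, body) for a cart request.

    An expired or unknown session gets a normal 200 "timed out" page
    rather than an error status, so browsers and crawlers see nothing odd.
    """
    last_seen = sessions.get(session_id)
    if last_seen is None or time.time() - last_seen > SESSION_TTL:
        sessions.pop(session_id, None)  # drop any stale entry
        return 200, "Your session timed out after 20 minutes - please rebuild your cart."
    sessions[session_id] = time.time()  # refresh the activity timestamp
    return 200, "cart contents ..."
```

The point of the design is that "session expired" is an expected application state, not a protocol-level error, so it doesn't need a special response code at all.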
Just when I thought there was nothing left on the internet for Microsoft to break, along comes another example.
...we have stopped this script being accessible to Google via robots.txt and as a backup added noindex in the head section.
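For reference, the two layers described above might look like this (the script path is illustrative, not the poster's actual URL):

```
# robots.txt - keep crawlers away from the cart script
User-agent: *
Disallow: /cart.cgi
```

```html
<!-- backup inside the <head> of the script's own HTML output -->
<meta name="robots" content="noindex">
```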
This is needed because after 20 minutes the cart the user has been creating "expires", so the user knows they need to start building the cart again.
That said, in this particular case, is there any reason that a crawler like Google needs to be aware of the header status at all once you've blocked the crawler?
Using both robots.txt and noindex, as I know you're aware, negates part of what noindex does... Yes, I know, but I have witnessed often enough robots.txt getting "lost" from the server, or being saved as UTF-8 in a way that stops Google from understanding it, hence the noindex fallback.
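The poster doesn't say exactly how a UTF-8 save broke robots.txt, but a classic culprit is an editor prepending a byte-order mark, which puts three invisible bytes in front of the first `User-agent` directive and has historically tripped up some parsers. A quick sanity check (hypothetical helper, not part of any crawler API) might be:

```python
def robots_txt_has_bom(path="robots.txt"):
    """Return True if the file starts with a UTF-8 byte-order mark.

    A BOM turns the first line into b"\xef\xbb\xbfUser-agent: ...",
    which some robots.txt parsers have historically failed to recognise.
    """
    with open(path, "rb") as f:
        return f.read(3) == b"\xef\xbb\xbf"
```

Running a check like this after every deploy is one way to catch the "robots.txt silently broken" failure mode the poster describes.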
I'm assuming here that the user's data is somehow saved. If not, and "start to create" means re-entering product choices, I think I would look into re-engineering the cart, or, at the least, extending the 20 minutes. I can imagine that shoppers whose data is lost by a 20-minute distraction would often decide to go elsewhere.