aakk9999 - 12:20 am on Feb 25, 2013 (gmt 0)
Thank you for commenting, Robert.
That said, in this particular case, is there any reason that a crawler like Google needs to be aware of the header status at all once you've blocked the crawler?
True, the response code is not important right now, as the crawlers are blocked. However, if g1smd's solution were implemented, I somehow think it would not be right to return 200 when the page content says "Your session has expired" instead of showing the actual page content.
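To illustrate what I mean, a minimal sketch (the 20-minute lifetime matches our setup; the helper name and the choice of 410 are purely illustrative assumptions, a 403 would serve the same purpose):

```python
from datetime import datetime, timedelta

SESSION_TTL = timedelta(minutes=20)  # cart sessions expire after 20 minutes

def response_status(session_started: datetime, now: datetime) -> int:
    """Pick an HTTP status for a cart page request.

    Return 200 while the session is still live, and 410 Gone once it
    has expired, so the "Your session has expired" page is never
    served with a success code.
    """
    if now - session_started <= SESSION_TTL:
        return 200
    return 410
```

That way a crawler (or a monitoring tool) seeing the expiry page also sees a status that says "this content is gone", rather than a 200 that invites indexing.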
Yes, I know, but I have witnessed robots.txt getting "lost" from the server often enough, or being saved as UTF-8 with a BOM (which stops Google from understanding it), hence the noindex fallback.
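That failure mode is easy to check for by looking at the raw bytes of the file. A sketch of such a check (the function and its warning messages are hypothetical, not a real tool):

```python
def robots_txt_warnings(raw: bytes) -> list:
    """Flag common robots.txt serving problems seen in the wild."""
    warnings = []
    if not raw:
        warnings.append("file is empty or missing")
    if raw.startswith(b"\xef\xbb\xbf"):
        # A UTF-8 BOM in front of "User-agent:" can make the first
        # directive unrecognisable to some robots.txt parsers.
        warnings.append("starts with a UTF-8 BOM")
    return warnings
```

Running something like this against what the server actually returns (not the file on disk) catches both the "lost" file and the BOM case.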
Using both robots.txt and noindex, as I know you're aware, negates part of what noindex does...
I'm assuming here that somehow the user's data is saved. If not, and "start to create" means re-entering product choices, I think I would look into re-engineering the cart, or, at the least, extending the 20 minutes. I can imagine that shoppers whose data is lost after a 20-minute distraction would often decide to go elsewhere.
Without going into specifics, the products the site sells are unique. At the time a product is added to the cart, it is "reserved" for that session and nobody else can buy it (as there is only one exactly like it). So whilst it would be possible to re-create the cart (the server knows the old details), we cannot do this, because there is no guarantee the product is still available: between the time the session expired and the time the user is told it expired, the product may have been sold to someone else. This uniqueness of products is in fact the main reason the session is required to expire after a certain time - so that products in abandoned carts can be "released" and made available for others to purchase.
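Roughly how the reservation behaves, as a sketch (the class and method names are made up for illustration; the 20-minute lifetime is the real figure):

```python
from datetime import datetime, timedelta

TTL = timedelta(minutes=20)  # how long an abandoned cart holds a product

class Inventory:
    """One-of-a-kind products: a reservation locks an item to a single
    session until that session's reservation expires."""

    def __init__(self):
        self._reserved = {}  # product_id -> (session_id, reserved_at)

    def reserve(self, product_id, session_id, now):
        """Try to reserve a product; return True on success."""
        holder = self._reserved.get(product_id)
        if holder is not None:
            held_by, since = holder
            if held_by != session_id and now - since <= TTL:
                return False  # another live session still holds it
        # Free (or expired, or re-reserved by the same session): take it.
        self._reserved[product_id] = (session_id, now)
        return True
```

The expiry is what makes this work: once a session's 20 minutes are up, the item becomes reservable by the next shopper, which is exactly why we cannot silently restore an expired cart.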
Since products are quite expensive, it is rare for the cart to have more than one product in it.