As this is primarily for Google's benefit, it's been suggested by some infra-types that I *only* return the Vary: User-Agent header when a Google user agent is detected (Googlebot/MobileBot/AdBot).
So here's the thing: since a response header is not "content" per se, I think this could be an okay solution, though I'd like to throw it out to the esteemed WebmasterWorld community and get some additional feedback.
You guys see any issues/problems with implementing this solution?
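The suggestion above can be sketched in a few lines. This is only an illustration of the decision logic, not any particular server's API; the function name and the list of user-agent tokens are assumptions (the real token strings would need to be checked against Google's documented crawler names):

```python
# Sketch: emit "Vary: User-Agent" only when the requesting UA looks like
# one of Google's crawlers. Names and tokens here are illustrative.
GOOGLE_UA_TOKENS = ("Googlebot", "Googlebot-Mobile", "AdsBot-Google")

def vary_headers_for(user_agent: str) -> dict:
    """Return extra response headers to send for this request."""
    headers = {}
    if any(token in user_agent for token in GOOGLE_UA_TOKENS):
        headers["Vary"] = "User-Agent"
    return headers
```

A request from `Mozilla/5.0 (compatible; Googlebot/2.1)` would get the `Vary` header; an ordinary browser request would not.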
Msg#: 4669157 posted 3:33 pm on May 8, 2014 (gmt 0)
To be honest, I rarely feel 100% comfortable with any solution, because Google seems to be moving towards using non-Googlebot methods to index content. This makes me worry about triggering a false positive with their spam filters: maybe not today, but who knows what tomorrow will bring.
For my own sites, I try to make the content as similar as possible for all devices and users, to avoid or at least minimize the need for these more complicated headers. The more complicated you make things, the more room there is for errors or misunderstandings. Depending on your users and situation, you might not even need a dedicated mobile version.
Let's say you aren't lucky and you do need different versions. I would not limit the Vary header to Googlebot, because, as mentioned, IMHO Google seems to be paying more and more attention to non-Googlebot sources.
Msg#: 4669157 posted 5:36 pm on May 8, 2014 (gmt 0)
I use the OP's mobile tactic. Funny enough, just a few hours ago I was thinking how I wish I didn't need to use it, as it does slow the response.
Is the correct terminology for this approach "responsive", or, as I prefer, "adaptive"? I think of "responsive" as a design that fits any screen width, rather than delivering separate desktop/mobile versions that maintain the same content but serve different markup (less bloated for mobile, fewer scripts and such).
Msg#: 4669157 posted 9:47 pm on May 8, 2014 (gmt 0)
To clarify: I wouldn't say Google is indexing via non-Googlebot ways, but they do seem to be moving towards discovering content via non-Googlebot ways. Some potential but unproven ways for Google to do this would be to monitor data served to Chrome users or Android devices, or to use a robot that does not identify itself as Google's. Sorry for any confusion.
Msg#: 4669157 posted 8:54 am on May 9, 2014 (gmt 0)
The practical purpose of the Vary: User-Agent header is related to data compression of the requested resource.
It's not a matter of serving different content to different user agents; it's a matter of whether the text resource served is compressed or uncompressed.
If the resource served may be compressed, you will need a Vary: Accept-Encoding header, which means a proxy server will cache both gzipped and uncompressed versions and serve the correct one based on the Accept-Encoding request header. Sending the Vary: User-Agent header with responses that may be compressed means the proxy will instead cache and select versions keyed on the User-Agent request header.
No Vary headers should be sent in responses to MSIE-sourced requests.
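The caching behavior described above can be illustrated with a toy cache that keys entries on the request headers named in the response's Vary header. This is a sketch only; the class and method names are made up for illustration and do not belong to any real proxy:

```python
# Toy model of a shared (proxy) cache that stores one entry per distinct
# combination of the request headers listed in the response's Vary header.
class VaryAwareCache:
    def __init__(self):
        self._store = {}

    def _key(self, url, vary, request_headers):
        # Normalize the Vary list and pair each named header with the
        # value the client actually sent.
        names = sorted(h.strip() for h in vary.split(",") if h.strip())
        varied = tuple((n, request_headers.get(n, "")) for n in names)
        return (url, varied)

    def put(self, url, vary, request_headers, body):
        self._store[self._key(url, vary, request_headers)] = body

    def get(self, url, vary, request_headers):
        return self._store.get(self._key(url, vary, request_headers))


cache = VaryAwareCache()
# With Vary: Accept-Encoding, the gzipped and plain bodies are cached
# as separate entries for the same URL.
cache.put("/page", "Accept-Encoding", {"Accept-Encoding": "gzip"}, b"<gzipped>")
cache.put("/page", "Accept-Encoding", {"Accept-Encoding": ""}, b"<plain>")
```

With Vary: User-Agent instead, the cache would fragment by user-agent string in the same way. Per the note above about MSIE, the server-side fix is simply not to emit any Vary header on responses to those requests, so the cache never keys on it.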