| 3:33 pm on May 8, 2014 (gmt 0)|
To be honest, I rarely feel 100% comfortable with any solution, because Google seems to be moving towards using non-Googlebot methods to index content. That makes me worry about triggering a false positive in their spam filters: maybe not today, but who knows what tomorrow will bring.
For my own sites, I try to make the content as similar as possible for all devices and users to avoid, or at least minimize, the need for these more complicated headers. The more complicated you make things, the more room there is for errors and misunderstandings. Depending on your users and situation, you might not even need a dedicated mobile version.
Let's say you aren't lucky and you do need different versions. I would not limit the Vary header to Googlebot, because, as mentioned, IMHO Google seems to be paying more and more attention to non-Googlebot sources.
| 4:57 pm on May 8, 2014 (gmt 0)|
What is a non-Googlebot method to index traffic?
| 5:12 pm on May 8, 2014 (gmt 0)|
If you serve the same code to different browsers/devices, you don't need 'Vary: User-Agent.' Googlebot detects responsive websites without that header.
| 5:36 pm on May 8, 2014 (gmt 0)|
I use the OP's mobile tactic. Funnily enough, only a few hours ago I was thinking how I wish I didn't need it, as it does slow the response.
Is the correct term for this approach "responsive", or, as I prefer, "adaptive"? I think of "responsive" as a design that fits any screen width, rather than one that delivers a distinct desktop/mobile version that keeps the same content but serves different markup (less bloated for mobile, with fewer scripts and such).
| 7:52 pm on May 8, 2014 (gmt 0)|
According to Google, a CSS-only solution is 'responsive,' and user-agent based serving is 'dynamic.'
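For anyone unfamiliar with the 'dynamic' approach, here is a minimal sketch of what it looks like on the server side. Everything here is illustrative (the `MOBILE_TOKENS` list and `build_response` function are hypothetical names, and real UA detection is far more involved): the same URL returns different markup depending on the User-Agent, and the response carries `Vary: User-Agent` so caches keep separate copies per UA.

```python
# Hypothetical sketch of user-agent based ("dynamic") serving.
# Real-world UA sniffing needs a proper detection library; this
# token check is only for illustration.

MOBILE_TOKENS = ("Mobile", "Android", "iPhone")

def build_response(user_agent: str) -> dict:
    """Return status, headers and body chosen by the request's User-Agent."""
    is_mobile = any(token in user_agent for token in MOBILE_TOKENS)
    body = ("<html>mobile markup</html>" if is_mobile
            else "<html>desktop markup</html>")
    return {
        "status": 200,
        "headers": {
            "Content-Type": "text/html",
            # Tell caches (and crawlers) the body depends on the UA,
            # so a proxy doesn't serve the desktop copy to phones.
            "Vary": "User-Agent",
        },
        "body": body,
    }
```

The key point for this thread is the `Vary: User-Agent` line: without it, an intermediate cache could store whichever variant it saw first and serve it to everyone.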
| 8:15 pm on May 8, 2014 (gmt 0)|
Thanks for the clarification levo!
| 9:47 pm on May 8, 2014 (gmt 0)|
To clarify: I wouldn't say Google is indexing via non-Googlebot methods, but they do seem to be moving towards discovering content that way. Some potential but unproven channels would be monitoring data served to Chrome users or Android devices, or crawling with a robot that doesn't identify itself as Google's. Sorry for any confusion.
| 8:54 am on May 9, 2014 (gmt 0)|
the practical purpose of the Vary: User-Agent header is related to data compression of the requested resource.
it's not a matter of serving different content to different User-Agents, it's a matter of whether the text resource served is compressed or uncompressed.
if the resource served is compressed you will need a Vary: Accept-Encoding header, which means a proxy server will cache both gzipped and uncompressed versions and serve the correct version based on the Accept-Encoding request header.
sending the Vary: User-Agent header with responses that may be compressed means a proxy server will cache both gzipped and uncompressed versions and serve the correct version based on the User-Agent request header.
no Vary headers should be sent for MSIE-sourced requests, since older versions of Internet Explorer mishandle caching of responses that carry a Vary header.
| 12:21 pm on May 9, 2014 (gmt 0)|
Very useful phranque, thank you!