The purpose is to detect mobile devices using a technique that goes beyond user-agent strings.
My biggest concern is around search bots. Would a Googlebot ever send a header like this and accidentally get past this check?
I want to avoid any mess or confusion regarding my site's pages in the index. Currently I'm NOINDEX'ing all my mobile pages, but I also don't want to accidentally send the non-mobile Googlebot into my mobile site.
Does anyone see any problems with using the above header check along with user-agent checks for determining mobile devices hitting my site?
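For context, a header check along those lines (a hypothetical sketch, not the poster's actual rules: the WAP MIME types, mobile hostname, and redirect status are all assumptions) might look something like this in Apache:

```apache
RewriteEngine On
# If the client's Accept header advertises WAP/mobile content types,
# redirect to the mobile site. MIME types and target URL are illustrative.
RewriteCond %{HTTP_ACCEPT} application/vnd\.wap\.xhtml\+xml [NC,OR]
RewriteCond %{HTTP_ACCEPT} text/vnd\.wap\.wml [NC]
RewriteRule ^ http://m.example.com%{REQUEST_URI} [R=302,L]
```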
I'd suggest you also add a negative-match RewriteCond to exclude all known non-mobile 'bots by user-agent, because who knows whether the major 'bots might decide to include one or both of those MIME types in their Accept headers.
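A sketch of that negative match, assuming the same hypothetical redirect setup (bot names and hostname are illustrative):

```apache
RewriteEngine On
# Skip the mobile redirect for known desktop crawlers (negative match
# via the "!" prefix), then apply the Accept-header test as before.
RewriteCond %{HTTP_USER_AGENT} !(Googlebot|Slurp|msnbot) [NC]
RewriteCond %{HTTP_ACCEPT} text/vnd\.wap\.wml [NC]
RewriteRule ^ http://m.example.com%{REQUEST_URI} [R=302,L]
```

One caveat: a bare `Googlebot` pattern also matches `Googlebot-Mobile`, so if you want the mobile crawler to still reach the mobile site, you'd need a more specific pattern than the one above.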
Accept headers are good for determining that a user-agent can handle a specific type of content, but not so useful for determining what kind of device the user-agent actually is.
Great idea! I really only care about the big three. Thanks Jim.
This mobile thing is giving me headaches. Everyone seems to have a different opinion on best practices. One confusing point is whether to NOINDEX all the mobile pages. That makes sense until you think about the mobile Googlebot crawler. Are we shooting ourselves in the foot by telling the mobile crawler not to index those pages as well? In my mind there should be only one set of my pages in the index; mobile is just a different presentation of the same pages. My stance right now is NOINDEX plus submitting a mobile sitemap.
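For what it's worth, one way to implement the NOINDEX half of that stance without editing every mobile page is an `X-Robots-Tag` response header. This is a sketch, assuming mod_headers is available and the mobile pages live under their own directory (the path is illustrative):

```apache
# Send "noindex" for everything served from the mobile docroot.
# Requires mod_headers.
<Directory "/var/www/mobile">
    Header set X-Robots-Tag "noindex"
</Directory>
```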