Forum Moderators: Robert Charlton & goodroi
These mirrored flat pages pull their content from the same source material as the real site via PHP includes, so the content is exactly the same.
In each of these flat pages we have a JS redirect in the head that points to the 'real' page on our AJAX site.
On the 'real' pages, we have no-script code that pushes people off to the flat pages.
Overall we're pretty happy with it; the flat site works great for users with NoScript installed or with JS off for some other reason. With JS turned back on, the redirect kicks in and the 'real' site works fine.
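For anyone picturing the setup, here's a minimal sketch of the pattern described above. The URLs and filenames are hypothetical, just for illustration, not the poster's actual code:

```html
<!-- Flat page (e.g. /flat/about.html): JS redirect in the head.
     If JS is on, the visitor never sees this copy. -->
<head>
  <script type="text/javascript">
    window.location.replace('/#/about');
  </script>
</head>

<!-- 'Real' AJAX page: noscript fallback that pushes non-JS
     visitors back to the flat copy. -->
<noscript>
  <meta http-equiv="refresh" content="0; url=/flat/about.html">
</noscript>
```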
My main worry is Google mistaking this approach for cloaking or a sneaky redirect. I'm aware that similar tactics are part of the black-hat playbook. Any thoughts on how we might mitigate this risk, or will Google be smart enough to see what we're doing? Has anyone seen Google mistake this approach for black-hat sneakiness?
--Illah
This type of implementation relies on javascript redirects not triggering algorithmic filters (and that will likely be the case). But what happens if someone wants to bookmark a URL on the "real" site? What about incoming links? Presumably, user-created links won't point at a spiderable URL for the linked content, but could point at any entry point on the AJAX-ed site.
I think you've a good chance of escaping algorithmic filtering, but without a great deal of care, pure AJAX implementations can end up like framesets - they fundamentally distort the model of the web that many users (and search engines) expect. And performance can suffer heavily as a result.
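One common way around the bookmarking and deep-linking problem raised above is to give every AJAX view its own URL fragment. This is a hypothetical sketch, not anything from the site in question; `render()` is a stand-in for whatever view loader the site actually uses:

```javascript
// Hash-based deep links for an AJAX site: every view gets a
// bookmarkable #-URL like http://example.com/#/work/logos

// Turn '#/work/logos' into ['work', 'logos']
function parseRoute(hash) {
  return hash.replace(/^#\/?/, '').split('/').filter(function (s) {
    return s.length > 0;
  });
}

// In the browser, re-render whenever the fragment changes (and once on
// load), so pasted or bookmarked URLs land on the right view.
if (typeof window !== 'undefined') {
  window.onhashchange = function () {
    var view = parseRoute(window.location.hash);
    render(view); // stand-in for the site's own content loader
  };
}
```

This keeps the back button and copy-pasted links working, which goes some way toward restoring the "model of the web" that framesets and pure-AJAX sites tend to break.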
And speaking of UI, that's the reason for the flat mirror. It's kind of hard to describe without showing you (it's not public yet), but you can think of it as an app-like interface. The site is for a creative agency so standing out from the crowd was very important.
That UI is what led us to AJAX...new page loads would have mucked up the interface, and also would have made the content more difficult to manage.
So you think we have a good chance of escaping the filters? That's my main worry. I know Google has always treated mirrored content as a grey area, simultaneously recommending it at the conferences but then keeping a close eye on abuses...
--Illah
You need the new "canonical" tag here for sure, but if you use that, you need search engines to spider both copies of the site.
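Something like this on each flat page, with a hypothetical URL for illustration. Note that a canonical URL can't usefully carry a #fragment (search engines ignore fragments), so it should point at the spiderable flat copy rather than the AJAX view:

```html
<!-- In the <head> of the flat page -->
<link rel="canonical" href="http://www.example.com/flat/about.html">
```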
I made a pact to never duplicate a site for different browser versions back in 1997, and I have resisted duplicating things for any other reason ever since.