There is a site built with an Ajax framework. The site is a RIA with a
single page refresh. To allow people to bookmark and share different
'pages' (they're really states of the application) and to use the
browser's back and forward buttons, a hash (#) plus extra information
is used in the URL to determine the state.
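The mechanism described above can be sketched roughly like this. The state keys (`page`, `id`) and the encoding scheme are made up for illustration; the important point is that everything after the # lives only in the browser, since user agents never send the fragment to the server.

```javascript
// Hypothetical sketch of fragment-based application state.
// "#page=products&id=42" <-> { page: "products", id: "42" }

function parseHashState(hash) {
  const state = {};
  for (const pair of hash.replace(/^#/, "").split("&")) {
    if (!pair) continue; // skip empty fragment
    const [key, value] = pair.split("=");
    state[key] = value;
  }
  return state;
}

function buildHash(state) {
  return "#" + Object.entries(state)
    .map(([key, value]) => key + "=" + value)
    .join("&");
}
```

In the browser the app would read `location.hash` on load (and on hash changes) and restore the matching state, which is what makes back/forward and bookmarking work without a page refresh.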
There are two problems here from an SEO perspective:
the site isn't indexable, since the content is loaded with JS;
the URLs aren't crawlable because of the # in the URL.
So what I'm thinking at this moment is building a 'shadow site' that
uses the exact same content, only in plain HTML with search-engine-friendly
URLs. I'd then check whether a visitor is a search engine or a real
visitor, and rewrite the URL accordingly.
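The check being proposed would look something like the sketch below: classify the request by its User-Agent header and pick a version to serve. The bot signatures listed are illustrative, not exhaustive, and User-Agent strings can be spoofed, so this is exactly the kind of detection the search engines treat as cloaking.

```javascript
// Hypothetical sketch of User-Agent-based crawler detection.
// The signature list is an assumption; real crawlers use many UAs.
const BOT_SIGNATURES = ["googlebot", "slurp", "msnbot", "bingbot"];

function isSearchEngine(userAgent) {
  const ua = (userAgent || "").toLowerCase();
  return BOT_SIGNATURES.some(sig => ua.includes(sig));
}

// The server would then decide which version of the site to serve:
function chooseVersion(userAgent) {
  return isSearchEngine(userAgent) ? "html-shadow-site" : "ajax-app";
}
```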
Obviously this would be a cloaking method, which isn't allowed. But I
believe the search engines' terms only say that you are not allowed to
show them different content. Since we'd be using the same content, just
presented in another technical way, would that be OK, do you think?
It's rarely a good idea to think you can outsmart the SEs - you may 'get away with it' for a while, but then when the catastrophe happens, you won't even know what it was that they sussed out.
SEO = design = SEO; if you get that wrong, almost everything else is chasing your tail.
Since we'd be using the same content...
What's he getting away with?
Not that I'm against cloaking, but what I don't understand is why people design sites that aren't spider-friendly in the first place. If it can be done in a spider-friendly manner (and it can since you are able to cloak a spider-friendly version with the same content), why not convert the whole site to that version?
Why not just build your proposed shadow site, and forget about the version that has all the state information in the URL?
I disagree with the 'just design a spiderable site' mantra, though.
I think a website should be designed for the users; the visitors are most important! That's why we're building an AJAX website, which is user friendly from an interaction design point of view. The downside of this approach is that the search engines can't crawl or index it.
That's why we are thinking of implementing the suggested method...
What I thought of just now is this:
Check on the server whether the visitor's browser supports the AJAX framework, and serve the AJAX site or the HTML one accordingly.
This would result in Opera users seeing the HTML site, but also in the search engines being directed to that HTML site.
This should be ok.. shouldn't it?
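That capability check would in practice still be User-Agent sniffing, just with the decision inverted: anything not on a known-capable list falls through to the HTML version. A minimal sketch, where the browser list is an assumption (Opera is deliberately absent, matching the scenario above, and crawlers match nothing on the list):

```javascript
// Hypothetical sketch of a server-side capability check.
// Unknown agents -- including search-engine crawlers and Opera in this
// illustrative list -- get the plain HTML site.
const AJAX_CAPABLE = ["firefox", "safari", "msie"];

function serveVariant(userAgent) {
  const ua = (userAgent || "").toLowerCase();
  const capable = AJAX_CAPABLE.some(name => ua.includes(name));
  return capable ? "ajax" : "html";
}
```

The appeal of this framing is that the decision is based on capability rather than on "is this a search engine", so both bots and less-capable browsers receive the same HTML fallback with the same content.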