|is this cloaking? or just smart seo - robots control?|
| 9:54 am on Jul 8, 2005 (gmt 0)|
Could you guys give me some advice?
we have a travel site which has about a thousand pages, and has been online in the search engines for about 5 years. it uses a CMS, one that outputs in html.
we have never done any real seo, apart from changing a few tags around since last year. however it ranks ok for the keywords we have chosen.
we are going to do a total re-design. the new design proposed has a lot of rich media content, is still using a content management system, and will be very graphic oriented.
if we made a duplicate version for accessibility reasons, in plain html, exactly like the bbc have done (you can see this under the 'text only' version on the bbc main page - bbc.co.uk/home/today/textonly.shtml), and then put robot exclusion tags on every page of the rich media site, could this work?
so in effect we have an index splash page with 2 entry points. main visitors will go to the colourful graphic pages; that link blocks the search engines. the engines follow the links to the text only pages. previous link popularity still goes to the main index page, plus new links deeplinking to the text only pages.
is this considered a kind of cloaking? or is it just an alternative way of doing things? we would have to have the accessibility part anyway for legal reasons as it is a brand site. this way we would just be taking advantage of seo at the same time.
we would also need to have the robot exclusion tags because of the duplicate content. this way there would only be one set of copy indexed. we would just be choosing the text site to be indexed instead of the rich media site.
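For reference, the per-page "robot exclusion tag" described above would be a meta robots tag in the head of each rich-media page. A minimal sketch (whether you use "follow" or "nofollow" depends on whether you still want link value passed through those pages):

```html
<!-- placed in the <head> of each rich-media page -->
<!-- tells crawlers not to index this page, but still follow its links -->
<meta name="robots" content="noindex, follow">
```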
do you or do the search engines consider this a form of cloaking? if they do, how and why do huge sites like bbc and cnn do it?
any help, info or ideas would be appreciated thanks.
| 2:09 pm on Jul 8, 2005 (gmt 0)|
It is not cloaking in any sense.
One comment, though: what makes you assume visitors will enter through the index page? If only the text pages are being indexed, won't visitors come through the text pages?
| 2:13 pm on Jul 8, 2005 (gmt 0)|
thanks for the reply.
yes, we are expecting some visitors to come through other non-index pages. so the html can't look really basic, and we'll try to get them back onto the main site.
in regards to 'not cloaking in any sense' i thought the definition of cloaking was showing search engines one thing and real visitors another.
would this not come under that category?
| 2:21 pm on Jul 8, 2005 (gmt 0)|
|we are expecting some visitors to come through other non index pages |
I would suspect the 'some' might be 'most', at least for SE traffic. I am involved with a few speciality travel sites, which tend to have a lot of content, articles etc. On average something like 80-90% of traffic from search engines comes in somewhere other than the index page. This is standard across several sites of a similar nature. Many of those people never go near the index page.
Every site is different - but I would be inclined to check this in your logs and see where people are entering.
| 6:10 pm on Jul 8, 2005 (gmt 0)|
|in regards to 'not cloaking in any sense' i thought the definition of cloaking was showing search engines one thing and real visitors another. |
would this not come under that category?
No. You aren't doing any kind of "switcheroo", and showing one thing to search engines and another to humans. You simply have two versions and are telling the search engines not to index one of them. If a human clicks on a search engine listing, they will be taken to the text version of the page that was indexed. You are going to try to get them to click on an index page link to get them to your index.
Now, if you were doing some kind of server-side scripting which made it so that when a human clicked on a search engine listing, instead of seeing the text version of the page, they saw the multimedia version, that would be cloaking. This could be done with HTTP_REFERER-based cloaking, where the script would insert a meta-refresh tag into responses whose request had a referrer matching certain domains, say google.com or yahoo.com. Or you could simply serve the multimedia version of the page under the original text-only URL.
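To make the distinction concrete, here is a minimal sketch of the referrer check such a "switcheroo" script would perform. This illustrates what cloaking looks like, not a recommendation; the function name and the handling around it are hypothetical:

```python
# Hypothetical sketch of the HTTP_REFERER-based switcheroo described above.
# This is the behaviour that counts as cloaking -- shown only to illustrate it.

SEARCH_ENGINE_REFERRERS = ("google.com", "yahoo.com")

def choose_version(referrer: str) -> str:
    """Decide which page version a cloaking script would serve.

    A human arriving from a search results page (referrer matches a
    search engine domain) gets bounced to the multimedia version,
    while crawlers (which send no matching referrer) keep seeing the
    text version that was indexed -- i.e. engines and humans see
    different content for the same URL.
    """
    if any(se in referrer for se in SEARCH_ENGINE_REFERRERS):
        return "multimedia"
    return "text"
```

The poster's actual plan never does this: both humans and crawlers who request the text URL receive the text page, which is why it is not cloaking.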
| 11:30 am on Jul 27, 2005 (gmt 0)|
The BBC website is rich media and text, but Google has no problem indexing the content. Don't worry about sending an SE to the text version, because you'll end up serving visitors that content. Also, nowhere is it written that just text = high position. I would exclude the text version from Google with a robots file, unless you want to sniff for the browser when the visitor hits the text page and then redirect. But then you might have someone report you for cloaking, and someone who doesn't understand your exact reasons might just ban you.
| 10:57 pm on Aug 3, 2005 (gmt 0)|
If I understand your post correctly, I would only be concerned with accruing a penalty for duplicate content. Easy fix: put all of your text-only pages in one directory and the others in another directory, then "disallow" ALL (*) robots from one directory or the other in "robots.txt."
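A sketch of what that robots.txt might look like, assuming the rich-media pages live under a hypothetical /media/ directory and the text pages under /text/ (swap the Disallow line if you'd rather have the media version indexed instead):

```
# robots.txt at the site root -- keep the rich-media copies out of the index
# so only the text version's copy gets indexed
User-agent: *
Disallow: /media/
```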
Then again, I'm quite paranoid about such things. Could be nothing to worry about at all.