A friend asked about how well their SEO provider was doing so I went to check it out.
It looks like a really sloppy cloaking job. The Google cache shows what should be the cloaked page. The publicly displayed pages look very similar to the actual client site, but when you click "view source" it shows the code the spider sees. When a page is displayed in the browser, how is it possible to hit view source and see code that does not make up the page you are viewing, but is instead what gets fed to Google and the rest of the engines?
The cloaked pages showing up in the Google cache are just a bunch of link-spamming techniques: lots of anchor text pointing back to the doorway it sits on, and of course repeated link text to the other couple hundred doorways in the network. Seems like if they are gonna cloak, they ought to hide the spider food from the view source button and the Google cache, no?
Yep, a modified version of the old frame trick - just calling an external .js file with the frame parameters in there. It shows the spider one thing, since the spider skips over the .js file, while the browser interprets it and pulls the real site into the frame. The more things change, the more they stay the same :)
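A minimal sketch of what that external .js file could look like - file name and target URL are hypothetical, just to illustrate the mechanism. The indexed doorway page carries the spider food plus a script tag; the script writes a frameset that fills the window with the real site, so browsers show one thing while spiders (which ignore the .js) index another:

```javascript
// frame.js -- hypothetical external file referenced by the doorway page via
// <script src="frame.js"></script>. Spiders that skip external .js never run
// this; browsers execute it and cover the spider-food page with a frameset.

var realSite = "http://www.example.com/"; // hypothetical real client site

function buildFrameset(url) {
  // A borderless single-frame frameset that fills the window with the real site,
  // hiding the doorway's spider food from human visitors.
  return '<frameset rows="100%,*" frameborder="no" border="0">' +
         '<frame src="' + url + '" scrolling="auto">' +
         '</frameset>';
}

// Only call document.write in a real browser (the guard lets the snippet
// load outside one without erroring).
if (typeof document !== "undefined") {
  document.write(buildFrameset(realSite));
}
```

View source on the doorway still shows the spider-food markup plus the script tag, which is why the cloak leaks: the browser renders the frameset, but the underlying source (and the Google cache) is the doorway code.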
advantages - well, it got a lot of pages indexed and ranked in the top 1-30 for not very competitive (but targeted) phrases. The code setup makes the pages distinct enough that they don't get weeded out as dups except by human review.
disadvantages - duplicate content in the top results :( The real site already ranks better than this massive frame site (in most places), so it may get flagged through human review and tossed. No directory except LookSmart :) is likely to take the mirror. It's only showing up in one engine, except for a few pages in the BOW. ODP shot it down. It's not going to get much link pop, which is likely the reason the real site, without all the frame stuff and link spam, is ranking better on most phrases.