When I searched for a particular not-very-competitive term, Google listed a whole heap of similar sites like this:
keyword_param1.keyword_param2.keyword_param3.some-domain-name.com
All the listings were subdomains of some-domain-name.com. So instead of buying hundreds of different domains, they bought just one domain name and created millions of subdomains (or rather, used a wildcard in DNS).
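For anyone curious about the mechanics, the DNS side really can be a single line. A minimal sketch, assuming a BIND-style zone file (the IP is just a documentation placeholder):

; any subdomain of some-domain-name.com resolves to the same web server
*.some-domain-name.com.   IN   A   192.0.2.1

With that one record in place, keyword1.keyword2.some-domain-name.com and any other made-up hostname all resolve to the same server, which can then decide what to serve.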
On closer inspection, it seems that all of them are being generated by the same CGI script, but instead of using filenames, directories, and CGI parameters, the script uses the domain name as its input.
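Something along these lines, I would guess. This is only a sketch of the idea, not the actual script; the keyword extraction and the page output are assumptions on my part:

#!/usr/bin/env python
# Hypothetical CGI sketch: treat the requested hostname as the "parameters".
# Assumes a wildcard DNS record points every subdomain at this one script.
import os

host = os.environ.get("HTTP_HOST", "")  # e.g. "keyword_param1.keyword_param2.some-domain-name.com"
labels = host.split(".")
keywords = labels[:-2]                  # drop "some-domain-name" and "com", keep the subdomain labels
print("Content-Type: text/html\n")
print("<h1>%s</h1>" % " ".join(keywords).replace("_", " "))
# ...then look the keywords up in the product database and render a unique "page".

The same script driven by path-based URLs would read PATH_INFO or the query string instead; the only novelty here is where the parameters come from.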
So all the "sites" are small. And all the "sites" are different (since the inputs are different). They each link to a handful of closely related other sites. But there are millions of them, and a substantial number of them have made it into Google.
In the case I found, it was all being generated from the Amazon AWS database. So each "site" had a reasonable amount of content and was quite different from the other "sites". So it is easy to see how Google was being fooled.
But it is basically just a regular, large, database-driven site (in this case 100% affiliate content) that uses subdomains as parameters instead of paths and CGI parameters.
Is this a well-known technique? It looks to be simple (a wildcard in DNS), cheap (buy just one domain), effective (it fills up the Google results), and not directly caught by the Google guidelines. If they occupied a single domain/result in Google, I would have no problem with them or the site. But they are filling up the result list.
Growing
As you mentioned, the search term that domain and its sub-domains are dominating must not be very competitive; holding the whole results list for a competitive term would be quite an achievement.
If it were competitive, there would be many searchers and competing webmasters complaining and reporting it to Google.
Also, the interlinking will most probably get them caught in the long run.