It has always been an excellent idea to rank LOTS of pages high for LOTS of keywords and phrases (of course not all pages for all words :).
If you can create ranking reports, you don't have enough phrases targeted (-> DG).
You'd better have hundreds (maybe more) of good-quality links coming into your site.
Trawler, I think you're right, although I still feel like there is an OOP. Maybe it is the sum of various elements, sort of like what SlyOldDog was talking about with a "point system" in another thread a while back ([webmasterworld.com ]).
What I've seen is that the new crop of web sites ranking high in my category are there because they seem to be part of a directory and buried way down in the 4th folder.
Could this be the key? Do any of the seniors/mods reading this thread know for sure that this is the key and have just been waiting for the realisation to dawn on the rest of us poor suckers?
I'm seeing this too, but not largely. With the abandoned site I posted about that came from nowhere to No. 1, the page Google picked to list is titled "second.html", but all the rest of the Top 15 in that particular category (the one my old site is #1 in) are ****.****.com/, and many of the pages defaulted to are titled "index.htm" or "index.html".
I think that's also part of its sudden appeal to the new filter/new algo, or whatever it turns out to be.
Interesting. For many of the sites that still rank well and seem to be the exception to the filter theory, these design characteristics are present.
___
I can sort of confirm that. We lost 22 sites to the Florida Butcher Job.
19 are pure affiliate sites, 1 is half & half, and the last one is a straight paysite with no affiliate links but is in a highly competitive market. Our other 20 or 30 sites weren't affected, and they are all straight paysites.
If it is in fact the case that their aim is to eliminate the affiliates from the top of the serps, they seem to be having pretty good success in the short run.
However, in the long run, the market will force them to do an about-face. My best guess is that it will occur AFTER the IPO, not before.
In my industry - Travel
I just can't see the likes of Hotels.com, Expedia.com and all the rest continuing to advertise on and support a search engine that is locking out their affiliates. All of these companies were built by, and continue to live and die by, the strength of their affiliate programs.
I am sure that as soon as a viable alternative to Google is online (for sure within 6 months), they will dictate the new rules to Google in short order:
Loosen up PRONTO! or lose our business.
There is no way Google could take a position adverse to its major advertisers. It will not prevail, although once the Ivy League at the top cash out, who is to say they really give a damn.
It would be interesting to see the performance stats for affiliate revenue share.
If the trend is down - if more and more people now go directly - the fate of the affiliate is in decline.
That is, until new competition enters the field, invites the affiliates, and the cycle starts all over again.
For instance, Hotels.com has over 30,000 affiliates, they are the largest in the travel arena.
Under the lock-out, a reasonable expectation would be that the player with the largest affiliate network will lose the most market share. Of course, the smaller players will pick up sales somewhat proportionally.
It for sure will be an interesting time!
"Interesting. For many of the sites that still rank well and seem to be the exception to the filter theory, these design characteristics are present."As always have been.
Actually, in my category (speech/voice recognition microphones), the sites that were/are still Top 10 tend to be sites with a large number of links to pages (both "off-site" and "in-site", it seems) with content that corresponds to the field. This is new, and seems to be prevalent among all five primary search terms for this field.
And I'm also noting that some are very optimized, some aren't; but as a rule, those that are "honest", and truly relevant to the field have moved up.
I also notice that rankings are not as PR-related as before. Pages with high PR are out-ranked by those with lower PR, in many instances.
Claus' semantic explanation also demonstrates why directories are ranking so highly in Google these days--by their nature, any half-decent directory will include a wide variety of sites and site descriptions that end up containing most or all of the important terms about a topic.
The main problem I'm having with this shift is that sometimes a proper name or brand name winds up associating words with each other which, taken singly, are not necessarily associated with each other. For example, I could have the single definitive site about spears on the net, with pictures of spears from different time periods in every country on the planet; Google probably would not realize that my site's extensive information on lances, javelins, spearfishing, hunting, throwing, straight spears, pronged spears, and so forth constituted a semantic web around the word "spears." They would, however, notice my site was lacking the word "britney," which is closely associated with spears, and frown upon my site as being too narrowly focused on spears, preferring to show users the nice broad sites with plenty of information about britney AND spears.
An extreme example, but you see where I'm going with it. There are a lot of city names, for example, that are actually nouns people might be looking for outside the context of the city. Pages about such topics that don't have references to the state that city happens to be in aren't going to be found by searchers who don't explicitly type "-state." This is the only problem I've observed to be worsening in the educational searches I've seen since Florida, and I'm hopeful that it will fix itself as Google's algorithm learns semantic webs for more of the lesser-known search terms.
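To make that concrete, here is a toy co-occurrence sketch in Python (invented data and my own illustration, not Google's method; the stopword list, the sample "pages" and the function names are all made up). With a corpus dominated by pop pages, "britney" ends up as the term most strongly tied to "spears", while the weapon-related terms barely register:

# Toy sketch of co-occurrence counting -- purely illustrative, invented data.
from collections import Counter
from itertools import combinations

STOPWORDS = {"and", "the", "of", "a"}

# Stand-ins for pages a search engine might analyse (made up).
pages = [
    "britney spears new album and tour dates",
    "britney spears photos and biography",
    "spears javelins and lances of the bronze age",
    "hunting spears and spearfishing techniques",
]

def cooccurrence_counts(docs):
    """Count how often pairs of terms appear in the same document."""
    pair_counts = Counter()
    for doc in docs:
        terms = set(w for w in doc.split() if w not in STOPWORDS)
        for pair in combinations(sorted(terms), 2):
            pair_counts[pair] += 1
    return pair_counts

counts = cooccurrence_counts(pages)

# Terms most strongly associated with "spears" in this toy corpus.
related = {pair: n for pair, n in counts.items() if "spears" in pair}
for pair, n in sorted(related.items(), key=lambda kv: -kv[1])[:5]:
    print(pair, n)

Scale that up to the whole web and a weapons-only site starts to look "too narrow" for the term, which is exactly the worry above.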
Hilltop is based on an earlier work by Jon Kleinberg, "Authoritative Sources in a Hyperlinked Environment".
For a broad-based search term, a set of Authoritative Sources is located by looking at the 200 highest-ranking pages for that exact search term (allowing stemming if the exact term is not found).
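Just to illustrate how I read that step (my own toy sketch in Python; the crude stemmer, the function names and the sample data are all made up, not from the paper or from Google):

# Rough illustration of the root-set selection step described above -- not real code from anywhere.
def crude_stem(word):
    """Very rough stemmer, for illustration only."""
    for suffix in ("ing", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def select_root_set(query, ranked_pages, k=200):
    """ranked_pages: list of (url, text) pairs, already sorted best-first."""
    def matches(q):
        return [url for url, text in ranked_pages if q in text]

    hits = matches(query)                  # try the exact term first
    if not hits:                           # exact term not found -> allow stemming
        stemmed = " ".join(crude_stem(w) for w in query.split())
        hits = matches(stemmed)
    return hits[:k]                        # keep the 200 highest-ranking pages

# Made-up example: the exact phrase "digital cameras" isn't present,
# so the stemmed query "digital camera" is used instead.
pages = [
    ("http://example.com/a", "a digital camera buying guide"),
    ("http://example.com/b", "compare digital camera prices"),
]
print(select_root_set("digital cameras", pages))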
If a large number of pages within a single domain point to the same page, this is seen as a mass endorsement, advertisement, or some sort of collusion among the referring pages – e.g. the phrase "This Site Designed by.." and a corresponding link at the bottom of each page in a given domain. [A domain, as defined in Hilltop, is when the left-most parts of the URL match; I do not know how pages on sites like GeoCities would be handled.]
To eliminate this, a parameter is added so that only up to x pages from a single domain count as links.
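Here is a minimal sketch of how that cap might work (my own illustration; the cap value and the data are invented, and I'm simplifying the domain test down to "same hostname", which is looser than the affiliation rule discussed above):

# Toy illustration of capping how many links from one domain count -- invented numbers.
from collections import defaultdict
from urllib.parse import urlparse

MAX_PER_DOMAIN = 3   # stands in for the "x" mentioned above; the real value isn't public

def count_endorsements(inbound_links):
    """inbound_links: URLs of pages that link to one target page."""
    per_host = defaultdict(int)
    counted = 0
    for src in inbound_links:
        host = urlparse(src).hostname or ""
        if per_host[host] < MAX_PER_DOMAIN:   # extra links are simply ignored, not penalized
            per_host[host] += 1
            counted += 1
    return counted

# 400 sitewide "This Site Designed by.." style links plus one independent link:
links = ["http://www.example.com/page%d.html" % i for i in range(400)]
links.append("http://independent-reviewer.org/article.html")
print(count_endorsements(links))   # -> 4: the 400 same-host links only count as 3

The point being that the extras just stop counting; nothing in this scheme actively penalizes a site for having them.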
I am totally comfortable with the fact that we are their QA department. This site offers a great service to SEOs, but I think it provides a bigger service to Google.
They counter our every move! And how hard can it be when we tell them exactly where the loopholes are.
And well they should! Really, there are legitimate reasons for one domain to link to another site 400 times (one of the educational sites I work with does this; our users appreciate the links), but why should that count 400 times for Google's purposes? As long as they just *ignore* extra links rather than *penalizing* anybody for them I think that's a fine idea.