IMHO, the fact that they are doing this turns them from an SE into a directory; they can just rename themselves business.com or something like that.
There is no way to stay current: in some areas where there has been hand coding for the last two years, the top results are old and completely irrelevant, with facts and data that contradict the latest research known on the subject.
YAHOO - WE WANT TO SEE YOU AS AN SE, NOT A DIRECTORY!
If you are not up to the task, then let all your shareholders know about it, or use another company's technology to achieve it; just do not deceive the general public. Manual editing is admitting in public that your current algo sucks ....
/BP
There's a big difference. I'm convinced that Yahoo is manually banning sites to such a degree that the SERPs appear to be tailored. I'm convinced Y uses their algo to give favorable rank to certain types of authority sites (.gov/DMOZ/etc.). I do not believe people at Y will look at a SERP and then force a particular site up or down a result or set of results.
Current possibilities IMO, include:
1) they are doing more hand tweaking
2) their authority/hub analysis, or general link analysis including measures of importance, is not there yet by a long shot, causing the perception that more hand tweaking just occurred (since some sites are there that may not seem to deserve to be there, algorithmically)
3) a new authority factor has been introduced for some sites, either manually or algorithmically.
I lean towards 2 or 3, only because if I were managing an SE, hand tweaking is a road I would not be inclined to go further down, even if I were doing some of it now. It's way too labor-intensive and would need to be constantly monitored and updated. They're moving away from the directory business, not towards it.
Besides, it's probably no coincidence that this perception is occurring coincident with the rollout of their first large-scale change in the way their algo works since they first went live with the proprietary Y! Search product. Which suggests to me, at least, that what we're seeing is more likely related to their new algo/filters than to a change in philosophy leading to more hand tweaking.
I always try to balance logic and guesswork with observation, and ask: does this conclusion make sense in light of what they are likely to be doing in the real world? They have every incentive to get it right with the algo, and plenty of incentive to avoid more hand tweaking. Hand tweaking is simply not practical or efficient; quite the opposite... it's time-intensive and expensive, and they know this as well as anybody from their directory experience.
I'm not saying hand tweaking is not valuable, only that from a business perspective, it's not desirable. If I were them, I'd be trying to figure out how to move away from it, not further into it.
They don't need to hand do every query. Only the first few results in "problem areas."
>>>would need to be constantly monitored and updated.
Well, that's a big part of the problem. We know for a fact that they have been hand picking some SERPs since well before they rolled out their engine post-Google, and the results NEVER change for these terms. EVER. They don't monitor and update them; they just rot.
The hand results are the best search results on the Internet by far. Yes, it would certainly be a good thing if they were checked/reconsidered monthly or so, but still, in every example I've seen the results stand the test of time, because true authority sites that are in for the long haul are the ones listed... even if garbage peddlers won't like that.
And just to be clear, I'm not saying the hand results are always "right", I'm saying they are always much "better" than the algo results on all three engines.
We know for a fact that they have been hand picking some SERPs since well before they rolled out their engine post-Google, and the results NEVER change for these terms. EVER. They don't monitor and update them; they just rot.
Yes and it has never been particularly hard to spot.
While I am guessing that the perception of more hand coding is really an algorithmic thing - in part because it comes with the new SERPs - I suppose it's possible that they've simply added more hand edits for now, to get them through the intro phase of this new algo.
But whether it's a result of the new algo, or an overlay to the new algo, the issues of expense and constant monitoring are the other part of why I'm guessing that what we're seeing now is more algo related.
There's a lot of evidence that they've got the auth/link analysis thing all out of kilter so far. (They're also showing the wrong page from any given site for a lot of searches when the site has a better page. And the dup filters seem off a bit too.)
Tim just said two days ago in an updated weather report that there would likely be more tweaks to this (very new) algo in the coming weeks. I for one would not be at all surprised to see them trying to address some of this stuff, which may make the perception of more hand tweaking go away. I could easily be wrong though.
I do think removing spam sites is a great idea. For example, the websites that have vague information about every city in the US/world. They pull a vague map, throw the city's name on the page a few times, add population info, pull some links from DMOZ, and SEO it to death, at the same time pushing relevant results to the bottom. Removing one of these made-for-AdSense sites and all of its subdomains would clear 50,000+ useless results off the SERPs and would take a human editor less than a minute. I think using one minute of a human's time to remove 50,000 useless results is a great investment. Imagine how many a group of 20 hand editors could remove in a week.
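To put rough numbers on that thought, here is a back-of-the-envelope sketch. All the figures (team size, one minute per site, 50,000 results per site) are the assumptions from the post above, not measured data:

```python
# Back-of-the-envelope arithmetic for the hand-editing claim above.
# Every number here is an assumption from the post, not a real figure.
editors = 20                        # hypothetical hand-editing team
minutes_per_site = 1                # estimate: under a minute per spam site
results_per_site = 50_000           # estimate of junk results per spam site
work_minutes_per_week = 60 * 8 * 5  # one 40-hour work week

sites_removed = editors * work_minutes_per_week // minutes_per_site
results_cleared = sites_removed * results_per_site

print(sites_removed)    # 48000 sites removed in a week
print(results_cleared)  # 2400000000 junk results cleared
```

Even if those estimates are off by an order of magnitude, the point stands that targeted removals scale far better than per-query hand edits.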
The only problem is that this would put a lot of SEO companies out of business; people would actually have to start creating useful, content-filled websites to be in the SERPs.
Anyway, for those who doubt, at least you can't say they haven't looked into the matter [webmasterworld.com]
We know for a fact that they have been hand picking some SERPs since well before they rolled out their engine post-Google, and the results NEVER change for these terms. EVER. They don't monitor and update them; they just rot.
Yes, of course.
It is not algo related, BTW, or else the results would look similar for terms like "widget" and "online widget" or "widget information". But no, they only pick select terms. The funny part is that way before Yahoo became Ink or whatever, they had certain results locked in for terms - choosing them over the Google results they were paying for. The EXACT same results still exist for these very terms...
I am with broadprospect here. Business.com sucks and this is not the competition webmasters were hoping for.
If I remember correctly, hand coding was even used when Yahoo had Google results. I think jewelry was one of the first non-adult terms to get hit.
Yup, and the same directory sites still sit there for jewelry without an inch of movement.
The reason I ask about the apparently new hand edits is that in some of our cats, what appear to be new hand edits (by virtue of not deserving to be there algorithmically) are, I think, sites that are getting there because they've either been artificially boosted in an overall sense (not by search term, but more like a PR bonus), or are getting more weight algorithmically than they should because the link analysis isn't working particularly well yet with this new algo.
The thing is, many of these sites are showing up in *lots* of places.
It's possible that we're talking about two different sets of sites...or even three issues: 1) hand edits by search term; 2) manually assigned lifts in overall 'importance' on a site by site basis (manual yes, but not by search term); 3) haywire link analysis.
I, like many, am seeing tons of new appearances by Ab*ut.com, Busin*ss.com, IYPs, some .gov's, and various other similar kinds of sites. Some of this is, IMHO, almost certainly not hand editing by category but one of two other things:
1) They've manually assigned higher status to a select group of sites (manual, yes, but again, more like a PR boost that might help the site on any given search) and that is causing those sites to do better on a whole range of searches.
2) They've got the algo way wrong and some mega sites are getting way more importance - including on unrelated searches - than they should. A few of us just the other day at SES were looking at an example where a prominent .gov site was ranking on the first page of results for a two-word term not at all related to that site. The page being shown contained both search words, but never side by side, and was not in any way related to the search. Lots of that going on right now, and it is definitely not a hand edit of the site for that search term.
So in at least some of these cases, they are either goosing some mega sites in some broad way not specific to given searches, or it's an algo issue having to do with auth sites. That does not exclude of course the possibility that they are also doing more hand editing.
it is not algo related BTW, or else the results would look similar for terms like "widget" and "online widget" or "widget information"
[edited by: martinibuster at 9:45 pm (utc) on Aug. 10, 2005]
[edit reason] clarification [/edit]
Steveb is right on. This is not news at Yahoo.
I'd add that it is their SE and Yahoo can do whatever they want with it to get it to perform in whatever way they believe is most useful to their customers, ad buyers and stockholders.
It's 2005, not 1995. Neither Yahoo nor Google has to prove anything to anyone anymore. Except that they can make money.
"jewelry" was cleaned up about a week ago or so, after being ridiculously skunked up by AdSense (built-for) sites and eBay feeds. I am glad it happened though...
I also think that this is just the beginning of a bigger change before the real traffic of the shopping season hits the fan. It's all in the game plan; you will see..
Caveman --- They're also showing the wrong page from any given site for a lot of searches when the site has a better page ----
I see a lot of this happening, but it's random, very random… One page “slightly” optimized for “Blue Widgets” ranks for both “Blue Widgets” and “Green Widgets”, while the “Green Widgets” page, which has its own destination on the site and ranked #1 for its keyword, is now completely gone from the SERP. Very odd….
But whether it's hand coded or a new (better) algo, the results look much cleaner, at least in the niche I monitor.
One page “slightly” optimized for “Blue Widgets” ranks for both “Blue Widgets” and “Green Widgets”, while the “Green Widgets” page, which has its own destination on the site and ranked #1 for its keyword, is now completely gone from the SERP. Very odd…
That's not a hand edited serp, it's an algo anomaly. But it is something caveman has been on about for the last week or two. It's almost like Yahoo is giving a sitewide credit to the home page and making the user peck around to find the right page instead of sending the visitor directly to the page with the answer.
I'm still not convinced that what everyone is seeing is in every instance an example of a hand edited serp. I bet we can't even agree on what a hand edited serp looks like.
How do you identify a hand edited serp? What are the symptoms?
It was enlightening to see this discussion here, because for the last few weeks I've been wondering why a site that doesn't seem to have any relevance to the keyword you search for is ranked #2. The funny thing about this result is that the keyword doesn't appear anywhere: not in the title, the description, or the content of that page. But when you visit the site, you can understand the relevance.
For me this is proof that sometimes they hand pick results. I don't think there is any way an algo would produce something like this, especially if you see the other results on the same search.
Cheers!
Yes, but did you check their IBLs and anchor text? That could be the answer.
Frankly, I am pleased with Yahoo. I rank well for all my targeted keyword phrases. The SERPs seem cleaner than in Google, at least in my niche.
I like msn and yahoo much more than google. They let me build the site for the user and still get decent ranking. Google just wants you to build the site for them.
As far as what it looks like, usually it's pretty obvious by looking at the 1st page, b/c all the listings look similar. Y! usually uses their own directory descriptions, so the SERPs look very artificial. If they hand code a site that is NOT in their own directory, they manually give it a description.
ie. Widgets.com
offers a selection of widgets, widget gifts, and more
Category: Widget Retailers
www.widgets.com -
As far as the "BIG" picture, I think Y! will lose market share if they continue to expand this. We have seen over time that the best algorithmic results provide more value to the searcher than human-edited results on a dynamic web. And time has proven that greater Search Value = greater Market Share = More Profits.
It seems to me that the evidence would look rather different depending on the direction of the suspected intervention, and it would be useful to know if there is a consensus on the direction of intervention.
In turn, this might provide some clues about motivation. For instance, a downward push/removal might be an attempt to solve a perceived "spam" problem without resorting to a "sandbox" type solution. In contrast, an upward push/insertion would be more likely to be motivated by a desire to make sure the SERPs include certain sites, or types of sites, which might be "expected" by certain users, despite an inability to get the algos to produce the "expected" result.
This also has some relevance to the question of whether this will tend to "freeze" the SERPs. Insertions will tend to be static, making it much harder for other sites to make it onto the front page via the algo-driven results. In contrast, removals will have a much shorter-lasting impact. That particular site may be permanently banned from the first page, but analogous sites (e.g. perceived spam) will work their way back onto the page, unless they change the algos, or frequently redo the hand weeding.
Glaringly obvious.
>>>old wives tale
We are usually in agreement, MB, but I gotta disagree on this one. We were tracking those H tags for a while before they were being publicly scrutinized, and I never saw a spam site with an H1, saw a very, very occasional one with an H2, and the rest with H0... 'course, then the tags became meaningless. I gotta agree to disagree. ;)
[Hand-coded SERPS are] Glaringly obvious.
Not to me. Would anyone care to explain how to tell a hand-coded result to those of us who aren't as far along in the SEO game? So far nobody has done that, and therefore I'm inclined to agree with martinibuster, who said an agreement probably couldn't even be reached as to what a hand-coded result looks like. And if that's the case, how can anyone be sure results are being hand-coded in the first place?
I want to learn. Please enlighten me and share some definitions, evidence, or things to look for.
Sure. In the past, a classic hand coded SERP was dominated by directory listings in the first 5 or so results. These were for the most part one-word terms that are usually problem areas where spam is rampant. These results were pretty much static and rarely, if ever, moved, even through multiple updates.
The phenomenon that some of us are seeing now shares the same attributes as what we have seen in the past; however, there seem to be some authority sites in the mix that aren't restricted to purely sites that are in the Y! directory.
Again, these are sites in problem areas that are usually dominated by heavily seo'd sites.
Example Query 1) Widgets
1) Directory site
2) .gov
3) directory site
4) .edu
5) directory site
Results 6-10: the normal spam you would expect to see in a SERP like this.
Example Query 2) Blue widgets
1) spam
2) spam
3) spam
4) so so site
5) spam
and so on and so on.
It's VERY unusual, IMHO, that Y! can arrive at the first example algorithmically, yet be unable to show the same type of ability to pick authority in the second example (a very closely related query).
So in a nutshell, that's what some of us are seeing.
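The contrast pmac lays out above can be sketched as a rough heuristic: compare how "authority-heavy" (directories, .gov, .edu) the top results are for a head term versus a closely related tail term, and flag a big gap. This is purely an illustration of the pattern being discussed, not anything Yahoo actually does; all domain names and the threshold are made up:

```python
# Sketch of the pattern pmac describes: a head term whose SERP is full of
# directory/.gov/.edu listings while the closely related tail term shows the
# usual SEO'd mix. All hosts below are hypothetical examples.
AUTHORITY_SUFFIXES = (".gov", ".edu")
DIRECTORY_HOSTS = {"dir.yahoo.com", "dmoz.org", "business.com"}  # example hosts

def authority_share(results):
    """Fraction of the listed hosts that look like directory/.gov/.edu sites."""
    if not results:
        return 0.0
    hits = sum(1 for host in results
               if host in DIRECTORY_HOSTS or host.endswith(AUTHORITY_SUFFIXES))
    return hits / len(results)

def looks_hand_edited(head_results, tail_results, gap=0.4):
    """Flag when the head term's SERP is far more authority-heavy than
    a closely related tail term's SERP (gap is an arbitrary threshold)."""
    return authority_share(head_results) - authority_share(tail_results) >= gap

# pmac's two example queries, 'widgets' vs 'blue widgets':
widgets = ["dir.yahoo.com", "agency.gov", "dmoz.org", "college.edu",
           "business.com", "spam1.com", "spam2.com", "spam3.com",
           "spam4.com", "spam5.com"]
blue_widgets = ["spam1.com", "spam2.com", "spam3.com", "soso.com", "spam4.com"]

print(looks_hand_edited(widgets, blue_widgets))  # True for this example
```

The point of the sketch is the asymmetry itself: an algo that can surface authorities for "widgets" should do roughly as well for "blue widgets", so a sharp divergence between the two is what makes people suspect manual intervention.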
Thanks for the explanation. Although I don't know that I have anything of value to add to the discussion, at least I understand the potential problems with this a lot better now.
Along with a few potential benefits...psst! Yahoo! Would you care to hand-code a search term or two for me? I'll make it worth your while! ;)
Heavy, undiversified inbound linking generates the effect you describe above when optimizing for specific multi-word phrases. I've done plenty of it and have achieved great results on Y in the past, and it's common to rank well for "widget" but not "widgets" - I think because Yahoo is not as good as Google at interpreting synonyms and devaluing heavy inbound linking.
iblaine, you make good points, but I think your examples are different from what pmac just outlined. When the fundamental nature of the two result sets is different, as is clearly laid out in pmac's msg#54, that is almost certainly manual intervention, not just kw/algo issues.
Like you, I have sites in Y where a plural does well but a singular form does not. However in those cases the nature of the sites coming up across the two searches doesn't vary that much. Different results, yes; different kinds of sites, not really.
In pmac's 'blue widgets' search, the algo is running normally and the spammier sites show up.
But in pmac's 'widget' search, where the nature of the SERPs transforms into directory and .gov/.edu sites (the latter especially not being very well optimized), there is almost no other plausible explanation besides hand edits. 'Widget' and 'blue widget' searches should/can be expected to bring up generally similar kinds of sites overall, even if the specific results are different.
I see plenty of cases where differences in related searches are probably algorithmic, and plenty of cases where new sites are showing up that shouldn't be.
In these cases, on related searches (e.g., widgets, online widgets, widget information, etc.), not all the same sites are coming up for all related searches, but the kinds of sites coming up don't change that much. The (newly ranking) directory and mega sites that are appearing may appear for both 'widgets' and 'blue widgets', but in different orders, plus or minus a few sites. There are also varying degrees of SEO'd sites in there. Conclusion: I think that the current algo is too heavily favoring mega sites (or, some mega sites have been given some sort of overall ranking lift). But these are not hand edited.
pmac, OTOH, has clarified the kind of result sets he and mfishy (among others) are alluding to. Their examples are quite different. When the result sets vary in nature that much, it is pretty obviously hand edited.
Exactly. When the top 8 sites for a serp are all fortune 500 companies, with no anchor, flash sites and nothing but "big brand names" who don't rank in Google or MSN, followed by the "regular players" you always see, you know there's an issue.
Am I saying this hard coding makes the results look bad or irrelevant... no. The results are usually quite good if you are surfing the SERP for the first time. But if you are a repeat user, they get stale, and who wants to use a stale engine? I want to know I'm not missing the "new good" because they are trying to keep out the "bad".
That, and a search engine is algo based - if Y wants to become a directory again, by all means, but let me know that's what I'm using when I'm searching. ;)