Note: the original url is no longer online, but I've
edited the link to point it to a reprint of the original
[edited by: tedster at 7:11 am (utc) on May 10, 2007]
Lots of folks have posted over the past couple of months thinking that ONE thing is going on here. An algorithm is many things. A page can easily come into the top ten without having any value in one algo area, as in this example: sheer volume of anchor text can overwhelm everything else. In uncompetitive areas, the only page on the Internet with the query in the page title will usually beat pages that merely have the query in the body text. There is no one thing at play here. There are many. Some are more important than others, but the others do exist and can be the deciding factor sometimes.
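To make the "many things" point concrete, here is a minimal sketch of a weighted multi-factor blend. The factor names and weights are invented for illustration; Google's real signals and coefficients are unknown.

```python
# Hypothetical sketch of a multi-factor ranking blend, as described
# above. Factor names and weights are invented for illustration;
# Google's real signals and coefficients are unknown.

FACTOR_WEIGHTS = {
    "anchor_text": 0.40,  # volume of matching anchor text
    "title_match": 0.25,  # query present in the page title
    "body_match":  0.20,  # query present in the page text
    "pagerank":    0.15,  # link-based authority
}

def rank_score(factors: dict) -> float:
    """Blend per-factor scores (each 0..1) into a single number."""
    return sum(FACTOR_WEIGHTS[name] * factors.get(name, 0.0)
               for name in FACTOR_WEIGHTS)

# A page with zero value in one algo area can still win: here sheer
# anchor text outscores a page that matches only title and body.
print(rank_score({"anchor_text": 1.0}))                     # 0.4
print(rank_score({"title_match": 0.3, "body_match": 1.0}))  # 0.275
```

Under any such blend, a strong enough single factor can be the deciding one, which is the behavior being described.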
And, I really wouldn't consider it appropriate for Google to edit SERPs based on political issues. The reason Bush is #1 for "miserable failure" is that a lot of people hate the guy. If that is their opinion, so be it, and Google shouldn't interfere. By the same token, if lots of Bush supporters link to his biography with "great leader" and it comes up #1 on Google for that search, Google shouldn't tamper with that SERP either.
That tilde search works oddly. I just checked it using a 3-letter acronym that is a shorthand term for the generic name of a certain pharmaceutical. Searching on that 3-letter acronym, the generic name is also highlighted. However, a search on the full generic name doesn't show the acronym highlighted. And if you search "CIA" and "Central Intelligence Agency" using the tilde, in neither case is the other highlighted.
I don't know, however, whether something was done differently with the advertising on that page (a typo in the ad code or something). The difference just struck me after what I saw other people saying about ~cityname -cityname failing, and so on. (-:
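One way to make sense of the asymmetric tilde highlighting described above is a one-directional synonym table. A minimal sketch, with invented entries:

```python
# Sketch of a one-directional synonym table (entries invented). If the
# tilde expansion is a directed lookup, A -> B does not imply B -> A,
# which would produce exactly the asymmetric highlighting seen here.

SYNONYMS = {
    "abc": ["genericdrugname"],  # the acronym expands to the full name...
    # ...but there is no reverse entry for "genericdrugname"
}

def expand(term: str) -> set[str]:
    """Terms a ~term query would match: the term plus its synonyms."""
    return {term, *SYNONYMS.get(term, [])}

print(expand("abc"))              # both the acronym and the full name
print(expand("genericdrugname"))  # only itself: no reverse mapping
```

Whether Google's token list is actually directed like this is guesswork, but it fits the CIA / Central Intelligence Agency observation too.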
It would be useful to get feedback from other webmasters on subtle linguistic changes they have implemented and how those changes have affected the SERPs.
annej has posted [webmasterworld.com] a result here after just a few days, though it may still be too soon to say with certainty what caused the rise.
Hissingsid:
The thing that makes me think that the term is thrown out is twofold. 1. It makes logical sense, since we can assume that every page found using the old algo either has the term somewhere in the text ...
I agree; "expected" keywords are still given weight, which gets lighter the more common they are (until they are discarded). The distillation process still revolves around these words as it scopes out further unique words - the unique words affirm a subject and connect related docs to the search query.
Difficult, without the proper thinking time, to put into words, but essentially: once you have the search query from the user, you really only need to FIRST collect all the docs that contain that keyword. Once that's done you can almost forget about the word and spread your tentacles out to find the next most important "expected" word, and so on... each step carries less weight than the last as one gets "hotter" - or rather, as the subject matter, rather than the keyword matter, becomes more relevant.
Makes sense to me, especially on keyphrase, or multi-word search queries.
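As a toy illustration of that stepped narrowing (the mini index, the 0.5 decay, and the scoring below are all invented, not anything Google has disclosed):

```python
# Toy sketch of the stepped narrowing described above. The index, the
# 0.5 decay, and the scoring are all invented; only the shape matters:
# filter on the query term first, then refine on co-occurring words.

from collections import Counter

INDEX = {
    1: {"widget", "blue", "factory"},
    2: {"widget", "blue", "history"},
    3: {"widget", "price", "factory"},
    4: {"gadget", "blue"},
}

def retrieve(query_term: str, steps: int = 2) -> list[int]:
    # First collect every doc containing the query term, then
    # "forget about the word almost".
    docs = {d for d, words in INDEX.items() if query_term in words}
    scores = Counter({d: 0.0 for d in docs})
    used = {query_term}
    weight = 1.0
    for _ in range(steps):
        weight *= 0.5  # each step carries less weight than the last
        # The next most common "expected" word among the matched docs:
        counts = Counter(w for d in docs for w in INDEX[d] if w not in used)
        if not counts:
            break
        expected = counts.most_common(1)[0][0]
        used.add(expected)
        for d in docs:
            if expected in INDEX[d]:
                scores[d] += weight
    return [d for d, _ in scores.most_common()]

print(retrieve("widget"))  # doc 1 leads: it shares the most expected words
```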
Lots of folks have posted over the past couple of months thinking that ONE thing is going on here. An algorithm is many things. A page can easily come into the top ten without having any value in one algo area, as in this example: sheer volume of anchor text can overwhelm everything else. In uncompetitive areas, the only page on the Internet with the query in the page title will usually beat pages that merely have the query in the body text. There is no one thing at play here. There are many. Some are more important than others, but the others do exist and can be the deciding factor sometimes.
Exactly. And, Google and any other search engine using this is going to have priorities. Widgets gets millions of searches, so the algorithm there is very complex. Tung is very specialized, so it's not so sophisticated.
About the only thing you can do--but it's a lot--is study which pages are getting high listings on your search terms. Google tries to create a level playing field focused on the reader. Now, how does their software view this goal? Once upon a time, it was lots of people linking to the site--and that's still a factor. Now, is it also complete sentences, perhaps? Key phrases?
If Google is a public company, they are going to be focusing on areas that make their stockholders the most money. That's OK--commercial speech has real value in a social sense. But, those areas with a lot of traffic/high bids could have more complex algorithms (I am not implying unfair or not reader focused at all) than other, less well traveled areas.
What I also find very interesting is the combination of the synonym tool with the -exclusion operator. '~city -city' and '~state -state' return no results at all, meaning there are no tokens for these words... which makes sense. Try any keyword that someone would buy, or more likely already has bought, an AdWords ad for. Use '~keyword -keyword' and the result set will highlight synonyms. Scroll through 200-300 SERPs, though, and you will find just 3 to 5 stemmed synonyms on average. It would appear that either this method is not telling us everything about the synonym/token list, or... maybe it is, and maybe that is the problem. I don't assume this semantic factor has replaced the old algo, just that it could now determine the damping factor applied to the basic old algo.
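The '~keyword -keyword' logic itself is easy to reproduce over a toy token list, which shows why a probe can come back completely empty. A sketch with invented data:

```python
# Sketch of the '~keyword -keyword' probe over an invented token list.
# A result survives only if it matches a synonym token while NOT
# containing the literal keyword, so an empty result set implies the
# word has no synonym tokens at all.

SYNONYM_TOKENS = {
    "car": {"auto", "automobile"},
    "city": set(),  # no tokens: '~city -city' returns nothing
}

DOCS = {
    "d1": {"auto", "dealer"},
    "d2": {"car", "auto"},  # excluded: contains the literal keyword
    "d3": {"boston", "hotel"},
}

def probe(keyword: str) -> set[str]:
    """Docs that would match '~keyword -keyword'."""
    tokens = SYNONYM_TOKENS.get(keyword, set())
    return {d for d, words in DOCS.items()
            if words & tokens and keyword not in words}

print(probe("car"))   # {'d1'}: a synonym hit without the keyword itself
print(probe("city"))  # set(): no tokens, no results
```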
You apparently haven't done this search in the past few days.
The results are nowhere near anything resembling the old results. The results favor sites that are titled using the word "a". In other words, they are completely different from anything else.
"Since the Florida update it has been widely assumed that some ontology filter was being applied to the regular result set"
No one paying attention has assumed this. The first thing a person needs to know about Florida/Austin is that this statement is wholly untrue. This is a new algorithm, a new ranking system not related in any key way to the old.
Then there's the question of if/how all of the pages within a given site are related to each other in this regard...especially wrt the homepage. ;-)
Hi Caveman,
Could you expand on that thinking a bit?
Many of us noticed that pages that ranked highest and were index.html at the domain root dropped the furthest. The SERPs filled up with inner pages from directories, and from larger sites on the broad subject whose specific pages touched on the specific topic of the search.
In my SERPs, in the top 50 there are only a couple of root pages listed, and both of those are supported by an indented listing of a page on the specific topic, almost as though a general root page supported by a specific inner page carries extra weight.
Best wishes
Sid
No one paying attention has assumed this. The first thing a person needs to know about Florida/Austin is that this statement is wholly untrue. This is a new algorithm, a new ranking system not related in any key way to the old.
Hi Steve et al,
The way I see it, an algorithm is built up of components. There are many components in the new algorithm that are the same as in the old algorithm; really, there would have to be, because there are only so many variables on the page and in linking structures that can be analysed. The things that can be analysed remain the same; the way they are analysed has changed.
The contribution of each component can be increased or decreased, and some things added and others taken away. It seems clear to me that some of the key components of the old algo are still definitely there (anchor text, PR etc.) and that something else has been added. CIRCA, being a special kind of LSI, is in my view the #1 candidate for this new component. Everything Google has been playing with since it acquired Applied Semantics points to this. Expert opinion suggests that LSI would be incredibly inefficient if applied to a corpus of 3.3 billion pages with regular updates, because of the complexity of the calculations required.
The smart thing to do would be to apply it to smaller samples.
So how do you get smaller samples? Well, you get result sets from Google's index.
How do you decide which result sets to create and apply the analysis to? You compile a list of the most frequently used search terms.
And what do you do when you want to expand that compiled list? You expand it to related frequently used terms. (Sound like Austin, anybody?)
This addition of LSI/CIRCA to the existing algo would look different enough to appear like a whole new algo, without changing what Google previously had. If you accept this two-stage process, it explains why some terms were affected and some were not, and why more were added at Austin. If they are not doing this, then they need to get the thing patented quick and start using it, because it is killer search technology.
Best wishes
Sid
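For what it's worth, the LSI step Sid describes can be sketched in a few lines of numpy. Whether CIRCA works anything like this is pure speculation, and the documents and vocabulary below are invented; the point is that the expensive decomposition only ever runs on a small sample.

```python
# Minimal LSI sketch along the lines Sid describes: run the expensive
# semantic decomposition on a small result set only, never on the full
# index. Whether Google/CIRCA works like this is speculation; the
# documents and vocabulary here are invented.

import numpy as np

# Pretend these are the old algo's top results for a query (the sample).
docs = ["widget factory tour", "widget making guide",
        "widgetwork history", "blue paint prices"]
vocab = sorted({w for d in docs for w in d.split()})

# Term-document matrix for the sample only.
A = np.array([[d.split().count(w) for d in docs] for w in vocab], float)

# Truncated SVD: keep k latent "concept" dimensions.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T  # each row: one doc in concept space

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

# Cosine similarity in the reduced space groups related docs together.
# The SVD is cheap on a sample of ~1000 docs, hopeless on 3.3 billion.
print(cos(doc_vecs[0], doc_vecs[1]))  # the two widget docs: high
print(cos(doc_vecs[0], doc_vecs[3]))  # widget doc vs. paint doc: ~0
```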
I don't remember if anyone mentioned it here, but searching for ~travel -travel does not affect the AdWords. I think it's an important fact, but I cannot find the exact reason why it is so.
Hi Hannan,
AdWords can/does use broad match to choose the ads to present. Since the terms returned are the closest possible match to ~term -term, it makes sense that these will be perfect broad matches. In fact, maybe that's what they mean by broad match.
Best wishes
Sid
"This addition of LSI/CIRCA to the existing algo would look different enough for it to appear like a whole new algo without changing what Google previously had."
Exactly. I think a lot of folks have really made more of this than it is. It's a "candy coated" filter of sorts applied on top of the old Google. That's why everyone in academia is still enamored with Google results... they are for the most part the same. Yes, they changed a bit, as would be normal across two updates, but Google didn't throw out the old. It's also why practically everyone who optimizes for commercial purposes in competitive areas is in upheaval.
There is a thread in Keyword Discussions, started in March of 2001, on stemming and keyword "families" - on the future of stemming, LSI, and the use of categories.
It's worth a read. It takes a somewhat different approach to the same technology, and also has links to some good articles.
[webmasterworld.com...]
annej has posted a result here after just a few days, though it may still be too soon to say with certainty what caused the rise.
I hadn't made any other changes for months, other than swapping in new articles each month on the upper left-hand side. I had been sitting at #11 in the SERPs for the word 'widgeting' for months, since sometime after Dominic as I remember. I had crept up to #8 a few weeks ago, then to #7, and today it's at #5. I've got to suspect that it's the changes I made.
All I did was include some of the related words that I found using ~widgeting. I didn't add a lot of words; I just looked for places where I could change words without changing the meaning, like using widget making instead of widgeting and widgetwork instead of widget.
It looks to me like Google is finding pages with more depth on the topic this way. It will be interesting to see if my new results last. I'm curious to see if this works for other people as well.
If you've done this, won't you rank well regardless of LSI/non-LSI ranking algorithms?
In fact, they fail to mention that the larger the data set, the more imperfect and unpredictable the results. Consider trying to plot the relationships between 100 entities on a computer screen. With 100 entities you have 4,950 unique pairs. You need n-1, or 99, dimensions to plot them perfectly.
Don't believe me? Plot three points in two dimensions, where all three points are equidistant. No problem -- you have the three points of an equilateral triangle. Now add a fourth point, such that all four points are equidistant. You cannot do it in two dimensions. The fourth point has to be placed behind or in front of the screen. You now need three dimensions, or n-1.
When was the last time you were able to visualize even four dimensions? Our brains don't do very well in this area. You can set up a matrix chart with numbers and show all the data, but trying to reduce it means that you have to start cutting lots of corners.
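That n-1 claim can be verified numerically with classical MDS: double-center the squared distance matrix, and the rank of the resulting Gram matrix is the number of dimensions needed for a perfect embedding. A short check:

```python
# Numerical check of the n-1 claim via classical MDS: double-center the
# squared distance matrix; the rank of the resulting Gram matrix is the
# number of dimensions needed to embed the points exactly.

import numpy as np

def embedding_dim(n: int) -> int:
    D2 = np.ones((n, n)) - np.eye(n)     # n mutually equidistant points
    J = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    G = -0.5 * J @ D2 @ J                # Gram matrix recovered from distances
    return int(np.linalg.matrix_rank(G))

for n in (3, 4, 5, 100):
    print(n, embedding_dim(n))  # prints n-1 each time: 2, 3, 4, 99
```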
MDS is frequently illustrated by scholars using a road map chart that shows the driving distances between cities. They take these numbers and do an MDS plot. The map comes out pretty well, although it might be mirror-imaged or upside down.
What they don't tell you is that the reason it comes out okay is that the map was a mere two-dimensional situation to begin with. It's a stacked deck! In the real world, MDS will almost always be trying to plot more than two dimensions. The whole thing goes downhill very rapidly, at the same time as the crunching required goes up exponentially.
I use MDS as an example, because I have extensive experience with it from plotting 100 points on a screen. But it's the same thing with all of these fancy techniques. You have a very complex, multi-dimensional problem, and you have to reduce it to the top ranked results. Whether it's 10, or 20, or 100 in the top rank, I can assure you that the end product will leave you with that "filter feeling" we got from Florida and Austin.
Since the end product is so unsatisfactory in any case, and since the data sets involved are so vast, it makes sense to revert to something simpler, as opposed to throwing more sophisticated algorithms at it. I think Google will figure this out eventually. Good old word proximity, when your search terms involve more than one word, is pretty simple, has low overhead, and is probably more useful than all these fancy algorithms put together. I'm not saying that word proximity alone is sufficient, but I use it as an example because I believe that Google is using it less now than it used to.
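A sketch of the kind of low-overhead proximity signal being advocated here: score a page by the tightest window of text that covers all the query terms. This is an illustration only, not Google's actual scoring.

```python
# Sketch of the low-overhead proximity signal described above: score a
# page by the tightest window of text covering all the query terms.
# An illustration only, not Google's actual scoring.

def proximity_score(text: str, terms: list[str]) -> float:
    words = text.lower().split()
    positions = {t: [i for i, w in enumerate(words) if w == t]
                 for t in terms}
    if any(not p for p in positions.values()):
        return 0.0  # some query term is missing entirely
    best = len(words)
    # For each occurrence of the first term, find the smallest radius
    # that reaches one occurrence of every other term.
    for i in positions[terms[0]]:
        radius = max(min(abs(p - i) for p in positions[t]) for t in terms)
        best = min(best, radius)
    return 1.0 / (1.0 + best)  # tighter cluster of terms scores higher

print(proximity_score("cheap widget parts online", ["widget", "parts"]))
print(proximity_score("widget history and parts of speech", ["widget", "parts"]))
```

The whole thing is one pass over the token positions: cheap to compute at serve time, which is the point being made about overhead.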
If Google is using lots of fancy stuff, then Google has gone too far. You'll burn your brains out trying to optimize for it.
use pristine little data sets to illustrate their points. Such is the case with the paper cited at the beginning of this thread, which uses MDS to illustrate its technique.
Hi,
That's the point. I'm almost certain that Google is doing this as a stepped process.
Step 1. Select a relatively small sample using algo 1 (something like the old Google algo, for example).
Step 2. Run CIRCA indexing on it.
Step 3. Combine the results from steps 1 and 2 and present them in the user's browser.
The neat trick is selecting a smaller sample to work with. That's why I think we always end up with a maximum of 1000 results in the SERPs. It's a kind of blinding flash of the bleeding obvious. (See the sketch below.)
Best wishes
Sid
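Sid's three steps, written out as a pipeline sketch. The two scoring functions are stand-ins for whatever Google actually runs; the shape is what matters: a cheap pass over everything, an expensive pass over at most 1000 survivors.

```python
# Sid's three steps as a pipeline sketch. The two scoring callables are
# stand-ins for whatever Google actually runs; the shape is the point:
# a cheap pass over everything, an expensive pass over <= 1000 survivors.

def search(query, index, old_algo_score, semantic_score, cap=1000):
    # Step 1: select a small sample with the cheap old-style algo.
    sample = sorted(index, key=lambda d: old_algo_score(d, query),
                    reverse=True)[:cap]  # the familiar 1000-result limit
    # Step 2: run the expensive CIRCA-style analysis on the sample only.
    sem = {d: semantic_score(d, query, sample) for d in sample}
    # Step 3: combine both scores and present the final ordering.
    return sorted(sample,
                  key=lambda d: old_algo_score(d, query) + sem[d],
                  reverse=True)

# Toy usage with trivial stand-in scorers (invented, for illustration):
docs = ["widget guide", "widget widget spam", "gadget news"]
print(search("widget", docs,
             old_algo_score=lambda d, q: d.split().count(q),
             semantic_score=lambda d, q, s: 2.0 if "guide" in d else 0.0))
# The semantic pass lifts "widget guide" over the keyword-stuffed page.
```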
Yeah, great content is, indeed, important. But, the only way you're going to know what G or other search engines think is great content is by looking at what they select. And, you might not agree with it.
Or, in some cases they'll use x and in others y.
We really need some help, for example, with city, state searches. But, who is to say what is great content in this kind of search? Hotels often come up. Well, maybe that's what most are looking for? I don't know. Florists are big in many city, state searches. Weird? No, not really in the overall scope of things, if you have to pick just 10 things.
When you only have a couple of words, there is all kinds of room for misunderstandings. So, categorizations will eventually come into play, I predict.
And of course the maths can be extended into as many dimensions as required. But with dozens of potentially relevant words on each page, it always struck me that it was unlikely any search engine would have the processing power to analyse the billions of pages out there.
Google must be using shortcuts to do this. The subjective success or failure of the algo must depend upon the mix of shortcuts (word count/proximity etc.) with this more sophisticated stuff.
Perhaps they got this mix wrong with Florida and Austin. A reasonable precis, Scarecrow?