>What isn't in the patent... a method of discerning niche authority
Would this need to be in? Once a set of results for a keyword/phrase has been found, the 'authority' sites could then be identified, resulting in the 'niche authority' for that search.
|The big story is the suggestion that high CTR in adwords can influence search engine ranking. Am I dreaming, or weren't there some "over my dead body" posts from google representatives concerning linkage between adwords and search engine ranking? |
I've seen claims of that nature by various google entities - and I'm still prone to believe them.
just because they've patented it doesn't mean they're doing it.
I also suspect there's a lot of stuff they -are- doing that's not in that doc.
I think that was Adsense CTR if you use it on your site. Not Adwords on Google's pages.
The entry about spikiness gives it away. They do not say how they will differentiate the spam from the genuine article.
There would seem little point in patenting the method then.
|So Google is basing SERP rankings on the Toolbar? |
I think the patent relates more to the Google browser/OS/Application that they're rumoured to be producing.
I think the patent describes certain features of a search application as well as a search engine.
If Google had a browser of their own, most of this user tracking would be very simple.
I hate the idea of registering for 100 years!
Any thoughts for those who can't spend $100+ just on registration?
Plus, when you're just trying out an idea, do you always register for many years?
I always experiment and give myself a year or two before reaching a final decision.
So a good trial site might be thrown out of the SERPs just because it wasn't registered many years ahead of time.
Important for us AMs: Google became affiliated with Amazon.com in 2003.
"The multiyear agreement will make Google's search technology and targeted sponsored links available on Amazon.com within the next several months. In fact, sponsored links are already available on a selection of Amazon.com Web pages."
I have a lot of Amazon links on my affiliate gifts site, but you can believe it, I am adding a TON more.
I contacted my host about paying for several years for my domain name.
Just when I was thinking my AM site would never "make it"...maybe it will now...
|I contacted my host about paying for several years for my domain name. |
I would be cautious. It may give the opposite effect. We first should figure out *how* Google will use its claims (if it will).
For example, there is not much sense in registering a regular site's domain many years in advance. It does make sense, though, for a domain that's up for sale, if the name is good and easy to remember: a long registration is a clear sign that you'd have to pay to get it.
So Google might begin to treat your 100-years-in-advance registered site as a site whose content is going to vanish very soon.
As far as I can understand from Google's recommendations to webmasters, any optimization that the user cannot benefit from is considered spam.
The truth is, if webmasters concentrate on what benefits and better targets the consumers/visitors, then you will rank fairly. Google knows this, and this patent confirms it. Of course you need to create sites for robots as well as visitors, but without trickery or an overemphasis on the SE; the concentration should be on the visitors.
But of course SEO isn't about ranking...its about ranking fast, because anyone can rank the slow and easy way. ;)
This is getting ridiculous. I'm going to keep on doing what I've done for ten years - make my web site as easy to use as possible - and let Google do whatever it does.
Otherwise I'll be running around in circles like a headless chook.
I agree Danny. Make a good website and it will become popular. SEO should only be in the back of your mind while making a good website is in the front of your mind.
... and that's probably exactly the effect they were hoping this patent application would have ;)
google toolbar with page rank option turned on is enough for G to know how long you spend there and if you go to Amazon from there.
They have got all the bases covered.
Not quite. The Google toolbar hasn't been released for the Mac yet, and that leaves out a lot of web designers, particularly graphic designers.
Re the release date just before April Fools--isn't it the patent office that controls when info is posted and not the applicant?
Consider the alexa toolbar using sample data to determine traffic rank for a website.
Check out WebmasterWorld: alexa.com/data/details/main?q=&url=http://www.webmasterworld.com or your own site.
Consider also that alexa is amazon.
Biggest surprise in all this is how this
|That must be why 'br*tney sp*ars n*de' pages are all over the first page for many terms that have nothing to do with her or nudity. |
<altered in the name of "good taste">
got past Brett's "snip fingers" or the "auto censor"..
Which knobs and buttons they push has always depended on variables in your site and its category... so what's new?
Example: hidden text is the easiest thing in the world to algo out, but they only penalize it sometimes. We all know pages where it's there. So you'll just drive yourselves crazy trying to work out at what point the wires are tripped for any or none of this stuff.
Game was always on ..
And anyone's toolbar was always spyware ...
|For instance, search engine 125 may monitor whether physically correct address information exists over a period of time, whether contact information for the domain changes relatively often, whether there is a relatively high number of changes between different name servers and hosting companies, etc |
Four months ago we changed our company's name; two weeks ago our hosting company was bought by a big player, so our DNS will change soon.
Should our website be punished for these reasons?
|0088] According to an implementation consistent with the principles of the invention, information relating to traffic associated with a document over time may be used to generate (or alter) a score associated with the document. For example, search engine 125 may monitor the time-varying characteristics of traffic to, or other "use" of, a document by one or more users. A large reduction in traffic may indicate that a document may be stale (e.g., no longer be updated or may be superseded by another document). |
Could this explain the recent acquisition of Urchin?
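Just to make paragraph [0088] quoted above concrete, here's a minimal sketch of one way a "large reduction in traffic" could be turned into a staleness signal. The function name, the windows, and the numbers are all invented for illustration; the patent doesn't say how (or whether) Google actually computes this.

```python
def staleness_score(weekly_visits):
    """Compare recent traffic to the long-run average; a large
    drop suggests the document may be stale. Hypothetical metric."""
    if len(weekly_visits) < 8:
        return 0.0  # not enough history to judge
    history = weekly_visits[:-4]
    baseline = sum(history) / len(history)   # long-run weekly average
    recent = sum(weekly_visits[-4:]) / 4     # last four weeks
    if baseline == 0:
        return 0.0
    # 0.0 = traffic steady, 1.0 = traffic vanished entirely
    return max(0.0, 1 - recent / baseline)

steady = staleness_score([100] * 12)            # -> 0.0
fading = staleness_score([100] * 8 + [10] * 4)  # -> 0.9, a big drop
```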
Having spent the weekend re-reading and reflecting, I return to the point someone made earlier (can't find it to reference it - I know it's there somewhere): the processing requirements make these claims near impossible to implement wholly.
The conclusion from reading the original Hilltop document was that you would need a two-step process to rank the pages. Doing that on Google's 8-billion-document universe would be impossible, not least because G's response time is possibly its most competitive factor.
That is no less true of these claims. While many relate to static valuations of a document - such as inception or discovery date - and could be built in as a constant in the ranking algo, others are query-dependent (like the age of anchor text, where query terms are included) and would need to be calculated on the hop. Even with massive improvements in processing power, that would be impossible without a decrease in performance.
So if these claims are being implemented to any degree, it's probably over a small subset of "money terms".
That's probably little comfort to many of the people who participate in these pages. Nonetheless, given that many of these claims are either a) in conflict with other search objectives or b) mad, it shouldn't be difficult to extract a short workable list of points from the patent, such as:
1. watch for linking "spikiness" (what a word!). Build links constantly and steadily. Plan long-term.
2. determine whether your target phrases are better served by stale pages or new pages (how? testing?), then act accordingly.
3. use hosting and nameservers perceived as quality.
4. bookmark your pages
5. hide affiliate links that are perceived as poor quality. Show those perceived as quality. (I imagine we're talking about more than just Amazon)
6. If you use the G toolbar or allow cookies, be aware that you're being watched (we knew that anyway!)
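On point 1, a "spikiness" check needn't be exotic. A toy sketch of the kind of test the patent hints at (all thresholds and data invented; nothing here is Google's actual method):

```python
def link_spike(monthly_new_links, factor=5):
    """Return the index of the first month whose new-link count
    exceeds `factor` times the average of the preceding months,
    or None if growth looks steady. Hypothetical detector."""
    for i in range(1, len(monthly_new_links)):
        avg_before = sum(monthly_new_links[:i]) / i
        if avg_before > 0 and monthly_new_links[i] > factor * avg_before:
            return i  # suspicious month found
    return None

link_spike([10, 12, 11, 300])  # -> 3 (a sudden burst of links)
link_spike([10, 12, 14, 16])   # -> None (steady growth)
```

Which is exactly why "build links constantly and steadily" is the takeaway: steady growth never trips a ratio test like this one.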
>> processing requirements
Not all processing needs to be done on-the-fly, some can be prepared beforehand. Your example with a table of the different variations of anchor text could easily be prepared as a batch job.
|Your example with a table of the different variations of anchor text could easily be prepared as a batch job. |
But for what % of search terms? And for terms outside the top, say, 1%, ordered by commercial value, how often could you run the batch?
We're talking about over 8 billion documents. And I've heard the statistic of 50% of search terms being unique. (which, curiously enough, are the ones I target ;) )
Thanks SlyOldDog, Elguiri and Claus, for the most incisive comments and conclusions.
I particularly liked your five types of rating/ranking theory, Claus.
I may try and struggle through the whole patent, but feel much less need now.
>> But for what % of search terms?
I did not mean search terms. For 100% of anchor text terms. It could run.. well, i don't know... perhaps every time any page was indexed: If link found, then add anchor text + URL to the table.
If you then wanted to do something at search time, then it will be a simple lookup in a table, no expensive calculation needed.
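The batch-table idea really is that cheap at query time. A minimal sketch (the table layout and data are made up; Google's real index is obviously nothing this simple):

```python
from collections import defaultdict

# Built as a batch job at index time: anchor text -> target URLs.
anchor_table = defaultdict(set)

def index_link(anchor_text, target_url):
    """Record one (anchor text, URL) pair as the crawler finds it."""
    anchor_table[anchor_text.lower()].add(target_url)

def lookup(query):
    """At search time: a plain dictionary lookup, no calculation."""
    return anchor_table.get(query.lower(), set())

index_link("blue widgets", "http://example.com/widgets")
index_link("Blue Widgets", "http://example.org/shop")
lookup("blue widgets")  # -> both URLs, instantly
```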
>> 8 billion documents
I personally suspect that at least 10% of those are not documents as such. Still, even for the part that are real documents - it's a lot of documents. But then again, Google has got a whole lot of capacity, and they increase it continually.
re:"What bothers me:
1. You can buy sitewide links with totally unrelated terms and your competitor is toast. Someone could do it to you. If you read this, links and anchor text are still king; directly or indirectly they make up about 90% of the ranking. They can make or break you or your competitor. The prices will definitely go down now, so it's not expensive to nuke your competitor."
Are you saying that sitewide links are advisable, or inadvisable? I'm just wondering because I've seen a couple sites go to the top of the MSN serps using site-wide text links purchased from other sites. These same sites don't do well at all in google and yahoo though, which makes me wonder if google and yahoo can counteract this attempt to artificially boost position (game the system) a lot better than msn can.
Google has always implied that there is no way your competitors can harm your ranking. If getting too many links too quickly could hurt someone's ranking, then we could all add our competitors to every page of several 5000-page websites and squash them off the search engines.
It will never happen. I honestly believe that most of the verbiage in this patent is old data that Google has been writing for years. Their way around "too many links too quickly" is called "the sandbox".
What is the patent number?
Many (most?) of the posts in this thread are implicitly from the view that Google has "a" algorithm, that human beings are putting individual variables into a complex function and then trying to tune those parameters to get good SERPs.
You can see this implicit point of view in everything from comments like "that's too many variables -- this just must be a laundry list of things they might like" to the more refined "too many control variables leads to instability, so I doubt this is worth reading".
However, the Google founders are coming from a data mining background, and while there almost certainly is some formulaic aspect to how Google calculates SERPs, it is increasingly unfruitful to try to understand Google's SERPs without understanding data mining.
From the point of view of data mining, there is no great problem adding more variables (certainly up to hundreds) so long as you have computing power (which Google can manage). From the point of view of data mining, there is no "algorithm", at least not in the sense that any human being understands how a particular SERP ranking was calculated.
It's better to think of data mining as a giant black box. You pour in variables you think might be relevant in the top, you give the machine a sample set of pages and how they *should* be ranked, then you let that baby grind away. Data mining machines can learn how to find incredibly complex associations all on their own -- they just need the horsepower to grind away. In this case, the associations are between an ever-growing number of variables that Google engineers can supply, and a sort order that a real human agrees makes sense. IOW, this is more a process of "training" than "tuning", which is why there's no particular problem of instability that results from throwing in a few dozen more variables.
Just as a computer can beat you at chess by brute-force trying every possible move and exploring its implications (though, in practice, they take some shortcuts to shrink the solution space to a manageable size), data mining can devise a very good and complex algorithm that produces the desired results by brute-force trying all combinations of the input variables to see what works (though, in practice, they take some shortcuts to shrink the solution space to a manageable size).
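The "training, not tuning" idea can be shown in miniature. In this sketch the machine is handed feature vectors plus a human-preferred ordering and finds the weights itself; the features, data, and the simple pairwise update rule are all invented for illustration and bear no relation to Google's actual machinery.

```python
def score(w, features):
    """Linear score: weighted sum of the feature values."""
    return sum(wi * fi for wi, fi in zip(w, features))

def train_pairwise(prefs, n_features, epochs=100, lr=0.1):
    """prefs: list of (better, worse) feature-vector pairs, i.e. a
    human saying which document *should* rank higher. Nudges the
    weights until score(better) > score(worse) for every pair."""
    w = [0.0] * n_features
    for _ in range(epochs):
        for better, worse in prefs:
            if score(w, better) <= score(w, worse):
                for i in range(n_features):
                    w[i] += lr * (better[i] - worse[i])
    return w

# Invented features: [anchor-text match, age in years, link spikiness]
prefs = [([0.9, 3.0, 0.1], [0.2, 0.5, 0.9]),
         ([0.7, 2.0, 0.0], [0.8, 0.1, 1.0])]
w = train_pairwise(prefs, 3)
# After training, each "better" document outscores its "worse" partner,
# and no human ever wrote down the formula.
```

Scale that up to hundreds of variables and millions of judged pages and you get the black box: nobody at Google needs to know *why* a given weight ended up where it did.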
Once the input variable list includes things like "does this page contain words about topic X" (e.g., real estate, travel, sex, etc.), the resultant behavior looks nothing like a simple formula, and can easily explain all the imagined cases of Google engineers personally tweaking knobs or manually altering behavior for specific types of websites or topic areas.
Can you still game Google's algorithm? Sure. But it's a good bet it will continue (as it already has) getting harder and harder to game, requiring constant attention. As Google gets better and better at recognizing good content, the dreaded alternative of simply incrementally building a website with good content over time becomes more attractive.
In the past, it was probably possible to actually use data mining yourself to get pretty close to Google's algorithm, particularly for small sets of keywords. However, as Google incorporates more variables that only they can calculate (such as historical analysis of page rank -- you probably haven't been storing a copy of the top 500 websites for each of your favorite keywords for the last 5 years that you can analyze), that too becomes more difficult.
In general, people that think Google is incredibly smart have underestimated how easily their algorithm can be gamed. But we're getting to the tipping point now where more people who work at gaming Google are underestimating how difficult they can make it, and this is largely a lack of understanding of how data mining can make a hugely complex formula easy to construct and manage (without any human being ever having to understand that formula). You can track this sea change by graphing the percentage of SEO posts of the form "but I did all the stuff I'm supposed to and still don't rank well".
What Google can do with pixels, SEO games with a broad brush and a tin of vinyl matt.
Great post ronburk
It's nice to see a well-written and new spin on things. I always considered Google's algo to be simpler than we sometimes think, but this explains how baffling it can be.
Most excellent post, ronburk.
IMHO google went black box with the Florida Update, nearly 17 months ago, [black box: A device or theoretical construct with known or specified performance characteristics but unknown or unspecified constituents and means of operation], and as such is impossible to reverse engineer.
The way I see them making the ideas in this new patent work is that they would take a set of known non-spammy sites and a set of spammy sites and, for each of the parameters listed in this patent application, do some preliminary statistical analysis of the sites for some tail chopping and/or for tagging. Then they would run what's left, with a high degree of manual review, to get a good training set, and then it would be off to the number-crunching races.
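The "tail chopping" step described above could be as simple as a z-score cut per parameter. A rough sketch, with one invented parameter (new links per month) and invented numbers; only the general idea, not anything from the patent itself:

```python
import statistics

def tag_outliers(clean_values, candidates, z_cut=3.0):
    """Given the parameter's distribution over known-clean sites,
    tag any candidate value more than z_cut standard deviations
    from the clean mean. Purely illustrative."""
    mean = statistics.mean(clean_values)
    sd = statistics.pstdev(clean_values)
    return [v for v in candidates if sd and abs(v - mean) / sd > z_cut]

# new links per month across a set of known non-spammy sites
clean = [10, 12, 9, 11, 10, 13, 8, 11]
tag_outliers(clean, [10, 14, 250])  # -> [250] flagged for review
```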
We do a lot of note comparison here at WebmasterWorld. While I've learned a great deal here, there is also a lot of confusion. I have sites in several industries, but they are not representative of all sites, nor of the web as a whole. Many others here have a site or two in some special niche, while others occupy virtually every industry that's in the money. We all view the feedback we get here through the eyes of our site(s), trying to put forth generic information and glean what we can from others, but nearly always in terms of how different things affect our own sites. I see many people say things like "content is king; all you need is the best content in your niche and people will link to you". That may be true for your particular niche, but applying it to all sites just does not extrapolate. Some sites can benefit from a large amount of SE spam, while the same amount of spam on a different site might get it banned immediately.
What it all comes down to is that Google is getting better, and will continue to get better, at detecting spam and removing sites that violate its guidelines from the SERPs, by hook or by crook. This patent application gives us a pretty good idea of what they are looking at, or intend to look at, to do this. The industries that are the most spammy will likely be the hardest hit. Other industries may be little affected, which will only add to the confusion.
The nicest thing I got out of reading the application is that google is still all about linking.
As outlined in msg #96 i think that there are query specific issues ("ranking") and non query specific issues ("rating").
There's some kind of limit to how many ranking parameters you would like to have at query time, but there's no limit to rating parameters as these can be calculated any time.
(just another way of looking at "data mining", essentially)
Regarding "stickiness": how will G measure this? Crawl logs, or something else?
So do AdWords count for the CTR?