1. Document Scoring Based on Traffic Associated with a Document [appft1.uspto.gov] [April 19, 2007 - Steve Lawrence]
2. Document Scoring Based on Query Analysis [appft1.uspto.gov] [April 19, 2007 - Jeffrey Dean]
3. Document Scoring Based on Link-Based Criteria [appft1.uspto.gov] [April 26, 2007 - Anurag Acharya]
4. Document Scoring Based on Document Inception Date [appft1.uspto.gov] [April 26, 2007 - Matt Cutts]
But I agree tedster, the traffic one is most interesting. Looking more at that one right now.
The 2005 History and Age Data [webmasterworld.com] patent uncorked a lot of date-related stuff, link aging etc. So there's probably some juicy new stuff in these.
One thing that is interesting to me is how they are determining what constitutes an advertisement, which is not specified. In any event, it was intriguing to read, since just this week we declined to run ads on one of our sites because we felt the advertiser was sub-par. Dunno if what's in this doc is or will go into actual practice, but it sorta supported the choice we made, to the extent that we made the choice honestly in part out of fear. Hate leaving that money on the table. But G seems to have concluded that if all your advertisers are low quality sites, then perhaps your site is as well.
Also, for those who actually read the doc: I personally would not assume that just because they note Amazon as a quality advertiser whose ads might appear on your site (perhaps even to your site's benefit), G's algo likes seeing sites where the majority of outbound links go to Amazon, or any other quality advertiser/affiliate merchant. Just a caution. Not a knock. :P
Another thing that intrigues me here is that this has the potential to further aggravate what I see as an already spiraling problem. Namely, webmasters have become so nervous about linking to smaller sites, and so prone to linking to sites like Wikipedia to prove that they themselves are quality, that it's becoming impossible for small gem sites to rank. Everybody's afraid.
And now people are going to start accepting ads based in part on the same logic (as my own story above implies). G's compulsive focus on only links to and from big trusted authority sites, even now WRT advertisements, is very, very bad for the Web, and has already led us to a place where WAY too many listings high in the SERPs include Wikipedia and a small number of other overly trusted, overly valued sites.
the traffic one is most interesting. Looking more at that one right now.
But after diving into it, it looks like the patent is all about advertising and not organic SEO. So it looks like the "traffic" part has to do with PPC, which makes sense: show ads from advertisers that have more traffic (so Google can make more money).
And that is only one small part of the full doc, even though it's the focus of the intro. A bit confusing in that respect. To me anyway.
Add the social bookmarking aspect to this: it would make sense that if a site gets a lot of traffic via social bookmarking then "real visitors" are saying that they like the document. So a search engine should reciprocate by ranking the document accordingly.
I'd be more appreciative of their achievements if they had not had such a profoundly negative effect on the Web overall, in the past few years. Sad, really.
Also, more than a bit hypocritical that they publicly eschew commercial sites (i.e., in favor of informational sites), run AdSense on clearly-spammy-and-entirely-commercial sites, and now regard ads on informational sites as quality indicators. Hehe. Talk about irony.
Hey, I'm thinking of placing a bunch of free ads promoting various G services on some of my sites, to get my rankings up. Just kidding. :p
2- the last methods describe everflux as I've observed it, and as discussed in a Tedster thread a few weeks ago:
24. The system of claim 23, further comprising: means for determining whether the document is authoritative; and means for bypassing the negative adjustment of the score when the document is determined to be authoritative.
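That bypass in claim 24 is simple enough to sketch. Here's a toy version of my reading of it — made-up Python, nothing from the actual filing:

```python
# Toy sketch of claim 24 as I read it: a negative score adjustment
# is skipped entirely when the document is deemed authoritative.
# Function name and numbers are my own invention.

def adjust_score(score, penalty, is_authoritative):
    """Apply a negative adjustment unless the document is authoritative."""
    if is_authoritative:
        return score          # bypass the penalty entirely
    return score - penalty

# A non-authoritative page takes the hit; an authority doesn't:
print(adjust_score(10.0, 3.0, is_authoritative=False))  # 7.0
print(adjust_score(10.0, 3.0, is_authoritative=True))   # 10.0
```

Which would go a long way toward explaining why the big trusted sites seem to sail through updates that hammer everyone else.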
Google has a habit of destroying the good things on the net.
Take for instance MFA sites. Google are the creators of this spam and yet still allow it to flourish.
The sooner google stops INNOVATING on the net ... the better the net will become!
Social bookmarking is already well spamfested. Now they are advocating it.
Google is like King Midas: everything he touched turned to gold. He thought this was great, until he hugged his daughter and turned her into a golden statue.
You don't want to be embraced by Google. You want to be invisible to them, lest they apply their golden touch.
 FIG. 2 is an exemplary diagram of a client or server entity (hereinafter called "client/server entity"), which may correspond to one or more of clients 110 and servers 120-140, according to an implementation consistent with the principles of the invention. The client/server entity may include a bus 210, a processor 220, a main memory 230, a read only memory (ROM) 240, a storage device 250, one or more input devices 260, one or more output devices 270, and a communication interface 280. Bus 210 may include one or more conductors that permit communication among the components of the client/server entity.
What a load of hot air.
--- skip this part, go down to "chronology" ---
The whys and hows that I can imagine.
Paid links and ads that link to cr@p:
The current problem is, paid links are generating some automated parameters and make the target document rank just because it receives n number of relevant, quality inbounds, while in fact the actual content may have no value related to the targeted keyphrase. Or in other words, people who do a search for that phrase wouldn't be satisfied with what they find. Especially if it installs something on their computer. In AdSense there's little telling of what user satisfaction is after a click - for PPC anyways - unless they pair the data up with Analytics or whatever data they find lying around.
Paid links, ads that link to value:
The amount of references and amount of passed parameters that a specific, smaller site/page/business would need currently makes it impossible to break into Google, even if the given content would be much more useful for the users, for it's local / more specific / newer, whatever. But exactly because of these qualities ( it is too specific to touch enough web-sensitive people ) it will not generate the amount of natural links needed to pass the threshold for trust, or will simply not get past relevancy calculations. Instead you'll see seven nonexistent Wikipedia pages. In AdSense the big players may easily outbid the locals with equally relevant pages just to keep them from being able to build momentum. Even if the global company offers nothing but a landing page that was very well optimized. While people would in fact love the local service, but... did not find it.
That's why most of the sites bought links in the first place, so if they invent something to counter these effects, that's like saving two birds from the same stone.
The new system would take note of the reference from the advertiser, and see if people were fond of the advertisement ( clicked it or not ) and/or the actual content ( hit 'back' within 10 seconds / did not browse further / closed browser or not ).
It may include paid links from the major players. Text link brokers, news sites, even paid directories. It would know of the relation unless the links are hidden, including instances of a link passing no parameters.
It would further analyze the links, and where it doesn't look at relevancy, PageRank, TrustRank ( as it does for links it finds "natural" ), it would look at the traffic a link generates.
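Purely to make the speculation above concrete: treat an ad click as "satisfied" unless the visitor bounced back within roughly 10 seconds, and rate the link by the ratio. The names and the threshold here are my own invention, not anything from the patent:

```python
# Illustrative only: score a paid link by the share of its visitors
# who did NOT immediately hit 'back'. The 10-second cutoff echoes the
# speculation in the thread; it is a guess, not a documented value.

BOUNCE_SECONDS = 10

def satisfaction_ratio(dwell_times):
    """dwell_times: seconds each visitor spent after clicking the ad."""
    if not dwell_times:
        return 0.0
    satisfied = sum(1 for dwell in dwell_times if dwell > BOUNCE_SECONDS)
    return satisfied / len(dwell_times)

# A link whose visitors mostly stick around scores higher than one
# whose visitors all bounce straight back:
print(satisfaction_ratio([120, 45, 3, 300]))  # 0.75
print(satisfaction_ratio([2, 4, 1, 9]))       # 0.0
```

Under this kind of scheme a paid link to genuinely useful content would keep earning its keep, while a paid link to cr@p would score itself into oblivion.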
--- Chronology ---
'nofollow' panic originating from MC blog.
Webmasters flag advertisements for Google with a nofollow.
Links that are clear as daylight to be paid ads and do not have nofollow are devalued, and flagged as ads. Practically speaking, Google applies that nofollow for you if you forgot to.
Links that aren't detected as paid text links continue to pass PR, relevancy and TrustRank ( if they aren't devalued by the ultra-strict thematic trust-relevance checks, see -950 thread )
Links that are now identified as paid ads get a new, traffic based ranking, just for them.
Ads that perform well and ads with high user satisfaction probably generate a score for both ends. Sites that are on-topic and ads that are on-topic get a higher quality ranking. Sites with certain patterns in their ad campaigns, and sites with certain patterns in the ads they show, will see either more or less of this score. Having only a few but effective ads on a site, or only a few but effective ads pointing to it, is probably better than having 500,000 sitewides on barely relevant pages.
Even without passing "natural" link-related parameters, text links and other ads are integrated into the SERPs to offset the current balance, allowing smaller sites to perform better for specific searches.
Google users are more satisfied.
Text link ads are both devalued and keep their worth, but start working for their algo. Any kind of ad, in fact, will be working for them in regard to determining user satisfaction in the areas where most of the references HAVE to be ads, because the given area is so specific, or exactly the opposite... it's that competitive.
Webmasters are better off than not having this in place.
SEO isn't really changed, for this doesn't really relate to SEO; at least it doesn't provide new dimensions to it. It's rather the content that counts.
Everyone is happy.
And then I woke up.
...There are several factors that may affect the quality of the results generated by a search engine. For example, some web site producers use spamming techniques to artificially inflate their rank. Also, "stale" documents (i.e., those documents that have not been updated for a period of time and, thus, contain stale data) may be ranked higher than "fresher" documents (i.e., those documents that have been more recently updated and, thus, contain more recent data). In some particular contexts, the higher ranking stale documents degrade the search results.
 Thus, there remains a need to improve the quality of results generated by search engines.
"stale" documents... may be ranked higher than "fresher" documents
Thanks for the info - good thread.
cookies when you do a search and go back to it
This is an interesting discussion. It sounds like Google tried to do the same thing with the ridiculous nofollow tag.
Yesterday Wikipedia decides not to use nofollow. Their outlinks mean a lot. Today they turn on nofollow, and suddenly all the SERPs are changed radically. How arbitrary and stupid is that?
A system determines: an extent to which advertisements are presented or updated within a document, a quality of an advertiser associated with an advertisement provided within the document, whether an advertisement in the document relates to an advertising document that has more than a threshold amount of traffic, and/or an extent to which an advertisement provided within the document generates user traffic to an advertising document related to the advertisement.
[edited by: tedster at 7:15 pm (utc) on April 30, 2007]
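For what it's worth, here's how I picture those four signals combining into a single adjustment — a toy weighted sum where the names, weights, threshold, and even the signs are all guesses on my part, nothing from the actual filing:

```python
# Toy illustration of the four signals in the quoted abstract folded
# into one number. Everything here (weights, the 1000-visit threshold,
# whether each factor helps or hurts) is invented for illustration.

TRAFFIC_THRESHOLD = 1000  # stand-in for the "threshold amount of traffic"

def ad_based_adjustment(ad_extent, advertiser_quality,
                        advertiser_traffic, clickthrough_traffic):
    """Inputs normalized to 0..1, except advertiser_traffic (raw visits)."""
    over_threshold = 1.0 if advertiser_traffic > TRAFFIC_THRESHOLD else 0.0
    return (0.25 * ad_extent +
            0.25 * advertiser_quality +
            0.25 * over_threshold +
            0.25 * clickthrough_traffic)

# A page carrying modest, high-quality, well-trafficked ads:
print(ad_based_adjustment(0.4, 0.9, 50000, 0.6))  # 0.725
```

The interesting question, of course, is which of those weights are positive and which negative — plastering a page with ads could just as easily count against it.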
 In summary, search engine 125 may generate (or alter) a score associated with a document based, at least in part, on information corresponding to individual or aggregate user behavior relating to the document over time.
The text of these four new patents seems particularly chaotic - each one contains sections that are about the other three patents' areas, and that text seems disconnected -- as if it was copy/pasted in. So the relationships between all four patents are not at all clear to me right now.
1. Traffic: scoring based on advertising on the page
2. Query Analysis: which search result gets the click
3. Link-Based: freshness, staleness and churn around links
4. Document Inception Date: methods for determining when a document was published
Does this all point to methods for an attack on paid links?