Identifying excessively reciprocal links among web entities
United States Patent Application 20090013033
pageoneresults

Msg#: 3824604 posted 4:12 pm on Jan 12, 2009 (gmt 0)

United States Patent Application 20090013033 [appft1.uspto.gov]

Abstract
A method for identifying reciprocal links is provided. At a particular host, the set of hosts which link to the particular host and the set of hosts to which the particular host links are determined. The intersection and union of the two sets of hosts are also determined, and the sizes of the intersection and union are calculated. The concentration of reciprocal links at the particular host is calculated based on the sizes of the intersection and union. A ratio of the intersection size to the union size is used to determine the concentration of reciprocal links. The particular host's rank in a list of ranked search results may be changed as a result of identification of a high concentration of reciprocal links.
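
In other words, the measure described here is just the Jaccard index of the inlink and outlink host sets. A minimal sketch of the calculation in Python (the function and variable names are mine; the application specifies only the ratio itself):

```python
def reciprocal_link_concentration(inlink_hosts, outlink_hosts):
    """Estimate the concentration of reciprocal links at a host.

    inlink_hosts:  set of hosts that link TO this host
    outlink_hosts: set of hosts this host links OUT to

    Per the abstract, concentration = |intersection| / |union|,
    i.e. the Jaccard index of the two host sets.
    """
    union = inlink_hosts | outlink_hosts
    if not union:
        return 0.0
    intersection = inlink_hosts & outlink_hosts
    return len(intersection) / len(union)

# Example: 3 hosts link both ways, out of 10 distinct linking partners.
inlinks = {"a.com", "b.com", "c.com", "d.com", "e.com", "f.com"}
outlinks = {"a.com", "b.com", "c.com", "g.com", "h.com", "i.com", "j.com"}
print(reciprocal_link_concentration(inlinks, outlinks))  # 0.3
```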

 

Shaddows

Msg#: 3824604 posted 6:04 pm on Jan 12, 2009 (gmt 0)

A succinct overview of the war between SEOs and search engines is contained therein:


[0005]For example, if a search engine ranks a web page based on the value of some attribute of the web page, then the web page's author may seek to alter the value of that attribute of the web page manually so that the value becomes unnaturally inflated. For example, a web page author might fill his web page with hidden metadata that contains words that are often searched for, but which have little or nothing to do with the actual visual content of the web page. In another example, a web page author adds to a web page many incoming hyperlinks, also called inlinks, based on the observation that web pages more frequently referenced by other web pages are generally considered by search engines as being of higher relevance. One method used by web page authors to increase the number of inlinks in web pages is to create web pages with "reciprocal links," where two web pages both link to each other, resulting in an increased number of inlinks for each reciprocally linked web page.

[0006]When web page authors engage in these tactics, the perceived effectiveness of the search engine is reduced. Spurious references to web pages which are not useful for users and are meant to boost search rankings sometimes push poorer results above web pages that users have previously found interesting or valuable for legitimate reasons. Thus, it is in the interests of those who maintain the search engine to "weed out," from search results, references to web pages that are known to have been artificially manipulated in the manner discussed above.

[0007]Therefore, there is a need for an automated way of identifying web pages that are likely to have been manipulated in a manner that artificially inflates the rankings of those web pages within lists of search results. Specifically, there is a need for an automated way of identifying web pages whose numbers of inlinks have been artificially boosted by reciprocal links in an effort to achieve higher rankings for those web pages.

(Description, para 0005-0007)

Seems like a sensible approach to me, but on a casual (very quick) reading, you would only have to increase your ratio of non-reciprocal to reciprocal links to avoid tripping the filter.
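
That follows directly from the intersection/union measure: one-way inlinks from new hosts grow the union without growing the intersection, so they dilute the ratio. A quick illustration with hypothetical numbers (not from the application):

```python
# Continuing the sketch above: 50 reciprocal partners out of 60
# distinct linking hosts gives a high concentration.
recip, one_way = 50, 10
print(recip / (recip + one_way))        # ~0.83

# Acquire 200 one-way inlinks from new hosts: the intersection is
# unchanged, the union grows, and the measured concentration collapses.
print(recip / (recip + one_way + 200))  # ~0.19
```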

cnvi

Msg#: 3824604 posted 10:22 pm on Jan 13, 2009 (gmt 0)

I own Registered Patent #7,082,470 [patft.uspto.gov] in a similar space, so I wanted to comment on what I see in this application.

The application appears to have been written by attorneys rather than by experts with knowledge and history of link exchange, reciprocal linking, and SEO.

As near as I can understand it, the application contains good news and bad news. The bad news starts with the title, "Identifying excessively reciprocal links among web entities," which implies that all reciprocal links are somehow bad.

The negatives continue until about the last third of the document, where they admit that relevant links are good and legitimate and talk about programming the algorithm to specifically identify good linking pages.

From my perspective, the most positive thing here is this: webmasters who follow the same recommendations I have been preaching here at WebmasterWorld for the past few years, obtaining relevant links through reciprocation at a slow, natural volume, will meet Yahoo!'s positive "degree of confidence" requirements. That reading is based on the statement that "the closer the value of an entity's function of interest is to the specified threshold for that function of interest, the less uncertainty there is that the value has been artificially inflated."
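
Read literally, that describes a confidence score that rises as a measured value approaches and passes a per-metric threshold. A minimal sketch of one way such a check might work (the function, the linear ramp, and the numbers are all my assumptions; the application gives no formula):

```python
def inflation_confidence(value, threshold, ramp=0.2):
    """Map a measured metric to a confidence that it was artificially
    inflated, rising as the value approaches and exceeds the threshold.

    ramp is the fraction of the threshold over which confidence climbs
    from 0 to 1 -- a hypothetical choice, not from the application.
    """
    start = threshold * (1 - ramp)
    if value <= start:
        return 0.0
    return min(1.0, (value - start) / (threshold - start))

# A host well under the threshold draws no suspicion; one at or past
# the threshold is flagged with full confidence.
print(inflation_confidence(0.30, 0.50))  # 0.0
print(inflation_confidence(0.45, 0.50))  # ~0.5
print(inflation_confidence(0.60, 0.50))  # 1.0
```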

Also, the emphasis on punishing hosts and domains that specialize in helping spammers implement their schemes is a good argument against all the full-duplex automated systems offering irrelevant links in high volume through reciprocation. The same goes for the fact that they claim a way to hunt out and punish sites buying into three- and four-way schemes.
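
The application doesn't spell out how those multi-way rings would be detected, but at the host level they are simply short directed cycles in the link graph. A minimal sketch of one way to enumerate them (the graph representation and all names here are my assumptions, not Yahoo's method):

```python
def find_link_rings(links):
    """Find directed cycles of length 3 or 4 in a host link graph.

    links: dict mapping each host to the set of hosts it links to.
    Two-way (reciprocal) pairs are already covered by the
    intersection/union measure, so this starts at three-way rings.
    Returns each ring once as a sorted tuple of hosts.
    """
    rings = set()
    for a in links:
        for b in links.get(a, ()):
            if b == a:
                continue
            for c in links.get(b, ()):
                if c in (a, b):
                    continue
                # Three-way: a -> b -> c -> a
                if a in links.get(c, ()):
                    rings.add(tuple(sorted((a, b, c))))
                # Four-way: a -> b -> c -> d -> a
                for d in links.get(c, ()):
                    if d in (a, b, c):
                        continue
                    if a in links.get(d, ()):
                        rings.add(tuple(sorted((a, b, c, d))))
    return rings

graph = {
    "a.com": {"b.com"},
    "b.com": {"c.com"},
    "c.com": {"a.com"},  # a -> b -> c -> a is a three-way ring
}
print(find_link_rings(graph))  # {('a.com', 'b.com', 'c.com')}
```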

The really bad part is that there's no verbiage validating the essential nature of valid one-way linking and relevant reciprocal linking to the overall functionality of the web. Maybe they do not include this because they don’t want to give away what really works. That has always been Google’s method for writing patent apps.

What this patent application says right at the beginning (almost in a throwaway line) is that Yahoo considers most metatags spam. Then it goes on to say that Yahoo considers most links spam. It also implies that if you have proper metatags and links, you won't be punished for them.

But nothing is said about rewards, which raises the question: if you don't base part of your ranking decision on things like keywords or links, how do you rank pages?

If you rank them only on, in Yahoo's words, a page's “visual content”, you've created a system that is so incredibly easy to spam it's ridiculous. Visit the top four or five pages for your search term, download their content, and combine the best parts of each to create your own. All you need to do is change some of the vocabulary and sentence structure to make sure it won't throw up a "duplicate" flag.

One side note: if Yahoo really implements this, their results are going to become vastly different from Google's, since Google still counts positive, relevant links obtained through reciprocation.

Such a strategy will probably push Yahoo further into the margins than it already is.
