first post here, though I've been lurking around for a long time.
My site has jumped from a PageRank of 5 to 7 lately, which I was quite happy about. BUT it doesn't really seem to mean much: when I search for my keywords I am still in the exact same positions as before (31st, 11th, 5th, depending on the keywords used).
Has anyone seen a positive improvement in their SERP positions when their PageRank increased?
Has PageRank become meaningless for your position in the SE results? Is it only helpful for users?
A high PR means that you can change your page and see the results update within a day or two. Thus being able to test different things much more efficiently.
A high PR means new content will get indexed much faster.
A high PR does count in the algo, in that inbound links still count. Not as much as it used to, but it does count.
THE TOOLBAR PR IS MEANINGLESS
Your site moved from a PR 5 to 7. This did not happen overnight like the toolbar PR update. Toolbar PR gets updated infrequently; actual PR gets updated constantly. Thus you likely had a PR of 7 for months, but it has only just been reflected in the toolbar. Do not expect to see a rank change after the toolbar PR update; it just doesn't work that way.
I have to say though that I think the backlink stuff is madness - everyone, and I mean EVERYONE, is on a mission at the moment to get thousands of backlinks via all kinds of methods. For me, a site that has thousands of backlinks doesn't really say much about the site. My personal belief is that the backlink algo will go much the same direction as the PR.........
<So generally most people think it's fairly useless, and I shouldn't worry about it then....>
Here are my assumptions :-)
Google doesn't give much weight to the PR you see on the toolbar. Let's say it's on standby status. But that doesn't mean that Google is operating without a factor counting backlinks in one way or another. It's just that we haven't been able to figure it out yet.
<My personal belief is that the backlink algo will go much the same direction as the PR.........>
I assume that Google will use the quality of backlinks in its evaluations, not the number of them. All indications point in the direction of quality, not quantity!
In the past and with all things being equal in regards to on page factors, IBL's, etc. and provided Google's algo deemed 2 similar pages of equal import and value, a PR7 page would rank above a PR6 page.
That being said, if someone has an exceptionally well written PR3 page with better (in Google's estimation) on page factors (including outgoing links) than a competing PR5 or 6 page, it is often possible for the PR3 page to rank above the higher PR page.
However, PR is in transition and being merged with TR, or TrustRank [dbpubs.stanford.edu].
If you haven't already read this paper, which was written in March 2004... do yourself a favour and do so right now. It will help you to better understand the direction in which Google and other search engines are headed.
In their conclusions, the authors Zoltan Gyongyi, Hector Garcia-Molina (both of Stanford University) and Jan Pedersen (of Yahoo! Inc.) wrote:
As the web grows in size and value, search engines play an increasingly critical role, allowing users to find information of interest. However, today's search engines are seriously threatened by malicious web spam that attempts to subvert the unbiased searching and ranking services provided by the engines. Search engines are today combating web spam with a variety of ad hoc, often proprietary techniques. We believe that our work is the first attempt at formalizing the problem and at introducing a comprehensive solution to assist in the detection of web spam. Our experimental results show that we can effectively identify a significant number of strongly reputable (non spam) pages.
TrustRank combined with PageRank may have already been implemented for upwards of a year now. I for one believe this to be the case and it could very well explain the so called "sandbox" and several other "seemingly unexplained" ranking problems some webmasters are experiencing on Google.
TrustRank focuses on IBL's from trusted as well as bad or not trusted "seed sites" humans have selected. The idea being that trusted sites will not intentionally link to bad pages (although they might do so occasionally) but that bad sites will and almost always do link to other bad pages in order to falsely inflate PR and manipulate search engine results.
The theory is that TR (a human assisted selection of trusted and untrusted web sites) when merged with PR (a computer generated algorithm based on link structure or "votes" from all web sites) will produce a qualitative set of search results.
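The propagation described above can be sketched in a few lines. This is only an illustration of the idea in the Stanford/Yahoo! paper, not Google's implementation; the graph, seed set, damping factor and iteration count below are all made up.

```python
# A minimal sketch of the TrustRank idea: PageRank-style propagation, but the
# damping "teleport" mass flows only back to hand-picked trusted seed pages,
# not uniformly to every page on the web.

def trustrank(links, trusted_seeds, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n_seeds = len(trusted_seeds)
    # Trust starts only at the human-reviewed seed pages.
    trust = {p: (1.0 / n_seeds if p in trusted_seeds else 0.0) for p in pages}
    for _ in range(iterations):
        # Teleport mass goes to the seeds only.
        new = {p: ((1 - damping) / n_seeds if p in trusted_seeds else 0.0)
               for p in pages}
        for p in pages:
            out = links[p]
            if not out:
                continue
            share = damping * trust[p] / len(out)
            for q in out:
                new[q] += share
        trust = new
    return trust

# Hypothetical four-page web: "hub" is the human-reviewed trusted seed.
web = {
    "hub":    ["good_a", "good_b"],
    "good_a": ["good_b"],
    "good_b": ["hub"],
    "spam":   ["spam"],   # the spam page only links within its own cluster
}
scores = trustrank(web, trusted_seeds={"hub"})
```

Note how the spam page, being unreachable from the trusted seed, ends up with zero trust no matter how many links it piles up within its own cluster.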
The future of TrustRank merged with PageRank
In a perfect world and provided that TR combined with PR actually works, and further assuming that spammers are successfully thwarted by the combination of the two ... those sites which naturally accrue IBL's from trusted sites, combined with naturally acquired and themed IBL's from the WWW will establish their own TR over time. This TR will in turn be propagated to other sites they link to in the same way PR has been propagated in the past.
It will be or may already be more important than ever for webmasters to clean up their act and put an end to log spamming, link farms, blog spamming and other artificial means of attaining links for the sake of artificially inflating PR and attempting to manipulate search results.
There are those who will of course continue to try to beat any system the search engines put in place, but unless they are smarter than those employed by the search engines, they will lose in the end.
That being said, it is also becoming increasingly important that webmasters learn to write for the web and assist the search engines to properly identify what their pages are about.
Title tags, page descriptions, on page content, anchor text and outgoing links are more important now than ever before.
The "Honeypot" Theory
What I like best about the whole TrustRank concept is that even those pesky scraper sites can serve a purpose in identifying "trustworthy" sites despite the fact that their own site may be and likely will be identified as "bad". The authors of this paper in their section titled "Assessing Trust" wrote:
Note that the converse to approximate isolation does not necessarily hold: spam pages can, and in fact often do link to good pages. For instance, creators of spam pages point to important good pages either to create a honey pot, or hoping that many good outlinks would boost their hub-score-based ranking
Personally, I am thrilled that TR and PR have at last been combined. It was a long time coming, but it is finally here! Quality sites will float to the top and this is as it should be! Fly-by-night operators who churn out hundreds of computer generated web sites, scraper sites propelled by greed and just plain crappy sites will soon be a thing of the past.
It's about bloody time!
What TrustRank and PageRank do not address
Although this new algorithm is a huge step in the right direction, it does not address other "tricks" routinely employed by search engine spammers.
Let's hope the SE's are soon able to thwart these abuses as well. They still have a long way to go ... but TR is a good start.
I find trustrank to be weird/cr*p maths.
They have defined points that have been hand-flagged as trusted by someone acting as an Oracle*. Having got these points, the ONLY points that are truly known, they then propagate them through the web using a PageRank system.
The PageRank process replaces the known points with each iterative result of the 'page/trust' rank calculation.
So the only truly known values in the system are thrown away and replaced with the result of an algorithm that doesn't know better than the Oracle.
What they are after is a weighted average based on the distance from a known spam page. This is what the text says they want, but what the maths does is different.
What they've got is similar to pagerank but with more limited seed values.
In PageRank, every page starts at X; in TrustRank, only the seed pages start at X.
So they're simply doing pagerank with a strong bias to reviewed/trusted sites.
So despite their discussions about spam and trust, all TrustRank does is use PageRank with a bias toward a smaller set of trusted sites.
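That contrast can be made concrete in a couple of lines: the only structural difference is the starting/teleport vector. These helper functions are hypothetical, purely to illustrate the point.

```python
def pagerank_seed(pages):
    # Classic PageRank: every page starts with an equal share of the score.
    return {p: 1.0 / len(pages) for p in pages}

def trustrank_seed(pages, seeds):
    # TrustRank: only the human-reviewed seed pages start with any score;
    # everything else begins at zero and must earn trust via links.
    return {p: (1.0 / len(seeds) if p in seeds else 0.0) for p in pages}

pages = ["a", "b", "c", "d"]
pr = pagerank_seed(pages)            # uniform: every page gets 0.25
tr = trustrank_seed(pages, {"a"})    # biased: "a" gets 1.0, the rest 0.0
```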
There are other possible problems too. Suppose Google simply grafted TrustRank onto the ranking:
Rank = f(Pagerank, TrustRank, onPageMetrics, OffPageMetrics, ....)
The TrustRank tells you nothing about authority; from the searcher's perspective they want *authority* sites on the subject, not *trusted* sites.
So it would simply skew the results to favour this 'trusted subset' of the web, whether or not that subset is authoritative.
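A sketch of what such a grafted-on scoring function might look like; the flat weighted-sum form and the weights are invented purely to illustrate the skew, not anything Google has published.

```python
def combined_rank(pagerank, trustrank, on_page, off_page,
                  weights=(0.3, 0.3, 0.2, 0.2)):
    # Hypothetical Rank = f(PageRank, TrustRank, onPageMetrics, offPageMetrics):
    # a plain weighted sum of normalised 0..1 factor scores.
    w_pr, w_tr, w_on, w_off = weights
    return (w_pr * pagerank + w_tr * trustrank
            + w_on * on_page + w_off * off_page)

# A trusted but off-topic page vs. an authoritative but untrusted one:
trusted_off_topic = combined_rank(0.4, 1.0, 0.3, 0.3)
authority_untrusted = combined_rank(0.6, 0.0, 0.8, 0.8)
```

With any significant weight on the TrustRank term, the trusted-but-off-topic page can outscore the genuinely authoritative one, which is exactly the skew being objected to here.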
Here's the confusion about search engines again:
"... As we will see in Section 6, in spite of errors like these, on a real web graph the TrustRank algorithm is still able to correctly identify a significant number of good pages."
I search for my blog, identifying spam free pages that match the same query but are not my blog won't do. I search for a product, identifying pages that are similar from companies that don't make the product is equally worthless to me.
It is more important to find things than to remove spam.
<What TrustRank and PageRank do not address>
<Cloaking used to deceive the search engines while delivering one page to the end user and another to the search engine.>
I made a suggestion to them a while ago on this (hiring Mozilla programmers suggests they may be doing it), but TrustRank assumes the Oracle flags these pages, so it does cover this.
<who owns the original content on any given site.>
It seems that there are degrees of badness after that, and as it approaches "goodness" I am curious as to how that would translate into English.
Is good/bad, as it applies to those things measured by TrustRank, always measured by attempts to fool the search engine? Is that what good/bad means? Good means that it isn't attempting to fool the engine, and bad means that it is?
(I am new to SEO and find it very interesting.)