The results on Wikipedia pages that fed my site's PR are shocking.
It's a good idea and we wholeheartedly support the fight against spam.
On the other hand, when it comes to our particular engine, we didn't need to make a snap decision here. The nofollow idea is more urgent for Google (and those with similar approaches) than for Ask because they use global popularity (PageRank) while we use the local popularity approach pioneered by Teoma. I'm sure we'll add support for the new tag at some point in the near future if it makes sense. Blogs are a great source of authoritative information, regardless of their global pop ranking, which is what we pride ourselves on finding for our users.
Seems like they have it under control?
Put the code in an href attribute.
Make sure all the code is on one line.
Open that page in your fav browser.
case Opera: right click on the link > Add link to bookmarks > tick the box 'Show on personal bar'
case IE6: drag & drop the link into the Links toolbar
case Firefox: drag & drop the link into the Bookmarks toolbar
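For the curious, here's a minimal sketch of what the body of such a bookmarklet might do; the selector and the red-outline styling are my own illustration, not code from this thread. Wrap it in javascript:(()=>{ ... })(), collapse it to one line, and that's what goes in the href (strip the <HTMLAnchorElement> type annotation if you want plain JavaScript):

```typescript
// Outline every link whose rel attribute contains "nofollow"
// and flag it in the tooltip. The styling is an arbitrary choice.
document.querySelectorAll<HTMLAnchorElement>('a[rel~="nofollow"]').forEach(a => {
  a.style.outline = "2px dashed red";
  a.title = (a.title ? a.title + " " : "") + "[nofollow]";
});
```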
The point is, it's much easier for me to be told by my app that this link request has a status of "Reciprocal Exists with nofollow Attribute".... DELETE
My link checker has saved me tons of time.
What if all of the onsite links pointing to the links page that contains YOUR link are themselves rel="nofollow"?
Your link would check out as good, but the page will not be indexed.
You just made my life much more difficult, Conrad. :) You are right. Now I have to figure out how to check all onsite links to the links page.
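A rough sketch of one way to script that check, assuming Node 18+ with a global fetch; the function name and the crude regex parsing are my own stand-ins, not anything from a real link checker. You'd run it against every crawled page on the partner's site:

```typescript
// Hypothetical helper: does `pageUrl` link to `linksPageUrl`, and if so,
// is that internal link tagged rel="nofollow"? The regexes are a crude
// stand-in for a real HTML parser.
async function onsiteLinkStatus(
  pageUrl: string,
  linksPageUrl: string,
): Promise<"followed" | "nofollowed" | "no link"> {
  const html = await (await fetch(pageUrl)).text();
  for (const tag of html.match(/<a\b[^>]*>/gi) ?? []) {
    const href = /href\s*=\s*["']([^"']+)["']/i.exec(tag)?.[1];
    if (!href) continue;
    // Resolve relative hrefs against the page they appear on.
    if (new URL(href, pageUrl).href !== new URL(linksPageUrl).href) continue;
    const rel = /rel\s*=\s*["']([^"']*)["']/i.exec(tag)?.[1] ?? "";
    return /\bnofollow\b/i.test(rel) ? "nofollowed" : "followed";
  }
  return "no link";
}
```

If no page on the site comes back "followed", the links page holding your link is effectively orphaned as far as PR flow is concerned.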
Seems that Wikipedia has fully adopted the nofollow tag and placed it on all outbound links!
On a more serious note, I'm worried about link partners that abuse the nofollow tag. It would be very easy to cloak the nofollow tag and make it nearly impossible to detect.
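One first-pass check, sketched here entirely under my own assumptions, is to fetch the page twice with different User-Agent headers and compare the rel attribute on your link. This only catches user-agent cloaking; IP-based delivery will sail right past it, which is why the cache: trick discussed further down matters:

```typescript
// Return the rel attribute of the anchor pointing at `myUrl`, as served
// to a given User-Agent. UA-based check only; IP cloaking defeats this.
async function relForUA(pageUrl: string, myUrl: string, ua: string): Promise<string> {
  const html = await (await fetch(pageUrl, { headers: { "User-Agent": ua } })).text();
  const esc = myUrl.replace(/[.*+?^${}()|[\]\\]/g, "\\$&"); // escape regex metachars
  const tag = new RegExp(`<a\\b[^>]*href\\s*=\\s*["']${esc}["'][^>]*>`, "i").exec(html)?.[0] ?? "";
  return /rel\s*=\s*["']([^"']*)["']/i.exec(tag)?.[1] ?? "";
}

// True when the bot sees a different nofollow state than a browser does.
async function looksCloaked(pageUrl: string, myUrl: string): Promise<boolean> {
  const asBrowser = /\bnofollow\b/i.test(await relForUA(pageUrl, myUrl, "Mozilla/5.0"));
  const asBot = /\bnofollow\b/i.test(await relForUA(pageUrl, myUrl, "Googlebot/2.1"));
  return asBrowser !== asBot;
}
```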
Problem: Too many webmasters are manipulating our PageRank technology by engaging in link swaps for the sole purpose of increasing their rankings. They are getting increasingly savvy at doing this.
Solution: Let's create a Prisoner's Dilemma of sorts. Let's allow webmasters to post some hidden data on their site (that competing webmasters cannot see), which allows them to stop our BOT from giving positive points to those sites that they link to. This way, when one website engages in a link swap with another, one will not really know whether the link partner is giving out a link, or just taking the inbound link. We'll even play with the backlink lookup to prevent sites from verifying the honesty of their link partners. This will increase the number of total link requests from various webmasters initially because sneaky webmasters will attempt to obtain thousands of one-way inbound links. Then, everyone will get frustrated because so many sites won't reciprocate, and bogus link swaps will die down... and links will become a better indicator of site popularity.
Maybe this is unrealistic, but when I heard about this Google/MSN/Yahoo change, I wondered if this was their first step towards ending link abuse. In the long term this would be good, because it would level the playing field and force websites to compete for inbound links by offering valuable and unique content. Bottom line: the quality of user experience goes up, search engine usage stays strong and grows, search engines make more money, advertisers get more business… market forces lead to more efficiency.
I see nothing on FF. Do I have to reboot or something? (Can't right now, too many windows open.)
Yes, you do if you're using userContent.css as a persistent solution. See this thread [webmasterworld.com] for details on two ways to do this in Firefox.
Makes clean inbounds more valuable
Most sites that do cloaking place the meta "noarchive" tag (<meta name="robots" content="noarchive">) so Google will not cache the cloaked page.
So, let's just say this:
Your script must check google for cache:URLofRecipLinkPage, then, if found, check for your link on that page. If it is found but with rel=... then flag it. If it is not found at all, then it should be rescheduled for a later check. If the page never shows up in cache (30 day limit or whatever) then they're either cloaking or the page your link is on isn't getting any internal link love. Ditch 'em.
This takes longer and sucks because now you're involving google and automated queries for something that you could previously do without involving google.
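For what it's worth, here's that decision logic as a sketch. The names and types are mine, the 30-day cutoff is from the post above, and actually fetching the cache: result is deliberately left out, since automated queries against Google are exactly the problem just mentioned:

```typescript
type Verdict = "ok" | "flag: nofollow" | "recheck later" | "ditch";

// `cachedHtml` is null when cache:URLofRecipLinkPage returned nothing;
// `firstChecked` is when this reciprocal link was first queued.
function classifyRecipLink(cachedHtml: string | null, firstChecked: Date, myUrl: string): Verdict {
  const daysWaiting = (Date.now() - firstChecked.getTime()) / 86_400_000;
  if (cachedHtml === null) {
    // Never in cache: cloaking, or the links page gets no internal link love.
    return daysWaiting > 30 ? "ditch" : "recheck later";
  }
  const esc = myUrl.replace(/[.*+?^${}()|[\]\\]/g, "\\$&"); // escape regex metachars
  const tag = new RegExp(`<a\\b[^>]*href\\s*=\\s*["']${esc}["'][^>]*>`, "i").exec(cachedHtml)?.[0];
  if (!tag) return "recheck later"; // link not on the cached copy (yet)
  return /rel\s*=\s*["'][^"']*\bnofollow\b/i.test(tag) ? "flag: nofollow" : "ok";
}
```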
I keep wanting to put my tin hat on, 'cause from the moment this was announced, the only way it makes sense is if G has some insidious S.P.E.C.T.R.E.-like plan, with a goal of:
a) more easily ID'ing spammers,
b) reducing the value of links so they could later say that they are reducing their emphasis on PR (and blame it on the widespread use of nofollow),
c) ruling the world,
d) all of the above.
Not sure how they plan to achieve option c) yet.
The biggest thing the announcement did for me was to let me know that the other two SEs also assign and weigh PageRank-like values. I guess I always knew this, but I never spent time trying to calculate it. (If this new attribute were *just* about Google, then why would the other two jump on the bandwagon as well?)