Not just Google, but all the major players in the search and blogging spheres. However, this is a Google initiative.
The specific method described is correct, and the rel attribute is designed to mark "untrusted links".
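For anyone who hasn't seen the markup yet: rel="nofollow" is just an extra attribute on an ordinary anchor. A minimal sketch in Python (the helper name is mine, not from any announcement):

from html import escape

# Render an anchor carrying rel="nofollow". Engines that honor the
# attribute give the target no ranking credit for this link.
def nofollow_link(url, text):
    return '<a href="%s" rel="nofollow">%s</a>' % (escape(url), escape(text))

print(nofollow_link("http://example.com/", "a commenter's link"))
# <a href="http://example.com/" rel="nofollow">a commenter&#x27;s link</a>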
if that is true, whoa... and it is welcome, not because of comment spam, but because it's a first, small but hopefully important step toward consolidating the industry :)
[added] posted before links.. didn't mean to sound like I doubted you ;))[/added]
(confirms that it was Google's idea and that MSN Spaces will be using it)
Google announcement (not live at the time of posting):
(remove space between "google" and "blog" to get the link to work)
Clearly, this system is going live by default in many major blogging applications: in future versions and updates of downloadable products such as Movable Type or WordPress, and inserted automatically into hosted solutions such as MSN Spaces and TypePad. But after that, it may very well spread into general CMSs, wikis, forums...
What will the SEs do then? Great swathes of nofollow links produced by default by a thousand CMSs. If they truly respect it and don't follow the link at all, then spidering will be severely hampered. If they follow the link but diminish its value in terms of PR or other such considerations, then PR is no longer a purely mathematical calculation, but rather something that can be controlled (and therefore manipulated) by webmasters.
This attribute breaks the web, because it breaks the value of links.
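To make the PR point concrete, here is a toy PageRank over a hand-built link graph, assuming the simplest treatment an engine might choose: nofollow edges are dropped before the iteration. Nothing here reflects what Google actually does; it just shows that whoever controls the attribute controls the inputs to the calculation.

DAMPING = 0.85

def pagerank(graph, iterations=50):
    # graph maps page -> [(target, is_nofollow), ...]
    # Keep only the edges an engine honoring nofollow would count.
    counted = {p: [t for t, nf in links if not nf] for p, links in graph.items()}
    pages = list(graph)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1 - DAMPING) / n for p in pages}
        for p in pages:
            outs = counted[p]
            share = DAMPING * rank[p] / (len(outs) or n)
            for q in (outs or pages):   # dangling pages spread rank evenly
                new[q] += share
        rank = new
    return rank

blog = {
    "post": [("spammer.example", True), ("friend.example", False)],
    "spammer.example": [("post", False)],
    "friend.example": [("post", False)],
}
for page, score in sorted(pagerank(blog).items()):
    print(page, round(score, 3))
# spammer.example keeps only the baseline share: its link is still on
# the page, but it no longer counts as a vote.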
<added>From the Google Blog announcement (now live):
From now on, when Google sees the attribute (rel="nofollow") on hyperlinks, those links won't get any credit when we rank websites in our search results.
Q: Is this a blog-only change?
A: No. We think any piece of software that allows others to add links to an author's site (including guestbooks, visitor stats, or referrer lists) can use this attribute.
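In practice that Q&A boils down to: before rendering anything a visitor submitted, force the attribute onto every anchor. A crude sketch; a real implementation would use an HTML parser rather than a regex, and the function name is invented:

import re

ANCHOR = re.compile(r'<a\s+([^>]*?)>', re.IGNORECASE)

def force_nofollow(snippet):
    # Rewrite every anchor in untrusted markup to carry rel="nofollow".
    def fix(match):
        attrs = match.group(1)
        # Strip any rel the visitor supplied, then add our own.
        attrs = re.sub(r'\brel\s*=\s*("[^"]*"|\'[^\']*\'|\S+)', '',
                       attrs, flags=re.IGNORECASE).strip()
        return '<a %s rel="nofollow">' % attrs if attrs else '<a rel="nofollow">'
    return ANCHOR.sub(fix, snippet)

print(force_nofollow('Nice post! <a href="http://pills.example/">cheap pills</a>'))
# Nice post! <a href="http://pills.example/" rel="nofollow">cheap pills</a>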
I really don't buy the "it's admitting they can't fix it" argument, though you could certainly take it that way. Why can't this be another tool for them to use in addition to trying to deal with it algorithmically?
There sure are problems with just ignoring comments in a blanket fashion, though that really was the only option up till now. This might allow them to follow links that they would have otherwise had to ignore. They can still ignore comments in blogs that do not implement this.
It seems the only real differences will be that some site owners will feel they can do something to fight back, and that a small percentage of comment spammers will avoid the sites that implement this kludge.
I don't see it helping the search engines all that much.
also, i'd already thought about how to add it or not (depending on user status) to my blog hybrid software. ;) i don't think i'm gonna allow anonymous comments for a while, though, if ever. even if the links don't count, i'm sure there are clueless scriptkiddie SEOs out there with software running on their grandmother's computer. ;)
we'll see, though. i do like the transparency, or at least the attempt at it...
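a sketch of that "depending on user status" idea, for what it's worth. the trust rule here (registered plus a handful of approved comments) is invented for illustration:

def comment_link_rel(user):
    # Return the rel attribute (or '') for a commenter's links.
    trusted = user.get("registered") and user.get("approved_comments", 0) >= 5
    return "" if trusted else ' rel="nofollow"'

def render_link(user, url, text):
    return '<a href="%s"%s>%s</a>' % (url, comment_link_rel(user), text)

print(render_link({"registered": False}, "http://example.com/", "drive-by link"))
print(render_link({"registered": True, "approved_comments": 12},
                  "http://example.com/", "regular's link"))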
Brad Fitzpatrick - LiveJournal
Dave Winer - Scripting News
Anil Dash - Six Apart <-- This is TypePad and Movable Type and LiveJournal
Steve Jenson - Blogger
Matt Mullenweg - WordPress
Stewart Butterfield - Flickr
Anthony Batt - Buzznet
David Czarnecki - blojsom
Rael Dornfest - Blosxom
I believe we're adding MSN Spaces to that list ASAP as well. I've been amazed at how many people are cooperating. This is gaining momentum very quickly.
Full post at [google.com...] blog/2005/01/preventing-comment-spam.html
(remove the space between google and blog to get the url)
Something new came along: a new way to handle it. The reality is that user-inserted links on a page are different from owner-inserted links. Most folks already knew that. The engines' algorithms were intended to be built around valuing owner-inserted links, so this change really makes no statement on the algorithms. It merely makes a statement that the engines were staggeringly slow in implementing an obvious step.
i'm sure others are doing it
i'm seriously thinking about compiling a master list of all the domains from these clowns, but it's so easy for them to get-a-new-domain.com and start all over again.
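the master-list idea is easy enough to sketch, and the sketch shows exactly why it's weak. the blocked domains below are made up:

from urllib.parse import urlparse

# Hypothetical blocklist of spammer domains harvested from comment logs.
BLOCKED = {"pills.example", "casino.example"}

def is_blocked(url):
    host = (urlparse(url).hostname or "").lower()
    # Catch the domain itself and any subdomain of it.
    return any(host == d or host.endswith("." + d) for d in BLOCKED)

print(is_blocked("http://www.pills.example/buy"))      # True
print(is_blocked("http://get-a-new-domain.example/"))  # False: trivially evaded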
Of course, I wouldn't expect the search engines to lose sleep over this "unintended" consequence.
Another effect, of course, will be that publishers (i.e., website owners and operators) will have much, much more power in deciding what is and what is not valuable on the web (in terms of search results).
In order to implement this with a sense of achievement and optimism, G and the other SEs would already need to have thought three to thirteen steps ahead.
Black hat site operators must be drooling at this. Either that or I must be missing something.
I take it back. You need not even be black hat to see the opportunities.
If this spec works, and works properly, it allows webmasters to create pages that control precisely which onsite links count.
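Something like this, presumably: a navigation template that withholds the attribute from the pages you want to rank and applies it to boilerplate. Purely speculative, since it assumes the engines will treat internal nofollow links the same way as comment links:

PAGES = [
    ("/products/widget", "Widget", False),  # a page worth ranking: plain link
    ("/privacy", "Privacy policy", True),   # boilerplate: nofollow it
    ("/login", "Log in", True),
]

def nav_html(pages):
    parts = []
    for url, text, sculpted in pages:
        rel = ' rel="nofollow"' if sculpted else ''
        parts.append('<a href="%s"%s>%s</a>' % (url, rel, text))
    return "\n".join(parts)

print(nav_html(PAGES))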
Search engines are supposed to be tools to help us find content. I would expect them to change as the web does. I wouldn't expect search engines to want to reshape the way the web is forming to suit their own goals.
For example, with Google a link from one site to another is a vote. True, but only if it comes from the sort of site that Google approves of. The rest of the web doesn't matter?
also, as shri points out, this tag can be used to control internal linkage as well as external linkage. So now it's very easy to change your reciprocal links to one-way links?
I see a whole new can of worms about to be opened.