Forum Moderators: Robert Charlton & goodroi


Google Starts Supporting Authorship Markup

         

engine

4:15 pm on Jun 7, 2011 (gmt 0)

WebmasterWorld Administrator 10+ Year Member Top Contributors Of The Month



Google Starts Supporting Authorship Markup [googlewebmastercentral.blogspot.com]
Today we're beginning to support authorship markup -- a way to connect authors with their content on the web. We are experimenting with using this data to help people find content from great authors in our search results.

We now support markup that enables websites to publicly link within their site from content to author pages. For example, if an author at The New York Times has written dozens of articles, using this markup, the webmaster can connect these articles with a New York Times author page. An author page describes and identifies the author, and can include things like the author’s bio, photo, articles and other links.

If you run a website with authored content, you’ll want to learn about authorship markup in our help center. The markup uses existing standards such as HTML5 (rel=”author”) and XFN (rel=”me”) to enable search engines and other web services to identify works by the same author across the web. If you're already doing structured data markup using microdata from schema.org, we'll interpret that authorship information as well.
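The two rel values described in the announcement can be sketched in markup like this (a minimal illustration; the author name and URLs here are invented, and the authoritative details were in Google's help center):

```html
<!-- On an article page: the byline links to the site's author page -->
<p>By <a rel="author" href="http://example.com/authors/jane-doe">Jane Doe</a></p>

<!-- On the author page: rel="me" (XFN) ties together the same
     person's profiles elsewhere on the web -->
<a rel="me" href="http://janedoe.example.org/">My personal blog</a>
```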

badbadmonkey

1:05 pm on Jun 8, 2011 (gmt 0)

10+ Year Member



Wheel, I think you're being a little paranoid. It's just a link semantic, a piece of metadata. From Google:

When Google has information about who wrote a piece of content on the web, we may look at it as a signal to help us determine the relevance of that page to a user’s query.


So we may expect it to contribute to SERP rankings. If you host an article by an author (for now with the author page on your own site, though as above this will not permanently be the case), your page for the article may rank more highly if Google can tell for sure who the author is, particularly, I guess, if the searcher is looking for that person. A search for "john smith on widgets" would therefore favor a page loaded with the keyword 'widgets' and marked as authored by John Smith... whereas without that, the SERPs might look a bit more like a list of pages concerning both widgets and John Smith, see?

wheel

1:19 pm on Jun 8, 2011 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



I may look paranoid. I disagree that I am, though. I'm realistic.

You would be better advised to look at the nofollow debacle than the rel=canonical tag for comparison. Remember? nofollow was going to stop spam? How'd that work out? No change in the webmaster's workload, and there's still spam. But now Google's got a club to use on webmasters that has nothing to do with spam.

This does nothing for webmasters outside the NYTimes. Nothing. Why you would even consider it is beyond me. Somebody please explain how you look at this and think "this gives me more money at the end of the day".

Badbadmonkey, your entire second paragraph can be dismissed by points I've already made in this thread. If I want to attribute an article, I'll use a link.

What you're going to see is the scrapers eat you freakin' alive over this. That's going to be the fallout.

Right now, it's onsite. And by that, it means it's useless.

As soon as they take this offsite, your unique content is going to be screwed. 10,000 scrapers are going to republish your article and attribute it to someone else. How's that going to work out for your ranking thesis?

And if Google fixes that problem, the scrapers will find some other way to leverage this.

In fact, the more I think about this, the more I think it's going to screw me as an author even if I don't use it - because it's going to be used against me even if I don't implement it myself.

The problems are clear. The benefits are nonexistent. Google's track record on ownership and attribution is beyond dismal. And there's a million scrapers out there with more time and smarts than either me or Google, waiting to use this to outrank you.

The community should be making efforts to prevent the implementation of this tag, not trying to figure out how it can help you (which it doesn't).

badbadmonkey

1:23 pm on Jun 8, 2011 (gmt 0)

10+ Year Member



The rel="me" allows greater certainty of identification and solves the issue of duplicate author pages. Would be interesting to see if it gets implemented on YouTube for example - a way to link to external site author pages for an author's YouTube account? Videos there already have rel=author links to the YouTube user page, but no rel=me option to link to an external bio page that I can see.

My question is can it be done with <link> instead of <a> if you don't want to have links on the page?

Also, how does the old <meta name="author" content="name"> figure into this, for those of us pedants who have maintained it? Discard it or can it be extended similarly?

badbadmonkey

1:38 pm on Jun 8, 2011 (gmt 0)

10+ Year Member



Badbadmonkey, your entire second paragraph can be dismissed by points I've already made in this thread. If I want to attribute an article, I'll use a link.

I don't see why. I also think that your second sentence above defeats your scraper complaint - if it's of no benefit to Google or SERPs then it can't be of benefit to scrapers either. Insofar as scrapers are already a problem, this won't change anything - it's neither a solution* nor a worsening, only a new piece of semantic linking.

I like the idea of searching for authored work and not getting a pile of results concerning or reviewing the author rather than actually written by him. That's useful.
* Although it's easy to imagine how it could help, particularly if Google can come to trust one domain or another as the recognized source of a given author, maybe where the main author page is located - then first publication and a rel=author claim on the new article might be able to go some way to fending off the pirates. I concede I may be being hopelessly optimistic here though.

wheel

1:43 pm on Jun 8, 2011 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



lol. I did a search on 'duplicate author pages' on Google. One real result for that search, and it had nothing to do with Google.

I know I'm beating a dead horse. But you're trying to find ways to implement something here just because Google handed it to you. I'm saying you're looking at a baited fish hook. You're looking at it wondering how you can use it to your advantage - even though there's no real use for this. I'm right to be suspicious on this one.

In fact, the arguments we're hearing are almost clones of the comments being made pro-nofollow back in the day.

netmeg

2:11 pm on Jun 8, 2011 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



I'm pretty much of the same opinion as wheel. I can't see the benefit. Will shelve until they do whatever it is they're going to do in the next iteration and then look at it again.

Besides, whenever Google says "Look! Shiny!" over here, that's a strong signal you need to look over *there* to see what it is they're really doing.

badbadmonkey

2:22 pm on Jun 8, 2011 (gmt 0)

10+ Year Member



I did a search on 'duplicate author pages' on Google. One real result for that search, and it had nothing to do with Google.

I didn't say it was a problem now, but it's an obvious 'gotcha' of rel=author.

the arguments we're hearing are almost clones of the comments being made pro-nofollow back in the day.

Not from me; what I'm saying reduces essentially to: I'm happy for a possibly useful (w.r.t. SERPs) additional semantic tag / meta mark-up.

I don't really see the relevance to nofollow, which I for one see as short sighted and negative; its only significant effect on the web in general has been to allow Wikipedia to dominate the SERPs for topics without giving any link credit to their references and external links.

coachm

7:42 pm on Jun 8, 2011 (gmt 0)

10+ Year Member Top Contributors Of The Month



Since I started a thread elsewhere on how Google's use of social media signals removes incentive to create great content, I should congratulate google on this, PROVIDED they eventually extend this properly.

When it goes cross-domain, here's the benefit for those of us who are "authorities" in the real world, particularly authors.

If/when this gets trucked out to places like Amazon (and it would be simple for Amazon to add a field in its author control panel for us to add this link), it unifies the articles I have on my site(s) with the many books I've published, and it can then be used as an important signal to indicate "this guy is published, so he's gotta have something going." If authority influences SERPs, this has the potential of adding a semi-legitimate signal about the authority of web articles.

And that means counteracting the "popularity" = "authority" coming from social media.

Obviously I'm biased, but I figure I should get some brownie points for having published so prolifically in book form.

IF.

wheel

7:52 pm on Jun 8, 2011 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



I don't really see the relevance to nofollow, which I for one see as short sighted and negative;

The relevance is that all the webmasters snatched it up and thought up all the new ways they could use it to their advantage. Claims of potential problems were ignored. It was only 'how can I use this on my site, to my advantage'. Just like you're doing now :).

And unlike this time, at least there was a valid reason for the nofollow tag - spam. And like I said, how's that workin' out?

viggen

8:11 pm on Jun 8, 2011 (gmt 0)

10+ Year Member



...this guy is published so he's gotta have "something" going.


...isn't it very sad that Google needs a new markup to figure that out...?

tedster

8:12 pm on Jun 8, 2011 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



Schema.org has been updated to remove the INCORRECT information that schema.org microdata and RDFa should not be mixed on the page. They absolutely can be mixed.

@bsletten

Kavi: "It was an honest mistake to tell people not to mix RDFa and schema.org microdata. We have removed that from the FAQ."

[twitter.com...]
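As a rough sketch of how authorship might look in schema.org microdata (property names follow the schema.org vocabulary; the article title, author name, and URL are made up for illustration):

```html
<article itemscope itemtype="http://schema.org/Article">
  <h1 itemprop="name">Widget Maintenance Basics</h1>
  <!-- The author is a nested Person item -->
  <span itemprop="author" itemscope itemtype="http://schema.org/Person">
    By <a itemprop="url" rel="author" href="http://example.com/authors/jane-doe">
      <span itemprop="name">Jane Doe</span>
    </a>
  </span>
</article>
```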

coachm

8:42 pm on Jun 8, 2011 (gmt 0)

10+ Year Member Top Contributors Of The Month



wheel
The relevance is that all the webmasters snatched it up and thought up all the new ways they could use it to their advantage. Claims of potential problems were ignored. it was only 'how can I use this on my site, to my advantage'. Just like you're doing now :).


I'm not understanding why this is a problem? Why wouldn't a webmaster try to figure out how to use a new tag, given that he or she has no control over Google?

...if for no other reason than there are millions of other webmasters trying to leverage tags.

mhansen

2:29 pm on Jun 9, 2011 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



Circa 2003: Don't do anything special for search engines...

2011 ... unless it helps Google improve its own product.

I thought I read somewhere (I am trying to find the link) that Matt Cutts said something to the effect of: "You can imagine a search result where the author's picture may show next to the article title"

Found it: [outspokenmedia.com...]

Danny: Okay, say I do that. I link my byline to my profile and now you understand that this is written by me, Danny. [Matt nods]. In the future, I can write on my personal blog and get credit for it. It sounds like you’re establishing personal page Rank.


Matt: That’s the hope – AuthorRank. We’ll see what the traction is and then over time we’ll try to annotate it in the search results with a picture of Danny. Or maybe a panda…


(Bolded by Me)

So... do we really think Google will use this to help US, the webmasters? Or will they eventually just scrape all our "Author" tagged content and show a Google page, like they do with Google Places? After all, it's just a directory of authors at that point.

Instead of "Places" we will ultimately end up with "Google People", which replaces the white pages. Hey, if you have a Google profile, we'll even let you set up a special page on our site to tell people where you live, when you work, what your home hours are, etc etc.

wheel

2:49 pm on Jun 9, 2011 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



Why wouldn't a webmaster try to figure out how to use a new tag, given that he or she has no control over Google?

You do have control over Google. But only as a group.

The group put Google on top. The group allowed Google to use nofollow. The group can decide NOT to implement this, leaving Google with nothing. C'mon fellow sheeple!

We've already got a good list of reasons going why you don't want to do this:
- Google has a strong history of ignoring webmasters.
- Google has a strong history of doing bad things with content ownership (see panda).
- Google has a poor history of being concerned about copyright infringement (see book scanning, blogger.com).
- Google has proven history of subverting tags for their own purposes, to the disadvantage of the webmaster (nofollow)

So how can Google screw you over on this?
- as mhansen notes, they could use this to keep traffic on their properties rather than yours (again, strong past evidence that Google wants to do this).
- scrapers figure out how to subvert your listings. Scrapers are going to be intensely interested in this. It's bad enough trying to stay on top of the content theft, never mind if they manage to take your rankings.

Start building a list of things that can go wrong with this tag. It's a lot longer than the things that can be beneficial for the average webmaster.

smithaa02

2:53 am on Jun 13, 2011 (gmt 0)

10+ Year Member



So I'm interested in at least trying this... how exactly does it work, then?

I checked mattcutts.com as a reference and he uses...

<span class="author vcard"><a rel="author" href="http://www.mattcutts.com/" class="url fn">Matt Cutts</a></span>


...after each post intro on his homepage and then again on the main article page.

I assume the vcard attribute is not relevant?

Matt's "author page" is simply his homepage. ...that seems easy. So why create separate author pages if you can simply attribute authorship to your homepage?

So if a scraper copies my content (including my author tag) I can see how google could catch them (since they were foolish enough to copy my author tag linking to my site).

What if the scraper just copies part of my content and includes on, say, scraperx.com/fudgedpage.html a rel=author link back to scraperx.com. Google now has a dilemma...same content on both sites but both sites claim unique ownership with this new tag. Am I totally missing something or is this a huge flaw in google's plan? What is to stop every site on the web from just putting an authored by theirsitename.com in their template footer (with the author rel tag linking to the homepage) and then aren't we back to square one?

indyank

3:47 am on Jun 13, 2011 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



Am I totally missing something or is this a huge flaw in google's plan? What is to stop every site on the web from just putting an authored by theirsitename.com in their template footer (with the author rel tag linking to the homepage) and then aren't we back to square one?


absolutely... you are missing it totally. Google doesn't care who the content owner or the author is. It is just to help themselves set up a property, as explained above.

You have cited a very good example from one of Google's very own bosses. Look at the way he has set it up (or someone from Google has set it up on his behalf) for author attribution, and you will immediately realize whether Google is really serious about the real authors and webmasters, or only about themselves.