
Google SEO News and Discussion Forum

    
Monthly Algo Changes Summary - a new Google series
tedster
msg:4393292
6:12 am on Dec 2, 2011 (gmt 0)

The community here is pretty good at catching algorithm changes, but there's no way we catch them all. That's why I was happy to see Google's "Inside Search" blog begin a regular monthly series of articles that summarize the key algo changes for the previous month - and a few hints about what's coming too.

So in November, here are some of the areas that changed according to the article:

  • Related query results refinements [keep the "rare" words from the query in the results]

  • More comprehensive indexing [increasing long tail results]

  • New "parked domain" classifier [helps keep parked pages out of the SERPs]

  • More autocomplete predictions [more flexibility they say - I'll have to take notice]

  • Fresher and more complete blog search [I thought it was already like greased lightning]

    Read more details here: [insidesearch.blogspot.com...]


    mememax
    msg:4393344
    9:17 am on Dec 2, 2011 (gmt 0)

    This one seems quite interesting too:

    • Original content: We added new signals to help us make better predictions about which of two similar web pages is the original one.


    I'd like to see how they will handle this war with scrapers and non-original content sites.
    Actually, I've seen no change in my incoming visits; if anyone has news, it would be interesting to see what has really changed.

    santapaws
    msg:4393379
    9:58 am on Dec 2, 2011 (gmt 0)

    Signals? Isn't first cache the first port of call? I'd like to see them nail that one first. I'm not saying that defines the original, but if the original site had first cache by months or years and still can't outrank the scraper today, doesn't this need refining before looking for other signals?

    Andem
    msg:4393381
    10:03 am on Dec 2, 2011 (gmt 0)

    Related query results refinements: Sometimes we fetch results for queries that are similar to the actual search you type. This change makes it less likely that these results will rank highly if the original query had a rare word that was dropped in the alternate query. For example, if you are searching for [rare red widgets], you might not be as interested in a page that only mentions “red widgets.”


    It looks like this is another step backwards from their overall 'improvements' this year. How long has it taken for them to notice that users are displeased with many search results being filled up with irrelevant junk "Google thought" might be helpful? Search results have become far too fuzzy.

    New “parked domain” classifier


    Took them long enough, though to be honest I've been seeing fewer parked domains in their results since last year.

    Original content


    Since Panda 1.0, Google couldn't identify 'original content' if it were smacked upside the head with it. Pre-Panda, the algorithm was (IMO) much more effective at identifying non-original content than post-Panda. It looks like this blog post acknowledges there is indeed an issue. I would really like to see some work put into this part of the algorithm over the coming months.

    tedster
    msg:4393513
    6:19 pm on Dec 2, 2011 (gmt 0)

    Original content... I'd like to see how they will handle this war with scrapers and non-original content sites.

    During his Pubcon keynote, Matt Cutts addressed this challenge during the Q&A period. His suggestion for sites that have a big scraper problem was to use pubsubhubbub and make a "fat ping" to Google immediately when new content is published.

    I have just such a client, so we implemented PSHB within a week - and we are now seeing a REMARKABLE improvement in rankings. We also put a delay on the RSS feed and went from a full feed to a partial feed.

    Especially with so many syndicators and mash-ups adding legitimate value, Google's challenge of original authorship attribution is far from trivial. If a site's RSS feed goes live at publication time, then the scraped, syndicated or mashed-up version may well be cached earlier than the original. PSHB's fat ping capability seems to avoid that problem for the publisher.
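For readers who want to try the approach tedster describes, here is a minimal sketch of a PubSubHubbub publisher notification in Python. It assumes the public reference hub at pubsubhubbub.appspot.com; swap in whatever hub your own feed declares in its <link rel="hub"> element. The "fat" part is done by the hub, which fetches your feed and pushes the full content to subscribers; the publisher's ping itself is just a small form POST.

```python
# Sketch of a PubSubHubbub publisher ping. HUB_URL is an assumption
# (the public reference hub); use the hub your feed actually declares.
from urllib.parse import urlencode
import urllib.request

HUB_URL = "https://pubsubhubbub.appspot.com/"  # assumed hub endpoint

def build_publish_ping(topic_url):
    """Build the form-encoded body a publisher POSTs to its hub."""
    # Per the PubSubHubbub spec, a publish notification is just
    # hub.mode=publish plus the URL of the updated topic (feed).
    return urlencode({"hub.mode": "publish", "hub.url": topic_url})

def notify_hub(topic_url):
    """POST the notification; the hub then fetches the feed and
    pushes the full ('fat') content out to subscribers."""
    body = build_publish_ping(topic_url).encode("ascii")
    req = urllib.request.Request(HUB_URL, data=body, method="POST")
    return urllib.request.urlopen(req)  # hubs reply 204 No Content on success

print(build_publish_ping("http://www.example.com/feed/"))
```

Call notify_hub() from whatever hook fires when a post is published, so the ping goes out before any RSS reader or scraper sees the feed.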

    Seb7
    msg:4393700
    3:26 am on Dec 3, 2011 (gmt 0)

    Fat ping - great idea, and about time. Everyone can now stop moaning about their content being duplicated and actually do something about it.

    My_Media
    msg:4393830
    4:16 pm on Dec 3, 2011 (gmt 0)

    I think this plugin for WordPress works for PubSubHubbub? http://wordpress.org/extend/plugins/pushpress/

    proboscis
    msg:4394223
    2:37 am on Dec 5, 2011 (gmt 0)

    How do you ping Google other than by using PubSubHubbub, if you don't use RSS feeds?

    tedster
    msg:4394226
    3:03 am on Dec 5, 2011 (gmt 0)

    Pinging Google is inherently related to having a feed, proboscis - at least as far as I know. In fact, it is a function of Google's blog search.

    To set up automated pinging of Google Blog Search, create either an XML-RPC Client or a REST Client which sends requests as noted below. It doesn't matter which method you choose for notification; both are handled in the same way.

    [google.com...]
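The REST option quoted above boils down to a single GET request. Here is a sketch of building that ping URL in Python; the endpoint and parameter names (name, url, changesURL) are taken from how Google's Blog Search ping service was documented at the time, so treat them as assumptions and verify against the linked page before relying on them.

```python
# Sketch of a REST-style ping to Google Blog Search. Endpoint and
# parameter names are assumptions based on the era's documentation.
from urllib.parse import urlencode

PING_ENDPOINT = "http://blogsearch.google.com/ping"

def build_rest_ping(blog_name, blog_url, changes_url=None):
    """Build the GET URL that notifies Google Blog Search of an update."""
    params = {"name": blog_name, "url": blog_url}
    if changes_url:  # optional: the URL of the feed that changed
        params["changesURL"] = changes_url
    return PING_ENDPOINT + "?" + urlencode(params)

ping = build_rest_ping("Example Blog", "http://www.example.com/",
                       "http://www.example.com/feed/")
print(ping)
```

Fetching that URL (e.g. with urllib.request.urlopen) is all the "REST Client" amounts to; the XML-RPC route sends the same information as a weblogUpdates-style method call instead.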

    proboscis
    msg:4394227
    3:31 am on Dec 5, 2011 (gmt 0)

    I see, thanks tedster.

    What is the best way to tell Google that a web page is the original, then?

    tedster
    msg:4394237
    4:27 am on Dec 5, 2011 (gmt 0)

    The next best, you mean ;) I'd say authorship mark-up, and make sure the authors' Google profiles are rich and tied into all the social media where they have a presence.

    Then just pull out all the stops you've got - for example, update your xml sitemap immediately and re-submit it. Use social media to draw in visitors to the new page whenever you publish. All the good marketing stuff that scrapers usually don't bother doing.
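The "re-submit your sitemap immediately" step can be automated with a simple HTTP GET. This sketch uses the sitemap ping endpoint as documented in the Sitemaps protocol help pages of the time; treat the endpoint as an assumption and verify it before wiring it into a publish hook.

```python
# Sketch of pinging Google with an updated sitemap URL. The endpoint
# is an assumption based on the Sitemaps protocol docs of the era.
from urllib.parse import quote

def build_sitemap_ping(sitemap_url):
    """Build the ping URL that asks Google to re-fetch an updated sitemap."""
    # The sitemap URL rides along as a fully percent-encoded query value.
    return "http://www.google.com/ping?sitemap=" + quote(sitemap_url, safe="")

print(build_sitemap_ping("http://www.example.com/sitemap.xml"))
```

Firing this right after the sitemap file is regenerated, alongside the feed ping, gives Google one more early signal that the new URL originated with you.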

    My_Media
    msg:4394241
    4:40 am on Dec 5, 2011 (gmt 0)

    So tedster, does this plugin work the same way for WordPress?
    [wordpress.org...]

    tedster
    msg:4394247
    5:24 am on Dec 5, 2011 (gmt 0)

    It's supposed to - I haven't used it, so I can't say for sure how well it works. Any plug-in that works off the WordPress database deserves some close scrutiny before you just "lean" on it.

    The PuSHPress plug-in for WordPress has been around since Mar 2010, so I assume any major problems, at least, have been worked out (see the changelog, for example).

    proboscis
    msg:4394274
    7:28 am on Dec 5, 2011 (gmt 0)

    Oh yeah, next best. Thanks again tedster!

    Donna
    msg:4394278
    7:57 am on Dec 5, 2011 (gmt 0)

    Anyone noticing funny business as of an hour ago? I see a lot of movement on 3 of my websites (positive, finally).

    P.S. lol @ [yputube.com] - just misspelled it; this is where all the brain power of all those PhDs comes in, finally good at something ---> claiming misspelled domains :)

    All trademarks and copyrights held by respective owners. Member comments are owned by the poster.
    WebmasterWorld is a Developer Shed Community owned by Jim Boykin.
    © Webmaster World 1996-2014 all rights reserved