Google Rewrites Quality Guidelines
netmeg

Msg#: 4686381 posted 3:06 pm on Jul 9, 2014 (gmt 0)

I'm going to post this link here, because it's from a credible source (WebmasterWorld user jensense) and because it touches on some of the things we've been discussing lately - particularly the Knowledge Graph. Interesting, and worth a read.

[thesempost.com...]

Here's the analysis on "Supplementary Content"

[thesempost.com...]

[edited by: brotherhood_of_LAN at 2:35 pm (utc) on Jul 11, 2014]
[edit reason] Added extra link [/edit]

 

webcentric

Msg#: 4686381 posted 4:09 pm on Jul 9, 2014 (gmt 0)

This is a good read. Appreciate the post, netmeg.

aakk9999

Msg#: 4686381 posted 4:57 pm on Jul 9, 2014 (gmt 0)

Google wants to see a wide variety of supplementary content on a page, and are putting a greater emphasis on it as being an important and integral part of a page that is worthy of a High or Very High rating.
(...)
Essentially, if the secondary content is unhelpful or distracting, that’s a Low quality rating.

I haven't read the newest guidelines, but going by what the article says, a site cannot be high quality if it doesn't have good quality supplementary content.

I am wondering how Google can distinguish whether content is part of the main content or supplementary content. By positioning? By being repeated in the same format on many pages? By something else?

Suppose two sites have exactly the same information on their pages, but one organises it in such a way that the content is "supplementary", whereas the other organises it so that it appears to be part of the main content. I wonder whether, purely because of the page organisation, one site would be worthy of a High rating and the other would not.
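One plausible repetition heuristic, sketched below in Python. Everything here (the function, the threshold, the idea that repetition alone is the signal) is an illustrative assumption, not anything taken from the leaked guidelines:

```python
# A minimal sketch: text blocks whose normalized form repeats across many
# pages of a site are probably supplementary (navigation, "related posts",
# footer widgets); blocks unique to a page are probably main content.
from collections import Counter

def classify_blocks(pages, repeat_threshold=0.5):
    """pages: one list of text blocks per crawled page.
    Returns {block_text: 'supplementary' or 'main'}."""
    normalize = lambda text: " ".join(text.lower().split())
    counts = Counter()
    for blocks in pages:
        for block in {normalize(b) for b in blocks}:  # count once per page
            counts[block] += 1

    labels = {}
    for blocks in pages:
        for b in blocks:
            share = counts[normalize(b)] / len(pages)  # fraction of pages
            labels[b] = "supplementary" if share > repeat_threshold else "main"
    return labels

pages = [
    ["Related posts: widgets", "How to choose a widget ..."],
    ["Related posts: widgets", "Widget care and feeding ..."],
]
print(classify_blocks(pages))
# {'Related posts: widgets': 'supplementary', 'How to choose a widget ...': 'main',
#  'Widget care and feeding ...': 'main'}
```

A real system would presumably weigh this against positioning, markup and link density too, which is exactly the ambiguity being asked about here.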

Planet13

Msg#: 4686381 posted 5:16 pm on Jul 9, 2014 (gmt 0)

Thanks for the link, netmeg!

~~~~

Welcome to the new Negative SEO:

"Most importantly, Google stresses that a webpage cannot be given a High rating if the site has a negative reputation."

netmeg

Msg#: 4686381 posted 5:23 pm on Jul 9, 2014 (gmt 0)

I am wondering how Google can distinguish whether content is part of the main content or supplementary content. By positioning? By being repeated in the same format on many pages? By something else?


Maybe all those things. Plus whether or not the supplementary content directly relates to the navigation, page title, H1 tags (or anything else you use to try to determine the focus of a page).

I'm going to start looking fresh at some of my ecommerce clients' product pages with these guidelines in mind, to see how they measure up. I'm already planning how to add supplementary content that will showcase and enhance "expertise", since that seems to be just as important as "useful".

Catalyst

Msg#: 4686381 posted 5:41 pm on Jul 9, 2014 (gmt 0)

Thanks for sharing netmeg. These are important to read and always yield good insights.

Has anyone been able to find a link to the actual doc? (I asked Jen, but comment is awaiting moderation.)

I want to check to see if there is anything that specifically relates to Local search or small biz websites.

Linda

netmeg

Msg#: 4686381 posted 5:49 pm on Jul 9, 2014 (gmt 0)

When these things are leaked, they tend to get passed around under the table. Pretty sure if Jen had a link she felt comfortable with sharing, she'd have done so.

Jenstar

Msg#: 4686381 posted 9:20 pm on Jul 9, 2014 (gmt 0)

I have been sworn to secrecy unfortunately, and I would like to get the next version of it too, so a leak won't come from me :)

Supplementary content means anything on the page that isn't the main content (i.e., not the main article on the page). The big thing I got out of it is that they want to see things that are helpful to a user and add to the experience, like "Related posts" - fortunately, if your site is WordPress-based, there are plugins that can handle that nicely. It can also be things like tools on the page (as long as they actually work), such as calculators or recipe-specific tools.

I am doing some posts that drill down into the specific parts I feel are most important for SEOs, and supplementary content is one of them - and per Catalyst's question, I will probably look at Local and small-business-specific stuff as well.

lucy24

Msg#: 4686381 posted 10:22 pm on Jul 9, 2014 (gmt 0)

Google is now putting a high emphasis on sites that are considered to have a high level of expertise, authoritativeness or trustworthiness.

the idea of E-A-T, which is a website’s “expertise, authoritativeness and trustworthiness”

"and" != "or"
Which is it?

:: wandering off to investigate Raleway font ::

netmeg

Msg#: 4686381 posted 10:25 pm on Jul 9, 2014 (gmt 0)

Both!

I'm thinking for my ecommerce clients, more info on how to choose the product, and how to use the product. I've been pushing them to do that anyway; this will just give me more ammunition to get them off their butts.

Sally Stitts

Msg#: 4686381 posted 11:22 pm on Jul 9, 2014 (gmt 0)

EAT - What are the EAT factors?

EXPERTISE
- Fame? (Everyone in his area knows who the person is?)
- Accomplishments? (I did this, I did that. Braggadocio?)
- Published works? (Piled Higher and Deeper?)
- Large body of work? (Can't dazzle them with brilliance, then baffle them with BS?)
- Uncontroversial? (Universally respected - never sticks their neck out?)

AUTHORITATIVENESS
- IQ Test results? Mensa membership?
- College?
- Degree?
- GPA?
- Past job titles?
- Current Employer?
- Current job title?

TRUSTWORTHINESS
- Web Rep? (No John Doe $ucks pages?)
- Email address? (Is a graphic still OK to avoid harvesters? See the sketch after this list.)
- Contact page link on every page? (pound accessibility into them?)
- Physical address required? (or just snail-mail address, PO Box)
- Telephone number? (Is a graphic still OK to avoid harvesters?)
- Privacy policy?
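
On the "graphic to avoid harvesters" question: a common text-based alternative is entity-encoding the address, which keeps it clickable and readable in a browser while deterring naive scrapers. A minimal sketch, purely illustrative - it will not stop a determined harvester:

```python
def entity_encode(email):
    """Encode every character of an address as a decimal HTML entity."""
    return "".join(f"&#{ord(c)};" for c in email)

# Usable inside a mailto: link or as page text; browsers render it normally.
print(entity_encode("info@example.com"))
# &#105;&#110;&#102;&#111;&#64;&#101;&#120;&#97;&#109;&#112;&#108;&#101;&#46;&#99;&#111;&#109;
```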

It occurs to me that a person could register high in EAT, and still be a complete ______. Fill this in with your own word (jerk, clown, liar, etc.).

These are just my first thoughts.
WHAT would YOU add?
WHAT would YOU remove?

We have all got to try to figure this stuff out.
If you want to compete, then you MUST conform to all kinds of .... stuff. But first, you must try to figure out what the stuff is!
And how it is weighted. That's the hard part. Consensus should arise, eventually.

And then, you may have the misfortune of encountering a "Quality Rater" who is beyond clueless, with his/her own agenda, prejudices, and less than objective demeanor. Or maybe, just a bad day. You lose. Dead meat. Kiss your income good-bye.

What a game.

Saffron



 
Msg#: 4686381 posted 12:50 am on Jul 10, 2014 (gmt 0)

Good, maybe this will finally put an end to the ridiculous Yahoo answers that I keep seeing in search results.

jmccormac

Msg#: 4686381 posted 3:20 am on Jul 10, 2014 (gmt 0)

Three immediate questions spring to mind:

Just what makes those people in Google who come up with these guidelines, to reuse Lee's old expression, "experts" on quality?

Why should we, as web developers, consider a bunch of Wikipedia scrapers who probably never built a website of worth to be expert?

What have the twiddlers broken this time in Google's algorithm that they have to launch a new PR offensive about new "guidelines"?

And the bonus question: does this have anything to do with Matt Cutts taking an extended break?

Regards...jmcc

webcentric

Msg#: 4686381 posted 3:42 am on Jul 10, 2014 (gmt 0)

And the bonus question: does this have anything to do with Matt Cutts taking an extended break?


It took a lot of effort to not see that question coming? ;)

Jenstar

Msg#: 4686381 posted 8:40 am on Jul 10, 2014 (gmt 0)

And the bonus question: does this have anything to do with Matt Cutts taking an extended break?


A few people asked me that. I had these guidelines before he announced his leave, but it took a huge amount of time to cross-reference the old and new versions, since the document was entirely rewritten, so the post didn't go up until now.

Why should we, as web developers, consider a bunch of Wikipedia scrapers who probably never built a website of worth to be expert?


There is a huge section on how a rater can determine whether something is copied from elsewhere and how to know what came first. But it isn't new in the new version, so I didn't touch on it much.
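
For the "copied from elsewhere" part, one standard flagging technique is w-shingling: compare overlapping word windows from two documents, and a high Jaccard similarity suggests copying. A sketch under my own assumptions - the guidelines describe a manual process for raters, and this only flags similarity, not which copy came first:

```python
# w-shingling: split text into overlapping word windows, then measure
# how much of each document's shingle set the other shares.
def shingles(text, w=4):
    words = text.lower().split()
    return {tuple(words[i:i + w]) for i in range(max(1, len(words) - w + 1))}

def jaccard(a, b):
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb) if (sa | sb) else 0.0

original = "the quick brown fox jumps over the lazy dog every single day"
scraped = "the quick brown fox jumps over the lazy dog every day"
print(f"similarity: {jaccard(original, scraped):.2f}")  # high -> likely copied
```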

What have the twiddlers broken this time in Google's algorithm that they have to launch a new PR offensive about new "guidelines"?


The guidelines aren't publicly available, so it isn't a PR offensive. Google has released a copy of the guidelines, but it was a slimmed-down and much older version - version 1, while the current one is version 5. You can download version 1 from Google here: [static.googleusercontent.com...]

Only those who are quality raters get them, but they usually get leaked some time later. This one just happens to be a lot more recent than we usually see, and it had some pretty huge changes.

If you want to compete, then you MUST conform to all kinds of .... stuff.


I don't see it as conforming; it's more about doing what you can to show that your site should be considered authoritative and trustworthy. These are guidelines for what Google is having quality raters look for. No one has to follow them, and raters don't directly influence the rankings, but they do provide good insight into where Google is likely headed with the algo.

:: wandering off to investigate Raleway font ::

It came with the theme - I also wondered what it was when I saw it in the CSS, LOL. Ironically, it's a Google font.

philgames



 
Msg#: 4686381 posted 10:46 am on Jul 10, 2014 (gmt 0)

EAT - What are the EAT factors?

AUTHORITATIVENESS
- IQ Test results? Mensa membership?
- College?
- Degree?
- GPA?
- Past job titles?
- Current Employer?
- Current job title?




I see loads of payday loan and spam sites using templates with really legitimate-looking contact-us pages, privacy policies, and whatever else.


"I have 20 degrees and an IQ of 200, and over 60 warehouses all around the world." Thanks for your 2-minute visit, Mr. Rater.

jmccormac

Msg#: 4686381 posted 11:30 am on Jul 10, 2014 (gmt 0)

A few people asked me that. I had these guidelines before he announced his leave, but it took a huge amount of time to cross-reference the old and new versions, since the document was entirely rewritten, so the post didn't go up until now.

The timing is still interesting, though, even if they are apparently two unrelated events.

There is a huge section on how a rater can determine whether something is copied from elsewhere and how to know what came first. But it isn't new in the new version, so I didn't touch on it much.

Ironically, Google's "Knowledge Graph/Wikipedia scraper" could be considered copied content under the old guidelines. I'm just reading through the section on parked domains, and it seems rather clueless of Google that it cannot tell a PPC landing page/parked domain from an active one and even has to explain this in its guidelines. It is a trivial thing to differentiate most PPC landers from active content; I do it automatically with the 110K-website usage surveys and the full TLD surveys of various new gTLDs. But then the processes I use are probably different to Google's GIGO approach of spidering everything and hoping the algorithm will give it meaning.
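
For illustration, a rough sketch of that kind of automatic check. The signature lists below are assumptions made up for the example - not jmcc's actual process, and nowhere near a complete catalogue of parking services:

```python
# Heuristic parked-domain check: parking-service nameservers, or outbound
# links carrying PPC click-tracking URL patterns, suggest a PPC lander.
PARKING_NS_HINTS = ("sedoparking.com", "parkingcrew.net", "bodis.com")
LANDER_URL_HINTS = ("/caf/?", "ses=", "landing.cgi")

def looks_parked(nameservers, outbound_urls):
    if any(ns.lower().rstrip(".").endswith(PARKING_NS_HINTS) for ns in nameservers):
        return True
    return any(hint in url for url in outbound_urls for hint in LANDER_URL_HINTS)

print(looks_parked(["ns1.sedoparking.com"], []))                # True
print(looks_parked(["ns1.example-host.com"], ["/about.html"]))  # False
```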

The DMOZ thing in the old guidelines is also interesting. It was used by many web directories as a backfill or starting point, and directory owners often added their own links to the DMOZ data in their pages, which also included the fair-use DMOZ editor/addition links. It seems that Google wanted to murder such sites despite having used DMOZ to get started. Again, it would have been trivial to algorithmically detect vanilla DMOZ pages, but this Infinite Monkeys approach to search quality seems to be typical of companies with more money than clues.

This is one of the unintentionally funny lines in the old guidelines:
"If you see Wikipedia or DMOZ content and PPC ads with no original content on the page, it is spam."

What must those poor raters make of Google's SERPs? :)

Regards...jmcc

jmccormac

Msg#: 4686381 posted 12:04 pm on Jul 10, 2014 (gmt 0)

There's also a big flaw in the previous rating guidelines, in that they do not distinguish hacked websites from ones that purposely hide links and text. I'm beginning to wonder about the clue level of the people who wrote those guidelines, because these link-injection hacks have a long history, and the owners of the sites are often completely unaware that their site has been hacked (typically WP/Joomla sites with vulnerabilities or vulnerable plugins).

Regards...jmcc

Jenstar

Msg#: 4686381 posted 2:35 pm on Jul 10, 2014 (gmt 0)

Just reading through the section on parked domains and it seems rather clueless of Google that it cannot tell a PPC landing/parked domain from an active one and even has to explain this in its guideline.


Don't forget that quality raters are not SEOs, or even people who are especially tech-savvy, so it would be like explaining what a parked page is to someone who has likely never bought a domain name before. And domain names drop all the time, so it wouldn't be unusual for a website to be included in a quality rater's queue and then drop in the meantime. Google's algo filters out parked domains for the most part, and has for quite some time.

EditorialGuy

Msg#: 4686381 posted 2:51 pm on Jul 10, 2014 (gmt 0)

Remember, the guidelines are for quality raters, not for site owners. And don't assume that ticking off the boxes on a checklist will move you to the top of the SERPs, any more than reading an article on "Stephen King's Top 10 Tips for Becoming a Successful Novelist" will move you to the top of the New York Times Bestseller List.

Edge

Msg#: 4686381 posted 3:30 pm on Jul 10, 2014 (gmt 0)

because it's from a credible source


I could not find their source of any references on Google. Can anybody help me?

netmeg

Msg#: 4686381 posted 4:37 pm on Jul 10, 2014 (gmt 0)

I could not find their source of any references on Google. Can anybody help me?


I don't understand what you're looking for.

atladsenser



 
Msg#: 4686381 posted 7:54 pm on Jul 10, 2014 (gmt 0)

I really find the points about advertisements interesting, especially given AdSense and its link units, as those are so often placed in spots on the page where you'd think they were navigation. It will be really interesting to see if AdSense recommendations change as a result.

aristotle

Msg#: 4686381 posted 7:55 pm on Jul 10, 2014 (gmt 0)

Google is now putting a high emphasis on sites that are considered to have a high level of expertise, authoritativeness or trustworthiness.

It's too bad that their algorithm can't actually identify such sites reliably. There are plenty of political and social-issue websites full of lies and misinformation that still get lots of traffic from Google. Of course, there are also many humans who can't recognize lies and misinformation either. It's a worthy goal to try to do it algorithmically, but success is still a long way off.

brotherhood of LAN

Msg#: 4686381 posted 8:21 pm on Jul 10, 2014 (gmt 0)

Thanks for sharing. It would seem these are the areas where human assistance can help the machine-learning aspect of the algo; maybe there'll be commonality in the whats and whys across sites.

One point I wanted to make: the "secondary content" could well be AJAX-fetched content that the bot couldn't see for itself.

webcentric

Msg#: 4686381 posted 10:10 pm on Jul 10, 2014 (gmt 0)

I think I'm gonna "wear" a flak vest for the rest of this conversation. ;) Er, looks like the shrapnel got modded away (takes off flak vest).

The concept of human + machine analysis where quality is concerned is certainly interesting, and there are a variety of things in the article that just make plain sense to me if I think about them as the basis of adding value for my audience. The authority concept is a bit of a crap shoot, and I'm sure we aren't far from the entire Internet being the result of work by experts (uh-huh). Now we just need to invent a BS search engine for people who don't care about the quality of information.

ken_b

Msg#: 4686381 posted 10:49 pm on Jul 10, 2014 (gmt 0)

A huge percentage of the people building websites in my niche are nowhere near claiming the title of "WEBMASTER". They're hobbyists/enthusiasts who put up some kind of website or page using the easiest way they can find.

Webmastering IS NOT what they are doing, or ever will be doing!

They are just sharing their knowledge and experiences ...

and when it comes to the niche, they are the people that know what's what! They are the people that are passing along the knowledge, the "wisdom of their years" that is so helpful to others who share their interests.

I doubt if 1 in 1,000 has ever heard of any quality guidelines from any search engine.

But the quality of their shared/community knowledge is second to none.

For some group of "too smart for their own good", self-important, self-aggrandizing engineers to even think they know how to judge the quality of knowledge that a community has amassed over many, many years is pure BS.

I'd guess that a good many of the people in this thread would look at a lot of these websites/pages and cringe at the lack of technical development. So would I; I cringe when I look at my own website.

But technical development of websites/pages IS NOT what these folks are doing.

Sharing knowledge of and enthusiasm for the niche is what it's all about!

Violations of these so-called "quality" guidelines abound on these sites, and for a huge percentage they always will.


[ There, I feel better now :) ]

jmccormac

Msg#: 4686381 posted 11:09 pm on Jul 10, 2014 (gmt 0)

Don't forget that quality raters are not SEOs, or even people who are especially tech-savvy, so it would be like explaining what a parked page is to someone who has likely never bought a domain name before.

The problem is that Google's approach here is a meatbot one - outsourcing what could be done automatically or algorithmically, if Google hadn't wandered off down the yellow brick road of AI, telling users what Google thinks they should be searching for rather than providing results for what the searcher wants.

And domain names drop all the time, so it wouldn't be unusual for a website to be included in a quality rater's queue and then drop in the meantime.

Well, I do know a thing or two about domain names and how they drop. :) In .com, 2,363,211 domains dropped in June. There are complications where expired domains will move to registrar graveyard/auction sites. Again, that has been a long-running thing, and the previous guidelines missed it. It would seem that Google's guideline writers are ignorant of the lifecycles of expiring domains and are adopting a meatbot approach to something that is very simple to automate.

Google's algo filters out parked domains for the most part, and has for quite some time.

Which is why some parked domains develop pseudo-content and try to obfuscate links. Again, PPC parked pages have very clear URL signatures if you know what you are looking at.

In many respects, the reliance on human raters is the Yahooicisation of Google.

Regards...jmcc

jmccormac

Msg#: 4686381 posted 11:32 pm on Jul 10, 2014 (gmt 0)

The authority concept is a bit of a crap shoot, and I'm sure we aren't far from the entire Internet being the result of work by experts (uh-huh).

The reality is that the quality of information on the web varies widely, and most of Google's approach has been no different to that of academics who are unaware of the existence of a real world where information has not been verified, quality-assessed, or put through a proper ETL (extract/transform/load) process. One classic example that pops up regularly in the domain name industry is the mickey-mouse academic study claiming that cybersquatting is rife because people own domain names in TLDs other than .com, and that therefore they are cybersquatting. The problem is that these studies are based on a limited understanding (if not abject ignorance, in some cases) of the web and of the existence of country code TLDs and other gTLDs.

The web developed but Google's search management has not; it is still stuck in the early 2000s. This reliance on human quality raters is an attempt to figure out why Google's search engine is not working. The answer is simple: the web is no longer a single entity but rather a set of markets (or webs). Some of these markets are defined by national, linguistic or cultural boundaries. Others are defined by interest or niche. But Google's one-size-fits-all approach is why it cannot come to terms with the modern web. Rather than fixing the holes in its bucket (the algorithm), Google is trying to figure out which colour to paint that bucket.

Regards...jmcc

EditorialGuy

Msg#: 4686381 posted 12:07 am on Jul 11, 2014 (gmt 0)

This reliance on human quality raters is an attempt to figure out why Google's search engine is not working.


I think you misunderstand how human quality raters are being used. It would be idiotic for a search engine to create algorithms without human input or feedback.
