Google Rewrites Quality Guidelines
Google wants to see a wide variety of supplementary content on a page, and is putting greater emphasis on it as an important and integral part of any page worthy of a High or Very High rating.
(...)
Essentially, if the secondary content is unhelpful or distracting, that’s a Low quality rating.
I am wondering how Google can distinguish whether content is main content or supplementary content. By positioning? By being repeated in the same format on many pages? By something else?
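If it is the repetition signal, a crude version of it is easy to sketch: a block that appears nearly verbatim across many pages of a site (navigation, footers, "related links" modules) is probably supplementary, while a block unique to one page is probably main content. A minimal Python illustration, assuming pages have already been split into text blocks; the 50% threshold and the hashing are my own assumptions, not anything Google has published:

```python
import hashlib
from collections import Counter

def block_fingerprint(text: str) -> str:
    """Normalise whitespace and hash a text block for cheap comparison."""
    normalised = " ".join(text.split()).lower()
    return hashlib.sha1(normalised.encode("utf-8")).hexdigest()

def classify_blocks(pages: list[list[str]], repeat_ratio: float = 0.5):
    """Label each block on each page as 'main' or 'supplementary'.

    pages: one list of text blocks per crawled page of the same site.
    A block whose fingerprint shows up on more than repeat_ratio of
    the pages is treated as site-wide furniture, i.e. supplementary.
    """
    counts = Counter(
        fp for page in pages for fp in {block_fingerprint(b) for b in page}
    )
    threshold = repeat_ratio * len(pages)
    return [
        [(block, "supplementary"
          if counts[block_fingerprint(block)] > threshold else "main")
         for block in page]
        for page in pages
    ]

# Toy example: the nav block repeats on both pages, the article text does not.
pages = [
    ["Home | About | Contact", "Deep article about widget bearings."],
    ["Home | About | Contact", "A different article about widget seals."],
]
for page in classify_blocks(pages):
    print(page)
```

Positioning in the rendered layout would be another obvious signal, but that needs the page geometry, not just the HTML.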
Google is now putting a high emphasis on sites that are considered to have a high level of expertise, authoritativeness or trustworthiness.
This is the idea of E-A-T, a website's "expertise, authoritativeness and trustworthiness".
And the bonus question: does this have anything to do with Matt Cutts taking an extended break?
Why should we, as web developers, consider a bunch of Wikipedia scrapers who probably never built a website of worth to be experts?
What have the twiddlers broken this time in Google's algorithm that they have to launch a new PR offensive about new "guidelines"?
If you want to compete, then you MUST conform to all kinds of .... stuff.
:: wandering off to investigate Raleway font ::
E-A-T - What are the E-A-T factors?
AUTHORITATIVENESS
- IQ Test results? Mensa membership?
- College?
- Degree?
- GPA?
- Past job titles?
- Current Employer?
- Current job title?
A few people asked me that - but I had these guidelines before he announced his leave. It took a huge amount of time to cross-reference the old and new versions since the document was entirely rewritten, so it wasn't posted until now. The timing is still interesting, though, even if they are apparently two unrelated events.
There is a huge section on how a rater can determine if something is copied from elsewhere and how to know what came first. But it wasn't new to the new version, so I didn't touch on it too much. Ironically, Google's "Knowledge Graph/Wikipedia scraper" could be considered copied content under the old guidelines.

Just reading through the section on parked domains, and it seems rather clueless of Google that it cannot tell a PPC landing page/parked domain from an active one and even has to explain the difference in its guidelines. It is a trivial thing to differentiate most PPC landers from active content. I do it automatically with the 110K website usage surveys and the full TLD surveys of various new gTLDs. But then the processes I use are probably different to Google's GIGO approach of spidering everything and hoping the algorithm will give it meaning.
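As for detecting copied text algorithmically rather than by eye, the textbook approach is shingling: break each document into overlapping word n-grams and measure the overlap. A toy Python sketch; the 4-word shingle size and the Jaccard measure are conventional choices on my part, not anything from the guidelines:

```python
def shingles(text: str, k: int = 4) -> set:
    """Return the set of overlapping k-word shingles in a text."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity: intersection over union of two sets."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

original = "the quick brown fox jumps over the lazy dog near the river bank"
scraped  = "the quick brown fox jumps over the lazy dog near the old mill"

print(f"shingle similarity: {jaccard(shingles(original), shingles(scraped)):.2f}")
```

A similarity score says nothing about which copy came first, though; that still needs first-seen crawl dates or something equivalent.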
Just reading through the section on parked domains and it seems rather clueless of Google that it cannot tell a PPC landing/parked domain from an active one and even has to explain this in its guideline.
Don't forget that quality raters are not SEOs, or even people who are that tech savvy, so it would be like explaining what a parked page is to someone who has likely never even bought a domain name before.

The problem is that Google's approach here is a meatbot one: outsourcing what could be done automatically or algorithmically if Google hadn't wandered off down the yellow brick road of AI, telling users what Google thinks they should be searching for rather than returning results for what the searcher actually wants.
And domain names drop all the time, so it wouldn't be that unusual for a website to be included in the queue for a quality rater to check and have it drop in the meantime.

Well, I do know a thing or two about domain names and how they drop. :) In .com, 2,363,211 domains dropped in June. There are complications where expired domains move to registrar graveyard/auction sites. Again, that has been a long-running thing, and the previous guidelines missed it. It would seem that Google's guideline writers are ignorant of the lifecycle of expiring domains and are adopting a meatbot approach to something that is very simple to automate.
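And "very simple to automate" is not an exaggeration: once a .com registration lapses, it moves through well-defined EPP statuses (redemptionPeriod, then pendingDelete) that are visible in plain WHOIS output. A rough Python sketch of that kind of check, shelling out to the standard whois command; the status codes are real EPP values, but the classification logic is just my assumption of how one might automate it:

```python
import subprocess

# Real EPP status codes that mark a lapsed registration on its way
# to dropping; they appear verbatim in registry WHOIS output.
DROPPING_STATUSES = ("redemptionperiod", "pendingdelete")

def domain_drop_stage(domain: str) -> str:
    """Crudely classify a .com domain's lifecycle stage from WHOIS output."""
    try:
        result = subprocess.run(
            ["whois", domain], capture_output=True, text=True, timeout=15
        )
    except (subprocess.TimeoutExpired, FileNotFoundError):
        return "whois unavailable"
    output = result.stdout.lower()
    if "no match for" in output:  # typical registry reply for an unregistered .com
        return "dropped / unregistered"
    for status in DROPPING_STATUSES:
        if status in output:
            return f"expiring ({status})"
    return "active registration"

print(domain_drop_stage("example.com"))
```

Run that over a rater's queue before handing out URLs and the dead ones filter themselves out.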
Google's algo filters out parked domains for the most part, and has for quite some time.

Which is why some parked domains develop pseudo-content and try to obfuscate links. Again, PPC parked pages have very clear URL signatures if you know what you are looking at.
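To illustrate what "clear URL signatures" means: the nameservers of well-known parking services and the path/query templates their landers use are both giveaways. The patterns below are illustrative placeholders to show the shape of the check, not a real blocklist:

```python
import re

# Illustrative hints only: names of some widely used parking services'
# nameservers, and made-up examples of lander-style URL patterns.
PARKING_NS_HINTS = ("parkingcrew", "sedoparking", "bodis")
PARKING_URL_PATTERNS = [
    re.compile(r"/caf/\?"),               # ad-feed style path
    re.compile(r"[?&]ses=[A-Za-z0-9]+"),  # session token in the query string
    re.compile(r"/lander\b"),
]

def looks_parked(final_url: str, nameservers: list[str]) -> bool:
    """Flag a page as a likely PPC parking lander from cheap signals."""
    if any(hint in ns.lower()
           for ns in nameservers for hint in PARKING_NS_HINTS):
        return True
    return any(p.search(final_url) for p in PARKING_URL_PATTERNS)

print(looks_parked("http://example.com/caf/?ses=abc123",
                   ["ns1.parkingcrew.net"]))  # True
```

No machine learning, no meatbots: a lookup table of parking nameservers plus a handful of URL patterns catches most of them.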
The authority concept is a bit of a crapshoot, and I'm sure we aren't far from the entire Internet being the result of work by experts (uh-huh).

The reality is that the quality of information on the web varies widely, and much of Google's approach has been no different to that of academics who are unaware of the existence of a real world where information has not been verified, quality-assessed, or put through a proper ETL (extraction/transformation/loading) process. One of the classic examples that pops up regularly in the domain name industry is where some academic publishes a Mickey Mouse study claiming that cybersquatting is rife because people own domain names in TLDs other than .com and are therefore cybersquatting. The problem is that these studies are based on a limited understanding (if not abject ignorance in some cases) of the web and of the existence of country code TLDs and other gTLDs.