Forum Moderators: Robert Charlton & goodroi


Google Human Editors - Common Misunderstandings


mboydnv

3:58 pm on Oct 7, 2016 (gmt 0)

10+ Year Member Top Contributors Of The Month




System: The following 35 messages were cut out of thread at: https://www.webmasterworld.com/google/4820506.htm [webmasterworld.com] by goodroi - 9:04 am on Oct 10, 2016 (utc -5)


It's ugly here too. Lots of new ranking gains at #11 (never #10), but no conversions.

It's really sad what Google has become. What I wouldn't give to see two other engines cutting into their pie.

What's also happening is the money-begets-money principle. I see so many niches dominated by those making the big money. They in turn pay for ads, then are rewarded with #1 organic rankings. It seems to be happening everywhere. Las Vegas tourism is filled with big money. A new company comes along with a few million behind it, pays $50K in ads for the month, then winds up #1 in local and #1 below the ads in organic rankings. They reap all the money from being #1 and keep feeding the Google machine.

I have no desire to do any more SEO or Yoast BS. It doesn't work. I've done it all. Believe me. The only thing that works is spending big money on ads. Don't believe the slick SEO garbage.

Even Amazon is becoming a toilet. People think they can give away their product at a discount for an honest review, and are finding people turning against them for not having verified reviews.

It's so hard to have any energy in the mornings to battle my competition. I've been doing this 12 years too. I also believe a Google human editor has us filtered so we never reach #10; any rankings we ever get stop at #11.

So where are the conversions? Just how can they take those away? Facebook ads, Google ads, organic traffic: nothing is converting...

graeme_p

12:00 pm on Oct 11, 2016 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



This sounds like standard supervised machine learning stuff. You pick a sample, get humans to rate the sites, and then adjust the algorithm to rate those sites in line with the human ratings by, for example, changing the weighting attached to the factors the algo uses.
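To make that concrete, here is a minimal sketch of the idea (every factor name and number below is invented for illustration, not anything Google actually uses): each site is a vector of ranking factors, humans assign a quality rating, and plain gradient descent nudges the per-factor weights until the weighted score tracks the human ratings.

```python
# Hypothetical factor vectors: [content_depth, ad_density, grammar_score]
sites = [
    [0.9, 0.1, 0.8],
    [0.2, 0.9, 0.3],
    [0.7, 0.3, 0.9],
    [0.1, 0.8, 0.2],
]
human_ratings = [0.9, 0.2, 0.8, 0.1]  # what the raters said

weights = [0.0, 0.0, 0.0]
lr = 0.1  # learning rate

def score(site, weights):
    """Weighted sum of ranking factors: a stand-in for 'the algo'."""
    return sum(f * w for f, w in zip(site, weights))

# Repeatedly nudge each weight to shrink the gap between the algo's
# score and the human rating (stochastic gradient descent on squared error).
for _ in range(2000):
    for site, rating in zip(sites, human_ratings):
        err = score(site, weights) - rating
        weights = [w - lr * err * f for w, f in zip(weights, site)]

# After fitting, the algo should order the sites the way the raters did.
ranked = sorted(range(len(sites)), key=lambda i: -score(sites[i], weights))
print(ranked)
```

The point of the sketch is that no human rating survives into the final model; only the adjusted weights do.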

The weakness I can see here is that the people doing the rating are not representative of every site's audience, and they are apparently not allowed much time to rate the sites. Cindy, I take it the sites you rated were not tailored to your interests or knowledge?

Cindy_B

12:52 pm on Oct 11, 2016 (gmt 0)

10+ Year Member



No, they weren't. I remember rating things like copyright date, whether the site had contact info, whether it was loaded with ads or had real content, whether adult content was present, even good grammar (spelling, punctuation, etc.)... but this was about four years ago, so things may have changed or evolved quite a bit since then.

As for when we were rating search results, it was ad placement on the page, how well the results matched the query, things like that. You were given two examples and had to pick the "better" one, which entailed visiting each site to validate, so it was very fast-paced. As I remember, tasks that were more complex would be given more time than simple rating tasks.

So even task selection was rater-influenced. Because each task had a time restriction, the simple tasks (the "cream") were taken first by experienced raters so they could meet their quotas, leaving the more complex tasks as "crumbs". Less-experienced raters often ended up with complex tasks for which they couldn't meet the time quotas, and subsequently got the axe (like me, probably :).

martinibuster

6:15 pm on Oct 11, 2016 (gmt 0)

WebmasterWorld Administrator 10+ Year Member Top Contributors Of The Month




"This sounds like standard supervised machine learning stuff. "

Exactly what I've been trying to communicate in the most simple way possible. Finally someone else gets it. Anyone else? :)

robzilla

6:48 pm on Oct 11, 2016 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



So the raters manually rank the search results?

martinibuster

9:08 pm on Oct 11, 2016 (gmt 0)

WebmasterWorld Administrator 10+ Year Member Top Contributors Of The Month



Yes, but only for the purposes of quality control or creating a body of data for the machine to learn from, and in both cases the goal is to improve the algorithm at scale. It is not for adjusting the SERPs one keyword phrase at a time.
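A small sketch of the quality-control side of that, with invented data: hold out a batch of side-by-side rater judgments (the two-examples-pick-the-better-one format described above) and measure how often the algorithm's scores agree with the human pick.

```python
# Each entry: (algo_score_a, algo_score_b, human_pick), where human_pick
# is "a" or "b". All of these numbers are made up for illustration.
judgments = [
    (0.8, 0.3, "a"),
    (0.4, 0.7, "b"),
    (0.6, 0.5, "b"),  # algo and human disagree on this pair
    (0.9, 0.2, "a"),
]

# A pair counts as agreement when the side the algo scored higher is
# the same side the human rater preferred.
agree = sum(
    1 for score_a, score_b, pick in judgments
    if (score_a > score_b) == (pick == "a")
)
agreement_rate = agree / len(judgments)
print(agreement_rate)  # 0.75: the algo matches the raters on 3 of 4 pairs
```

Tracking a number like this over time tells the search team whether a change helped or hurt, without any rating ever touching an individual SERP.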

robzilla

7:36 am on Oct 12, 2016 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



I was trolling. Forgive me.

If that was four to five years ago, and they're still employing raters, the AI must now have quite a bit of data and "experience" under its belt. I wonder if it's easier to fool the AI into thinking a web page is of high quality (seeing as it can only judge certain facts), or a human rater (who also has feelings and intuition). Does anyone have any experience optimizing according to the rater guidelines, and seeing subsequent beneficial effects on rankings? I've dug into research papers on web page quality evaluations in the past, and have made adjustments accordingly (which, I should add, probably did increase quality), but it's hard to say whether those changes have had any effect.

Given the time-sensitivity of the rating tasks (and the fact you could, and did, lose the job for not being "efficient" enough), would you be expected to take a web page or website at face value, or to really browse through it and get a sense of the type, quality and quantity of content a website has to offer? I'm also interested in how relevant the overall design of a website would be to rating it. I don't really see anything concrete about design in the guidelines.

graeme_p

8:00 am on Oct 12, 2016 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



The algo is optimised to suit the raters.

So, even if your site is aimed at elderly billionaires, to rank well you need the sort of site that makes a good first impression on people in their 20s making $10 an hour (graduates on $10/hour are likely to be young).

@robzilla,

The process is essentially this:

1) The raters rate large numbers of sites and search results

2) The algorithm is automatically adjusted to rank as closely in line with the raters' ratings as possible (yes, there is an algorithm-adjusting algorithm). It will never be a perfect match, or anything like it, but it will mostly rank what the raters liked highly.

The algorithm does not incorporate the ratings themselves. The adjustments will most likely be to things like how much weight the algo gives to particular factors - e.g. if increasing the importance of exact matches with the site name brings the algo's results more in line with what the raters liked, then the weight of that factor in the algo will be increased. Do that across all the factors Google uses.

3) The algorithm will then be tested with fresh data from the raters to validate it.

4) If 3 is satisfactory it will be deployed.

Google is a leader in the field, so its process will be a lot more complex and sophisticated, but it will be something on those lines.
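For what it's worth, those four steps can be sketched as a toy loop. Everything here (the factor names, the noise level, the deployment threshold) is invented to illustrate the shape of the process, not Google's actual pipeline.

```python
import random

random.seed(0)

def true_quality(site):
    """Stand-in for 'what raters like': mostly content, penalised by ads."""
    return 0.7 * site["content"] - 0.3 * site["ads"]

def collect_ratings(n):
    """Step 1: raters rate a batch of sites (simulated, with rater noise)."""
    batch = [{"content": random.random(), "ads": random.random()} for _ in range(n)]
    return [(s, true_quality(s) + random.gauss(0, 0.02)) for s in batch]

def fit_weights(ratings, lr=0.1, epochs=500):
    """Step 2: adjust the algo's factor weights toward the ratings."""
    w = {"content": 0.0, "ads": 0.0}
    for _ in range(epochs):
        for site, rating in ratings:
            err = sum(w[k] * site[k] for k in w) - rating
            for k in w:
                w[k] -= lr * err * site[k]
    return w

def validate(w, ratings):
    """Step 3: check the fitted algo against fresh rater data."""
    errors = [abs(sum(w[k] * s[k] for k in w) - r) for s, r in ratings]
    return sum(errors) / len(errors)

weights = fit_weights(collect_ratings(50))
mean_err = validate(weights, collect_ratings(20))  # fresh, unseen ratings
deploy = mean_err < 0.05  # Step 4: ship only if validation is satisfactory
print(weights, mean_err, deploy)
```

The validation step against fresh ratings is what keeps the algorithm-adjusting algorithm honest: a change that merely memorised the first batch of ratings would fail on the second.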

@martinibuster, most of us are in the business of developing (or designing) websites rather than being machine learning experts, so I think this is a difficult thing to explain. I have taken an interest in machine learning for other projects (essentially non-web, although they may have web front ends), so I ought to get it by now.
This 37 message thread spans 2 pages.