Forum Moderators: Robert Charlton & goodroi

Google Updates and SERP Changes - January 2019

         

Cralamarre

3:01 pm on Jan 1, 2019 (gmt 0)

WebmasterWorld Senior Member 5+ Year Member Top Contributors Of The Month



The following 3 messages were cut out of thread at: https://www.webmasterworld.com/google/4929562.htm [webmasterworld.com] by robert_charlton - 7:58 pm on Jan 1, 2019 - (PDT -8)


@MayankParmar, it's just New Year's. The same thing happens every year. My traffic won't really start to pick up until next week.


[edited by: Robert_Charlton at 4:04 am (utc) on Jan 2, 2019]
[edit reason] Splitting thread to new month and new year. [/edit]

broccoli

2:45 pm on Jan 31, 2019 (gmt 0)

5+ Year Member Top Contributors Of The Month



I can’t explain why a linked page isn’t getting indexed, but the theory is that your links now have to have links of their own. Black hats think it’s great because they have endless resources to power up their links with other links: they’re paying for articles on news sites, then pointing PBNs at the articles. Meanwhile, white hats who can’t control their links are getting punished.

TalkativeEditorial

3:16 pm on Jan 31, 2019 (gmt 0)

5+ Year Member Top Contributors Of The Month



My client is a news site, though, and even original copy (not from the wires), like blogs, is getting nothing. Even weirder, the page shows up when the link is pasted directly into Google. Very strange.

WalterPi

4:22 pm on Jan 31, 2019 (gmt 0)

5+ Year Member



There is something weird about the Dutch version of Google. Websites that scrape content from other sites suddenly rank very high in the search results, often not once but several times. I can see what ichthyous noticed as well: content created by users is hardly valued by Google.

There is another problem. Websites with very little content also rank high. I can even see results with no content at all. It's a very poor user experience, but Google likes to rank them high.

broccoli

4:35 pm on Jan 31, 2019 (gmt 0)

5+ Year Member Top Contributors Of The Month



It’s because they’ve devalued links and other quality factors so much in favour of keyword density. The guaranteed way to get the highest keyword density on the page is to have a very short page with hardly any content except your keywords in the title and some LSI and partial match words in the body. I have longer pages and have had to increase my keyword density to compete with the shorter pages outranking me. I’ve effectively had to keyword stuff in order to get my rankings back. It doesn’t work straight away but after a few weeks of the first round of this I saw a clear correlation between the pages I altered and my rankings. I regained some #1 positions.
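The tactic above rests on keyword density, i.e. what share of a page's words belong to the target keyword. As a rough illustration of why a very short page mechanically scores higher than a long one for the same number of keyword occurrences (this metric and its tokenization are my own assumptions; Google has never published how, or whether, it measures keyword density):

```python
import re

def keyword_density(text: str, keyword: str) -> float:
    """Fraction of the words in `text` that belong to occurrences of `keyword`.

    Hypothetical metric for illustration only -- not a documented
    Google signal.
    """
    words = re.findall(r"[a-z']+", text.lower())
    kw = keyword.lower().split()
    if not words or not kw:
        return 0.0
    # Count (possibly multi-word) keyword occurrences as a sliding window.
    hits = sum(
        1 for i in range(len(words) - len(kw) + 1)
        if words[i:i + len(kw)] == kw
    )
    return hits * len(kw) / len(words)

short_page = "widget reviews: the best widget reviews for every widget"
long_page = short_page + " " + "filler " * 200

# Same three keyword hits, but the short page's density is far higher.
print(keyword_density(short_page, "widget"))
print(keyword_density(long_page, "widget"))
```

The point of the sketch is only that density rewards brevity: padding a page with any extra words dilutes the score, which matches the complaint that thin pages outrank longer ones.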

WalterPi

4:43 pm on Jan 31, 2019 (gmt 0)

5+ Year Member



Really? That's crazy. I am really disappointed that websites with scraped content are ranking high. They just know Google ranks them high at the moment. These kinds of websites collect information and bundle it together. They don't invest hours writing content; they just spider, spider, and spider, and Google loves it. I don't understand it. You would think these kinds of websites would get a penalty, but no...

broccoli

4:52 pm on Jan 31, 2019 (gmt 0)

5+ Year Member Top Contributors Of The Month



The spam detection systems in non-English Google are quite primitive compared to English Google. Many serious/criminal black hats who scrape content and use hacked links and sneaky redirects have moved to non-English Google because it’s much easier for them to game the search results there.

ichthyous

4:58 pm on Jan 31, 2019 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



I’ve effectively had to keyword stuff in order to get my rankings back.


That is a recipe for disaster, I think. It will help you short term at best, and then what happens when they go back to penalizing keyword stuffing? I am not convinced that Google would regress so far that simple tactics from circa 2003 would work now. But I do agree that link value is being devalued overall, perhaps because they realized that links are also an archaic way of calculating rank. Let's face it: for every search there are potentially tens or hundreds of worthy sites they could rank at the top on the quality of their content, and many are closely competitive. I think Google hasn't figured out yet what its new ranking paradigm will be.

broccoli

4:59 pm on Jan 31, 2019 (gmt 0)

5+ Year Member Top Contributors Of The Month



@TalkativeEditorial There has been a mysterious “bug” in Google News for months now where some sites are not showing up. They keep claiming to have fixed it and then it comes back. I wonder whether it has something to do with their attempts to tackle fake news but it’s not working properly.

broccoli

5:07 pm on Jan 31, 2019 (gmt 0)

5+ Year Member Top Contributors Of The Month



@ichthyous

That is a recipe for disaster I think.


I agree, but as long as I’m only a tiny fraction higher than everyone else on the SERP, I don’t think the punishment will be too great. I will simply rewrite my pages again when Google stops acting like it’s 2003. Until then I have to make a living. Though if they put the emphasis back on quality, links, and social, those factors will actually boost my site. :)

ByronM

5:36 pm on Jan 31, 2019 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



I am seeing lots of discussions about "crawled but not indexed" pages. On Facebook, a lot of people are calling this some kind of crawl quota, where Google seems to be limiting how much of your content is crawled, so a lot of people are deleting old pages and old content to try to get around it. That seems backwards to me, but I wouldn't be surprised if there is some truth to crawl depth being tied to some sort of score/rank/quality scale.

Selen

6:10 pm on Jan 31, 2019 (gmt 0)

10+ Year Member Top Contributors Of The Month



If user-generated content is somehow being devalued, then it makes sense that websites with scraped content come out on top, because by the time it's scraped it no longer reads as user-generated.

dethfire

4:04 am on Feb 1, 2019 (gmt 0)

10+ Year Member Top Contributors Of The Month



How can Google determine the difference between a site that scrapes user-generated content and a site with its own active user-generated content?

WalterPi

11:43 am on Feb 1, 2019 (gmt 0)

5+ Year Member



It is the amount of unique text on a website. Websites that use generated content just spider content from other sites and combine it. Google's algorithm is smart enough to recognize this.

For example:
Spider source 1: 25 degrees
Spider source 2: strong wind
Spider source 3: no rain

Combined (automated text): Today it will be [source 1]. There will be a [source 2]. You don't need your umbrella because there will be [source 3].
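The weather example above can be sketched as template filling, and the "amount of unique text" idea as comparing one page's words against the rest of a corpus. Both functions below are toy illustrations of the mechanism being described, not anything Google has documented:

```python
TEMPLATE = ("Today it will be {temp}. There will be a {wind}. "
            "You don't need your umbrella because there will be {rain}.")

def generate_page(temp: str, wind: str, rain: str) -> str:
    # Stitch scraped fragments into fixed boilerplate, as in the example above.
    return TEMPLATE.format(temp=temp, wind=wind, rain=rain)

def unique_text_ratio(page: str, corpus: list[str]) -> float:
    """Share of a page's words that appear nowhere else in the corpus.

    A toy stand-in for the "amount of unique text" signal supposed
    above; the real algorithm is not public.
    """
    own = set(page.lower().split())
    if not own:
        return 0.0
    others = set()
    for other in corpus:
        if other is not page:
            others |= set(other.lower().split())
    return len(own - others) / len(own)

pages = [
    generate_page("25 degrees", "strong wind", "no rain"),
    generate_page("12 degrees", "light breeze", "heavy rain"),
]

# Both pages are mostly shared boilerplate, so their unique-text ratio is low.
print(unique_text_ratio(pages[0], pages))
print(unique_text_ratio(pages[1], pages))
```

Hand-written pages would share far less wording with each other, so a low unique-text ratio across many pages is at least a plausible fingerprint of this kind of automated assembly.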

System

2:01 pm on Feb 1, 2019 (gmt 0)




The following 2 messages were cut out to new thread by robert_charlton. New thread at: google/4934804.htm [webmasterworld.com]
7:24 pm on Feb 1, 2019 (PDT -8)
This 224-message thread spans 8 pages.