| 10:12 pm on Mar 6, 2013 (gmt 0)|
@Martin Ice Web
If you read my previous post, it would be clear that I think 404 pages can mean that something is wrong, e.g. if a lot of pages link to them. But generally they do not.
After that first post of mine, I found the Google advice, which correlates with my opinion. Read between my lines :)
I haven't seen any changes in traffic to any of my sites in the last 24 hours.
Mostly informational sites, location: USA
| 10:18 pm on Mar 6, 2013 (gmt 0)|
I have been watching the rankings very closely. I am expecting to recover from 2 years in Panda hell with the next Panda update. I have seen nothing as of yet that indicates Panda movement. I do notice slightly less traffic today than normal, so it could be other ranking algo changes - unless something is starting in Europe that hasn't gone on in the U.S. yet - which I don't think has happened in the past (has it?).
I am hoping for Panda soon - but just haven't seen anything.
| 10:37 pm on Mar 6, 2013 (gmt 0)|
|unless something is starting in Europe that hasn't gone on in the U.S. yet - which I don't think has happened in the past (has it?). |
If I'm not mistaken, yes, this has happened before.
It is also possible G. just got personal with a few of us...
| 12:10 am on Mar 7, 2013 (gmt 0)|
Is it related to Europe? I had just moved my site to European hosting a week ago.
| 3:01 am on Mar 7, 2013 (gmt 0)|
I am not getting this latest shift/update.
How can the site that discovered this micro-niche vanish completely from the SERPs, whereas the guy who scraped the original site's content and started 2-3 scraper sites out of it is ruling the niche?
This is totally unfair... not seeing any shift on my other sites, except this one.
| 6:15 am on Mar 7, 2013 (gmt 0)|
I lost my inner pages' rankings, so something bigger changed than we thought... My site's PageRank is 4 and my inner pages' was 3. Suddenly all of it is gone, rankings as well. I don't know what's going on.
| 8:47 am on Mar 7, 2013 (gmt 0)|
It seems that the URLs I had Google remove using Webmaster tools resurfaced on March 4. I have since blocked them by robots.txt just to make sure. I didn't block the robots before because some of the pages 301 to others, so I thought I might as well keep it crawlable but not indexed (the pages were moved to a subdomain). So instead, now they just can't be reached at all by directory.
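For reference, blocking a directory the way described above takes only a couple of lines in robots.txt (the path below is invented for illustration). One trade-off to be aware of: once crawling is disallowed, Googlebot can no longer see any 301 redirects inside that directory, which is exactly the crawlable-but-not-indexed option being given up here.

```text
# robots.txt at the site root
User-agent: *
# Hypothetical directory whose pages were moved to a subdomain
Disallow: /old-directory/
```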
| 2:26 pm on Mar 7, 2013 (gmt 0)|
I'm not sure what's going on either, there's no official release from Google yet. I'm hoping someone can shed some more light on it for us :)
| 2:45 pm on Mar 7, 2013 (gmt 0)|
Yes. Can anybody come up with a theory? Why are only we suffering, while the rest aren't observing any shift?
| 2:56 pm on Mar 7, 2013 (gmt 0)|
Google penalized another link network yesterday - Sape Links, and all those using their services too. See SEroundtable.
| 3:08 pm on Mar 7, 2013 (gmt 0)|
It was a Panda, it was just a deep crawl... flux can arise for many reasons... an update of content could be a possible reason for positions changing... I used to see changes frequently, every month or so...
| 3:08 pm on Mar 7, 2013 (gmt 0)|
It wasn't a Panda... sorry, mistakenly written.
| 3:12 pm on Mar 7, 2013 (gmt 0)|
|Google penalized another link network yesterday |
I think penalties are the problem. Positive scores for relevance work well enough. Negative scores for relevance don't work. I need my customers to buy my product, not to sell me a minus quantity of it.
| 3:57 pm on Mar 7, 2013 (gmt 0)|
|I think penalties are the problem. Positive scores for relevance work well enough. Negative scores for relevance don't work. |
Yes, well said, that's really the fundamental shift that's been going on over the past two years: the move away from a long history of measuring and rewarding signals of quality, toward seeking out and punishing behaviors and site traits that Google sees as artificially inflating the rankings of sites that don't fit their idea of quality.
Panda, Penguin, EMD, links that don't pass the sniff test du jour.
I get it, and what the hoped-for result will be, but it's an approach that is worthy of debate: "if we can figure out how to remove all the stuff we don't place value on, what's left should all be of high quality."
|Martin Ice Web|
| 4:47 pm on Mar 7, 2013 (gmt 0)|
We have unusually high traffic (+20%) for a Thursday. Unfortunately it does not convert. People fetch 3-6 pages, the bounce rate is very low, and time on site is up.
Although we lost many positions through yesterday, we have more traffic. We see many new sites.
The very best is that there is a site at #10. When you open it, you get a PHP login with nothing on the page but the title with the keyword in it. (Quality wins!)
| 4:54 pm on Mar 7, 2013 (gmt 0)|
|The very best is that there is a site at #10. When you open it, you get a PHP login with nothing on the page but the title with the keyword in it. (Quality wins!) |
So do you think that's more likely the permanent number 10 result or some type of visitor interaction testing where the "bad" results will begin to disappear over the next little while, like over the next week or two?
|Martin Ice Web|
| 5:13 pm on Mar 7, 2013 (gmt 0)|
@TOI, I think G is testing (is it testing?) every second. There are so many silly pages out there only because they link to brands. These pages are only affiliates or full of AdSense. The Panda algo shouldn't push them, and it is overdue to implement this into Panda. IMO we are close to a new Panda and compiling is still in progress.
What makes me wonder is that they say (in this other thread: How ysearch works) that they test their new algos before letting them into the wild. Then why are there so many nonsense pages on the first result page?
| 5:28 pm on Mar 7, 2013 (gmt 0)|
|There are so many silly pages out there only because they link to brands. |
|What makes me wonder is that they say (in this other thread: How ysearch works) that they test their new algos before letting them into the wild. Then why are there so many nonsense pages on the first result page? |
Well, one thing I've heard them say before is when you're dealing with the numbers they are there's only a limited amount you can actually test internally, then you have to "run with it" and let "other factors" influence where things end up.
There are some really interesting things they deal with that most of us don't think about, like language detection and "maybe using grammar and spelling someday". When they try to detect language and 80% of a page is in English but 20% is in Greek, if you "run a grammar check" algorithmically you get "the grammar on that page sucks", even though it might be 100% correct in both languages.
So there's some "little details" they have to take into account to make sure they don't "throw the baby out" (too often anyway) and to do it they have to "err on the side of caution" then rely on other signals to "do the dirty work" when they're not 100% sure algorithmically.
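To make the mixed-language trap concrete, here is a toy sketch (purely illustrative, nothing like Google's actual pipeline): classify each word by Unicode script before deciding which language's grammar rules could even apply, instead of running one checker over the whole page.

```python
import unicodedata

def script_proportions(text):
    """Rough per-script word proportions: a naive stand-in for language detection.

    A grammar checker that assumes one language would flag every Greek word
    on a mostly-English page as an 'error', which is the trap described
    above. Splitting by script first avoids that.
    """
    counts = {}
    for word in text.split():
        first = next((c for c in word if c.isalpha()), None)
        if first is None:
            continue
        # e.g. 'LATIN SMALL LETTER T' -> 'LATIN', 'GREEK SMALL LETTER ALPHA' -> 'GREEK'
        script = unicodedata.name(first).split()[0]
        counts[script] = counts.get(script, 0) + 1
    total = sum(counts.values())
    return {s: n / total for s, n in counts.items()}

# A page that is 80% English with 20% Greek mixed in:
page = "the quick brown fox " * 4 + "αλφα βητα γαμμα δελτα"
print(script_proportions(page))  # {'LATIN': 0.8, 'GREEK': 0.2}
```

Real language detection is far more involved (n-gram models, shared scripts across languages), but even this crude split shows why a per-page "grammar score" would be misleading without it.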
The page you're talking about seems easy, and I think sometimes it's almost a case of in a "machine learning system" you might have to show it N cases of the "really bad" for it to learn "don't show any of those, with a few exceptions".
But even when it looks simple, they still have to make sure they get it right. If they "just threw out" pages with limited text, a login/account-creation form and nothing else on them, they'd look like fools, because Facebook and Twitter would both disappear. So I think what they have to do initially is leave all those types of pages "in" and then let "behavior and other signals" push the bad ones out, and the fastest place to get them "pushed out by behavior" is higher in the rankings.
I'm not 100% sure if that's why garbage shows so high sometimes right after an update, but I think it could be that "when it's questionable" the algo "bumps it up a bit" to get the behavior signals, and it either sticks or drops over a shorter period of time than it would on, say, page 30.
I guess another way of saying it is: by pushing "questionable" or "looks bad but not sure, so let's find out" results up into the "higher interaction areas" of the results, they can "clean the index out" faster than they could by leaving those results on page 5, where there's not enough interaction to "indicate a good or bad" result for what could be a very long period of time.
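The theory being described is essentially an explore/exploit trade-off. A toy sketch of the idea (entirely hypothetical, not a description of any real ranker) might look like this: rank by score as usual, but occasionally promote an "uncertain" result into a visible slot so it collects interaction data quickly.

```python
import random

def rank_with_exploration(results, explore_rate=0.1, test_slots=(3, 10)):
    """Order results by score, then occasionally promote an 'uncertain'
    result into a high-interaction slot so it gathers click/behavior
    data quickly, instead of languishing untested on page 5.
    """
    ranked = sorted(results, key=lambda r: r["score"], reverse=True)
    uncertain = [r for r in ranked if r.get("uncertain")]
    if uncertain and random.random() < explore_rate:
        candidate = random.choice(uncertain)
        ranked.remove(candidate)
        # Promote into a visible slot; real user behavior then decides
        # whether it sticks there or drops back down.
        ranked.insert(random.randrange(*test_slots), candidate)
    return ranked

results = [{"url": f"example{i}.com", "score": 1.0 - i * 0.05,
            "uncertain": i % 7 == 0} for i in range(20)]
top = rank_with_exploration(results)
```

With `explore_rate=0` this degenerates to a plain score sort; raising it trades short-term result quality for faster feedback, which matches the "bumps it up a bit to get the behavior signals" intuition above.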
| 6:18 pm on Mar 7, 2013 (gmt 0)|
What kind of test could they possibly be doing?
I invented this micro-niche, and someone scraped my complete site, and now I am the one who has vanished, not them. What kind of test is this?
| 6:18 pm on Mar 7, 2013 (gmt 0)|
I feel sorry for those who have designed their own website and haven't discovered this forum and don't know why their ranking has dropped so many times in the last 2 years. On the other hand we aren't that much better off.
| 6:25 pm on Mar 7, 2013 (gmt 0)|
|What kind of test could they possibly be doing? |
Which site do visitors respond better to in the results?
|I invented this micro-niche, and someone scraped my complete site, and now I am the one who has vanished, not them. What kind of test is this? |
Google does not usually know all of this.
The other site(s) that scraped you could be yours or someone else's. Unless they know for sure that one is "the original" and one is "the copy", that they're from two different sources, and that the "copy" doesn't have permission to use the original's content, they go with "the most popular" based on links, user behavior in the results and other signals. So unless you make them aware that the sites scraping you are not yours and don't have permission to use your content, by doing something like a DMCA filing against the scraper sites, there's a good chance the scraper sites will continue to outrank you, especially if there are "negative signals" on your site that are somehow not on the scraper's.
[edited by: TheOptimizationIdiot at 6:30 pm (utc) on Mar 7, 2013]
| 6:29 pm on Mar 7, 2013 (gmt 0)|
|On the other hand we aren't that much better off. |
Yep, the last two years of Pandas, Penguins, algos and gawd knows what else have seen my overall traffic drop by 60%, meanwhile I see my stuff scraped and images reproduced everywhere... thank heavens I don't rely on Google for a living.
| 6:35 pm on Mar 7, 2013 (gmt 0)|
|so unless they know for sure one is "the original" |
Don't they have a simple thing such as a calendar at The Plex? It's mind-boggling that they don't know which was published first!
How can a site published 7 (seven) years after mine scrape me and outrank my images without a totally flawed algo?
| 6:39 pm on Mar 7, 2013 (gmt 0)|
And this scraper has set up sites by scraping my titles... so basically he has thousands of thin pages ranking high, while my pages appear to have gone AWOL. For keywords where I had been ranking #1 since the start of this niche, I am now not even in the top 20 pages.
|Martin Ice Web|
| 7:17 pm on Mar 7, 2013 (gmt 0)|
@TOI, I rather think that they don't catch them because these sites run into Panda algo errors, or there is not enough input for the algo. You yourself said it: they are dealing with a large number of factors. It is natural that errors occur while a page is being processed. Maybe those very thin pages run into an algo error and get a "not able to compile" flag, which pushes the page to the front, only to be caught by another algo sometime in the future. Think of the many bugs in operating systems (WIN, UNIX, OS) or browsers.
On the other hand, the silly pages with only some links to brands fit perfectly into the theory that I mentioned and tedster brought to the point: "know your neighborhood".
| 7:56 pm on Mar 7, 2013 (gmt 0)|
What blows my mind is when scrapers steal your images, then rename them in WordPress to something like "keyword-keyword-keyword-keyword-keyword-keyword-keyword-keyword-keyword-keyword-keyword-keyword.jpg" and it outranks my pages. WordPress pages always appear to get "initial" preferential listing, probably due to the valid code and SEO plugins. It seems clean code and keyword-loaded image titles rank higher than authoritative, intelligible content. Luckily those types of pages and their black-hat tricks don't hold water for very long.
| 8:07 pm on Mar 7, 2013 (gmt 0)|
@Martin Ice Web, I think we're saying very similar things with different words, and however we "spell it out", the point we both seem to be making about why things are the way they are fits fairly closely with at least some (if not quite a bit) of what's going on with the SERPs.
| 8:11 pm on Mar 7, 2013 (gmt 0)|
The issues of scraping and DMCA are hashed over in many other threads. Let's return this thread to the topic, please - updates and SERP changes.
The pattern that I struggle with is little-by-little ranking drops that just keep going. After a couple months, it's as much of a traffic hit as a fat penalty or a major algo change, but it's almost impossible to pin this little-by-little "traffic erosion" down to any one date.
I'm now involved with two sites that have been showing this pattern over the past 3 months. Any ideas what this kind of thing represents?
[edited by: tedster at 3:46 am (utc) on Mar 8, 2013]
| 8:31 pm on Mar 7, 2013 (gmt 0)|
WOW, now that's a question...
I would personally start with:
1.) User Behavior: Start with patterns leading to the "slow drop". Have they changed with respect to time on site, etc.?
Is there something along the lines of previous availability of a product no longer available from the site? In other words: Stock level, back-ordered, discontinued. Even an order form not working (possibly only in certain browsers) that's causing people to keep searching rather than buying and ending the search?
Basically, actual behavior and any onsite changes that could cause a visitor to "search again after visiting" in order to find something they used to get from the site.
Or, in other words, is there anything to indicate "the search does not end" on the site(s) as often as it used to?
2.) Inbound Links: Previous link building efforts that have stopped (think along the lines of growth rate decline or churn rate turning negative.)
I'd also consider other seemingly innocuous or even "good causes" of this type of change in inbound links, such as disavowing links or asking for links to be removed, both of which would likely have a negative impact on growth rate and increase churn rate to the negative side, and would likely be "picked up" or "noticed" by Google over time rather than immediately.
3.) Template/Internal Structure Change: Was there any type of change along those lines that could be "making its way through the system" and having a "slow impact" as more of the changes are processed and "understood as a whole relating to the site" over time?
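Point 2 in the list above (link growth and churn) can be monitored with something as simple as comparing monthly snapshots of referring domains. A minimal sketch, with invented domain names:

```python
def link_churn(previous, current):
    """Compare two monthly snapshots of referring domains.

    Returns (gained, lost, net). A persistently negative net is the
    'churn rate turning negative' pattern described above: the kind of
    slow signal a search engine would pick up over months, not days.
    """
    prev, curr = set(previous), set(current)
    gained = curr - prev
    lost = prev - curr
    return gained, lost, len(gained) - len(lost)

# Hypothetical snapshots from a backlink report:
jan = {"blog-a.example", "news-b.example", "forum-c.example", "dir-d.example"}
feb = {"blog-a.example", "news-b.example"}  # two referring domains dropped, none gained
gained, lost, net = link_churn(jan, feb)
print(net)  # -2
```

Tracking this month over month, alongside the behavior metrics in point 1, is one way to tie a gradual "traffic erosion" back to a cause rather than a date.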
[edited by: TheOptimizationIdiot at 8:37 pm (utc) on Mar 7, 2013]
|Martin Ice Web|
| 8:36 pm on Mar 7, 2013 (gmt 0)|
@tedster, I saw this pattern from summer to late autumn last year. Since then I have managed to get my site back to many page #1 positions (with setbacks, certainly). In some cases I outrank the big brands again. I worked hard, and the main thing I did was "siloing": cutting my pages down to their real topic. The pages climbed back very slowly, even without Panda updates.
@TOI, yes, almost similar in mind :). I almost thought you were a Google worker because of the way you defend the SERPs.
| 8:38 pm on Mar 7, 2013 (gmt 0)|
|@TOI, yes, almost similar in mind :). I almost thought you were a Google worker because of the way you defend the SERPs. |
LOL Nope. I just try to look at the bigger picture and realize how tough it is for them to do what they do. Helps keep me sane (or relatively sane) lol
I mean really, a trillion+ pages and a trillion+ searches a year? They're gonna miss a few, which relative to the numbers they work with means if they're right on 99% of the time (for 990,000,000,000 queries) they miss on 10,000,000,000 queries.
Yes, that means 10 billion queries are off if they have 99% accuracy and here we are complaining about a few results. Kinda makes me chuckle a bit sometimes when I think about the near insanity of people thinking every result set should be "right on every single time" and "Google's broken" if that's not how the results are right now today, because even with 99.9% accuracy, they're "missing it" for 1 billion queries.
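The arithmetic in the post checks out, and it's a two-line sketch for anyone who wants to try other accuracy figures:

```python
def missed_queries(total_queries, accuracy):
    """Queries that get a 'wrong' result set at a given accuracy rate."""
    return round(total_queries * (1 - accuracy))

TRILLION = 10**12
print(missed_queries(TRILLION, 0.99))   # 10000000000 (10 billion)
print(missed_queries(TRILLION, 0.999))  # 1000000000 (1 billion)
```

Even an extra "nine" of accuracy still leaves a billion off queries a year, which is the point being made about scale.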
[edited by: TheOptimizationIdiot at 9:01 pm (utc) on Mar 7, 2013]