|I also believe that some industries (keyword sectors) may have a longer duration until a new site will see rankings. |
Absolutely. I have long believed that Google has different algorithms for different search terms. Quite logical really, as well as useful for keeping us all confused.
I have another idea. Maybe it's the same algorithm, but the basic data in each niche or taxonomy category is inherently different. That could mean the algorithm's output naturally seems to adapt for the competing sites within that particular niche.
I'm pretty sure this "data difference" must be at least part of the picture. For example, the historical data in most competitive niches is going to look a lot different from the data in relatively low-competition areas.
If I were Google, I would prefer a single self-adapting algorithm over trying to maintain and tweak scores of algorithms.
Have you ever considered the fact that google is tracking user activity on your site, through google analytics, google chrome, google toolbar, clicks from the SERPs, etc., and they need "time" in order to develop a site history? Just like a/b testing, you need a certain number of visitors before you have enough information about how they are interacting with a site to make decisions.
You also need enough visitors and clicks when testing ppc ad copy.
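The point about needing enough visitors before you can make a decision can be made concrete with the standard normal-approximation sample-size formula for comparing two conversion rates. This is just an illustrative sketch; the rates, confidence level, and power below are assumptions, not anything from the thread.

```javascript
// Rough per-variant sample size for an A/B test comparing two
// conversion rates, using the normal-approximation formula with
// conventional defaults: alpha = 0.05 (two-sided), power = 0.80.
function sampleSizePerVariant(baseRate, liftedRate) {
  const zAlpha = 1.96; // z-score for 95% confidence (two-sided)
  const zBeta = 0.84;  // z-score for 80% power
  const pBar = (baseRate + liftedRate) / 2;       // pooled rate
  const diff = Math.abs(liftedRate - baseRate);   // effect size
  const n =
    (Math.pow(zAlpha + zBeta, 2) * 2 * pBar * (1 - pBar)) / (diff * diff);
  return Math.ceil(n);
}

// Example: detecting a lift from a 2% to a 2.5% conversion rate
// requires thousands of visitors per variant, which is exactly the
// kind of "time" a search engine (or a PPC tester) has to wait out.
const n = sampleSizePerVariant(0.02, 0.025);
```

The bigger the lift you're trying to detect, the fewer visitors you need; tiny differences between variants are what force long test runs.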
The fact that google needs time just gives me more proof that it is related to visitor activity, not the number of links a site has.
I need time to decide whether a new site of mine is a success - usually I'll give it a year. Google might figure it out faster than I would, but probably not by much.
(Note I said whether it was a success - meaning user engagement, links, social sharing, etc. I did not say whether it was a good site, because I *always* know it's a good site)
|proof that it is related to visitor activity |
As far as I'm concerned, this should be an important part of SEO data collection (and response) for all of us. We certainly can monitor our own user activity and we should. In fact, we can do it more thoroughly than any search engine can!
I'm talking specifically about browser data on user interaction, such as:
1. How long did the page take to load for each particular user?
2. Did the user scroll the page? How far?
3. Did the cursor move or hover over a link without clicking?
There is lots of other useful data available from browsers, too. And if we're doing A/B testing for conversion optimization, then deciding what specific change to test is much less of a shot in the dark. The browser data gives us some major clues.
For one way to get started in this area of SEO data collection, check out the free script boomerang.js
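As a minimal sketch of collecting the three signals listed above, the snippet below wires up load time, scroll depth, and link-hover counting in plain browser JavaScript. The `/collect` beacon endpoint is a hypothetical placeholder, and this is a toy version of what a library like boomerang.js does far more robustly.

```javascript
// Pure helper: scroll depth as a percentage of the scrollable height.
function scrollDepthPercent(scrollTop, viewportHeight, pageHeight) {
  const scrollable = pageHeight - viewportHeight;
  if (scrollable <= 0) return 100; // page fits in one screen
  return Math.min(100, Math.round((scrollTop / scrollable) * 100));
}

// Browser-only wiring (guarded so the helper stays testable elsewhere).
if (typeof window !== "undefined") {
  const data = { loadMs: null, maxScrollPct: 0, hoveredLinks: 0 };

  // 1. Page load time, via the Navigation Timing API.
  window.addEventListener("load", () => {
    const [nav] = performance.getEntriesByType("navigation");
    if (nav) data.loadMs = Math.round(nav.loadEventEnd - nav.startTime);
  });

  // 2. How far the user scrolled (track the deepest point reached).
  window.addEventListener("scroll", () => {
    const pct = scrollDepthPercent(
      window.scrollY,
      window.innerHeight,
      document.documentElement.scrollHeight
    );
    data.maxScrollPct = Math.max(data.maxScrollPct, pct);
  });

  // 3. Cursor moving over a link (hover happens whether or not
  //    the user ends up clicking).
  document.addEventListener("mouseover", (e) => {
    if (e.target.closest && e.target.closest("a")) data.hoveredLinks++;
  });

  // Ship whatever we have when the page is hidden or closed.
  document.addEventListener("visibilitychange", () => {
    if (document.visibilityState === "hidden") {
      navigator.sendBeacon("/collect", JSON.stringify(data));
    }
  });
}
```

`sendBeacon` is used instead of a normal XHR because it survives page unload, which is when most of this data would otherwise be lost.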
Nope, I'm still seeing one of the spammiest sites ever holding onto position #5 for one of my keyphrases. This site is a single page with no information for a human, several empty pages, and lots and lots of Adsense all over the place. How Google hasn't detected that this site is crap that any human user will be irritated at getting sent to is beyond me. The domain name is the keyphrase, but the site doesn't even touch on answering the query. It's totally MFA.
Yes, Google may need lots of time to detect fantastic quality - but as you noticed, they still can be fooled pretty quickly.
I do think this is more likely in lower competition and longer tail queries. And I also think it happens more when they try to geo-personalize the SERPs, because that process seriously cuts down on the candidate URLs that they can choose between.
sandbox= Launch a site in a vertical that isn't crowded, there isn't one.
sandbox= Launch a site in a crowded vertical, there is one.
Ranking a new site has everything to do with the age of the niche, the money in the top ranking positions, and the number of sites trying to rank.
It isn't a sandbox, it is a competitive box. You're not going to launch a new site and rank for "Trial lawyer" anytime soon, yet you can launch a site and rank for "Butter Cup Party" fairly quickly.
I also think the term "fantastic" is a pile of cow dung. What makes a site "fantastic": the eye of the viewer, Google, or the "site owner"? I can make a pee-poor site look "fantastic" but it is in reality a pee-poor site. I guess all the sites stealing content and ranking now are "fantastic" sites.
I have a pretty good grip on what it takes to rank a site in this new world order and it sure isn't "fantastic".
All the engines seem to agree that Ehow is "fantastic." I never trust their info, and I also hate how their site loads (ads that make it jump around while I'm trying to read). But if that's fantastic, I can sure see why sites that load in a non-disruptive manner and contain useful info might seem like crap. ;)
JohnMu (Google Employee) said:
|...That said, if you're engaging in techniques like comment spam, forum profile link-dropping, dropping links in unrelated articles, or just placing it on random websites, then those would be things I'd strongly recommend stopping and cleaning up if you can. |
So Mr. Mu has now made it official that negative SEO is a very rewarding and effective tactic on Google these days...
|All the engines seem to agree that Ehow is "fantastic." I never trust their info, and I also hate how their site loads (ads that make it jump around while I'm trying to read). But if that's fantastic, I can sure see why sites that load in a non-disruptive manner and contain useful info might seem like crap. ;) |
All the engines agree because a ton of the users agree. I don't like it either, and I wince when my mom prints out something from one of these places as a reference (and she's smarter than the average 81-year-old bear when it comes to pretty much anything). But it gives her the information she was looking for, no more, no less, and I think for a lot of users (who aren't us) good enough is ... good enough.
The problem with Ehow isn't that it is crap.. it is that it is crap made for Ehow by human scrapers, and because it makes Google money via running adsense they don't just demote the thing out of sight in the serps, like they claimed they were trying to do to "scrapers" before and with Panda..
Ehow dropped for a few weeks and now it is right back up there.. makes one wonder if one day Google might not just buy it, run it as their own "short words wikipedia" (with adsense) and then shut off adsense to many of the smaller and medium publishers that Ehow ripped their content from..
|A true sandbox is a predefined time of limited functionality. Never existed and doesn't exist. |
I call it more appropriately a "sandbox effect" since it's not a real sandbox. Maybe the sandbox doesn't actually exist, but from a black-box perspective, if something you do triggers a series of algorithms that make it appear a sandbox exists, and the same conditions produce the same effect every time, then from your POV it effectively exists. Changing the behavior that triggers those effects would obviously be the wise thing to do, but some people just don't learn from their past mistakes and simply keep causing the sandbox effect over and over and over.
I've never had it happen to any of my sites or any of my past customers either, but then again we never went wild submitting to junk directories, going on link-buying binges, cross-linking our own sites, or other bad behaviors that tend to make the SEs unhappy.
I believe there is now more of a "reverse sandbox" effect happening, meaning that Google will now give a new site a chance to rank quickly, but if it does not perform appropriately then it goes into the "sandbox".