| 8:13 pm on Mar 17, 2011 (gmt 0)|
Yes, I think you've got the idea. Sites that curate, syndicate and organize content - even for a significant part of their total web publishing - are considered a good search result if they have achieved authority. The curation and organization is considered to be adding value.
| 10:43 pm on Mar 17, 2011 (gmt 0)|
I've given ScienceDaily permission to copy my content, and just ask them to use the Google syndication metatag and link attributes. As long as I get a link back as the original source, and the writer gets the writing credit, I'm happy. I've got a similar relationship with Discovery, io9, MSNBC, and others.
I'd prefer that I rank higher in the search engines than they do, since they're syndicating my content, but I don't obsess about it.
| 11:20 pm on Mar 17, 2011 (gmt 0)|
I bet they are on the White List.
| 12:38 am on Mar 18, 2011 (gmt 0)|
My bet says exceptions for this site and many others like it are accounted for in the algo itself. Something as simple as a certain threshold of backlinks from trusted sources could do it.
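A threshold like that could be as simple as counting backlinks from a trusted set of domains. Here's a minimal sketch of the idea; the domain list, threshold, and function name are all hypothetical, not anything any search engine has confirmed:

```python
# Hypothetical sketch of an algorithmic "exception" driven by trusted backlinks.
# The trusted set and threshold are invented purely for illustration.

TRUSTED_DOMAINS = {"example.edu", "example.gov", "bignews.example.com"}
TRUST_THRESHOLD = 50  # arbitrary cutoff

def earns_algorithmic_exception(backlink_domains):
    """Return True if enough backlinks come from trusted domains."""
    trusted = sum(1 for d in backlink_domains if d in TRUSTED_DOMAINS)
    return trusted >= TRUST_THRESHOLD
```

The point is just that such a rule lives inside the algo itself: no human maintains a list, and the "exception" disappears the moment the link profile changes.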
| 1:08 am on Mar 18, 2011 (gmt 0)|
Perhaps. I would still love to see the white list. It may cover .gov and .edu, but I suspect it is more extensive and includes the NY Times etc.
| 1:14 am on Mar 18, 2011 (gmt 0)|
My wife and I were joking the other day about using one of the spam generators to post hundreds or thousands of articles onto low quality directory sites with links pointing to our competitors. LOL It is probably more worth our while to spend time generating good content for our own site.
| 1:52 am on Mar 18, 2011 (gmt 0)|
From what Google said recently [webmasterworld.com], they create an "exception log" for individual parts of the total algorithm, not one "whitelist" that works across all areas. Bing said their process is parallel, too.
| 2:18 am on Mar 18, 2011 (gmt 0)|
Yes. Personally, I see it as semantics. An exception log sounds like a whitelist to me. Also, we have no idea what individual parts it is used for - and does that matter? I don't know.
I think it would be a mistake to not have an exception log / whitelist. Like I mentioned above, someone could use a spam generator to produce low-quality links to a particular page on Wikipedia or a government site. They could game the system and push their website up.
| 2:47 am on Mar 18, 2011 (gmt 0)|
As I understand it, the difference is a lot more than just semantics. Here's what I mean.
The complete Google algorithm is made of hundreds of smaller algorithm modules. Each module measures some specific area or signal. Some of those smaller modules may generate a false positive.
When such an event comes to Google's attention, they make an entry in the exception log for just that module - that part of the algorithm. Then they work to evolve that algorithm module, so that this type of false positive is no longer generated. At that point the site is removed from that particular exception log.
A whitelist would grant a full exemption for a site, no matter what behavior is detected by any module anywhere in the algorithm. No matter what signal is detected, that site gets a "get out of jail free" card.
Both Google and Bing say they don't do anything like that.
If we knew the exact details of the link quality module or modules, then we might know better how easy it is to game it. But first, to even be in the exception log at all, a site would have to generate a verified false positive, AND that would only last as long as the algo module wasn't changed. Otherwise the algo would just flag any real positive and mark the URL involved accordingly.
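The structural difference between a per-module exception log and a blanket whitelist can be sketched in a few lines of code. This is only a toy model; every name in it is made up, and it claims nothing about how Google actually implements its modules:

```python
# Toy model of per-module exception logs vs. a global whitelist.
# All names are hypothetical and for illustration only.

class Module:
    def __init__(self, name, flag_fn):
        self.name = name
        self.flag_fn = flag_fn      # returns True if a site trips this signal
        self.exceptions = set()     # exception log for THIS module only

    def flags(self, site):
        if site in self.exceptions: # known false positive for this module
            return False
        return self.flag_fn(site)

def penalized_by(site, modules):
    # An exception in one module does not protect the site from any other
    # module - there is no "get out of jail free" card across the pipeline.
    return [m.name for m in modules if m.flags(site)]

whitespace = Module("whitespace_filter", lambda s: s == "site-a.example")
thin = Module("thin_content", lambda s: s == "site-a.example")

whitespace.exceptions.add("site-a.example")  # verified false positive, logged
print(penalized_by("site-a.example", [whitespace, thin]))  # prints ['thin_content']
```

A true whitelist would instead be a single set checked before every module runs, which is exactly what both engines say they don't have.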
| 3:38 am on Mar 18, 2011 (gmt 0)|
So you are saying that the algorithm actually removes sites from the exception list automatically? I haven't seen anyone say that. Cutts made it sound like the lists were manually input.
|we do sometimes take manual action to deal with problems like malware and copyright infringement. Like other search engines (including Microsoft's Bing), we also use exception lists when specific algorithms inadvertently impact websites, and when we believe an exception list will significantly improve search quality. |
Either way, I want to be on as many white (or exception) lists as possible in their Algo. :)
| 3:42 am on Mar 18, 2011 (gmt 0)|
It's a natural programming step. Once an algorithm no longer flags a particular situation as a false positive, then that situation no longer needs to be handled as an exception to the algo.
To be on as many exception lists as possible, you would first need to generate a lot of false positives. That means a lot of problems until someone gets around to acknowledging that your site WAS hit with a false positive. I would not pray for that situation at all.
| 4:23 am on Mar 18, 2011 (gmt 0)|
I suspect you would have to be a pretty big site to be on an exception (white) list, so I might like that. :)
They are not going to sit around and manually input sites that rank 55K or 100K in traffic. I also doubt they are going to sit around and manually input an exception list for a specific keyword for a specific site. I would expect the whitelist to be general in nature, even if it is added to one specific module in their program.
| 4:53 am on Mar 18, 2011 (gmt 0)|
I think that finding false positives is one of the reasons the Google Webmaster Help forums exist, as well as reconsideration requests. Every false positive example the engineers can spot helps them improve the algorithm for many websites.
You are, naturally, thinking about this like a site owner - and that is not how Google builds their infrastructure. Their top goal by FAR is to please a maximum number of users. How site owners are treated is a sidelight, whether that's penalties or rewards.
Search engines do not run a ranking contest for sites, checking all entrants to make sure the contest rules are being followed. That's just not the Google mindset at all.
| 6:21 am on Mar 18, 2011 (gmt 0)|
|Search engines do not run a ranking contest for sites, checking all entrants to make sure the contest rules are being followed. That's just not the Google mindset at all. |
I am not sure what you mean. Search engines index the web, sort the results and display SERPs when called upon. I think we agree there.
They don't want some guy who is not an authority to outrank another guy just because they did some SEO and got links to their site. This is what Panda is about, IMO, AND this is where I think a whitelist would be helpful - at least to Google. But in some cases I don't think it will provide the best results.
As per being a site owner - sure I am. But if you read my post I try to get into the mind of the programmers. I listen to what Cutts and others say and combine that with how I would (or they) write the subroutines to sort the SERPS.
| 7:08 am on Mar 18, 2011 (gmt 0)|
|But if you read my post I try to get into the mind of the programmers. |
The easiest way to do that is to sit down and try to do what they do ... IMO once you do, you'll understand there is a definite distinction between an 'exception' list and a 'white' list; until then you probably won't choose to accept the answer they or I give.
You say you would like to be on every 'exception list' you can? The one case I know of explicitly (noted by tedster) was a site owner whose site was 'filtered' for having too much white space at the top of the page.
Now he's on an 'exception list' for that filter. Lucky, right?
Only if you think a manual review of the site every time it trips the filter is good luck ... The site is no longer 'filtered' automatically by the piece of the algo doing that specific evaluation, but that doesn't mean the site gets a pass on the 'outbound links to spammy sites' portion of the algo, or a pass on the 'thin content' portion of the algo ... It's one specific piece that no longer filters the site from the results, and the site gets manually reviewed instead.
That means its continued rankings are subject to a 'whim', a 'reviewer having a bad day', a 'competitor doing the reviewing', a 'general dislike for the design', a 'dislike for the topic' and all the other 'personal nuances' that go into manual evaluation of a site, IN ADDITION TO the possibility of a reviewer noticing something another piece of the algo missed and manually penalizing the site ... I can just about guarantee not everyone likes every site, and being subject to a manual review where 'personality' rather than mathematics and patterns makes the decisions does not sound like a situation I would want to be in.
| 7:25 am on Mar 18, 2011 (gmt 0)|
But the interesting part is that any time we have these static overrides, we will make sure that the next iteration of the algorithm actually handles these lists. So these lists are constantly evolving.
It is not like ABC.com or any specificsite.com is always going to be on the whitelist or always going to be on the blacklist. It just evolves, basically. Because we do have a manual slash algorithmic approach to that.
Cutts: Yeah [agreeing with the preceding statement], it is also important to realize that there are many, many algorithms, maybe the majority of the algorithms, that don't have any exception logs. For example, Panda: we don't have, there is no way right now, to do any manual exceptions.
|Like other search engines (including Microsoft's Bing), we also use exception lists when specific algorithms inadvertently impact websites, and when we believe an exception list will significantly improve search quality. We don't keep a master list protecting certain sites from all changes to our algorithms. |
[edited by: TheMadScientist at 7:28 am (utc) on Mar 18, 2011]
| 7:28 am on Mar 18, 2011 (gmt 0)|
|The one case I know of explicitly (noted by tedster) was for a site owner who's site was 'filtered' for having too much white space at the top of the page. |
It sounds like you are talking about a white list that is generated by the algorithm. That is not what Cutts was talking about. He mentioned manual input.
| 7:31 am on Mar 18, 2011 (gmt 0)|
You need to read around a bit more ... What I'm referring to in my first post in this thread is John Mu manually adding an exception for a site being filtered by a specific piece of the algorithm ... You obviously haven't done much research, or thought about the idea objectively enough to determine what they actually do and why. Otherwise you would see more clearly what the engineers, the tedsters and I are trying to tell people: an 'exception list' or 'white list for a specific algorithm' is NOT a 'white list' that grants a blanket pass.
[edited by: TheMadScientist at 7:32 am (utc) on Mar 18, 2011]
| 7:31 am on Mar 18, 2011 (gmt 0)|
Mad Scientist, I read your next post. He mentioned no white list for Panda "right now".
| 7:35 am on Mar 18, 2011 (gmt 0)|
He has to qualify every statement, even if they don't ever intend to create one, because if he doesn't and they do 5 years from now, people are going to reference his statements and talk nonsense about how Google deceives webmasters ... See, Cutts said there was no whitelist or exception list for Panda, but then they went and created one ... HE HAS TO QUALIFY EVERY STATEMENT!
He doesn't say: But we're working on it OR We'll have one soon OR We think it's something we're going to do either. He made an absolutely true statement: There is not one right now.
| 7:41 am on Mar 18, 2011 (gmt 0)|
His statement about there not being one for Panda actually totally defeats the idea of a general 'white list' because if there was one, they would have a way to add exceptions to Panda, so the fact they don't means there can't be a blanket white list, otherwise there would be a way to make an exception to the Panda update...
| 7:43 am on Mar 18, 2011 (gmt 0)|
I agree with your last statement (actually it was the statement above your last). I think they will need a whitelist of some sort for Panda too. He qualified the statement like anyone else would.
As per Google deceiving webmasters - I don't think so. I think they have been very open about their algorithm.
A whitelist (exception list or whatever) has nothing to do with webmasters - it is there to help provide the best SERPs.
What their last two iterations have done, basically, is create a black list or gray list (virtual, via algorithm). Here again, it is to provide the best SERPs. Sure there are some glitches, but an exception list could help solve that problem.
[edited by: Dan01 at 7:56 am (utc) on Mar 18, 2011]
| 7:45 am on Mar 18, 2011 (gmt 0)|
|His statement about there not being one for Panda actually totally defeats the idea of a general 'white list' |
I agree, there is no general white list.
| 7:49 am on Mar 18, 2011 (gmt 0)|
Wow! You're reasonable ... COOL!
| 8:00 am on Mar 18, 2011 (gmt 0)|
Thanks for responding to my posts. The goal is to find the truth. Although I just signed up today, I have been reading your and tedster's posts for a while now, and I think this is one of the better forums, at least for me.
| 8:32 am on Mar 18, 2011 (gmt 0)|
Cool Dan01 & np ... Glad you joined and decided to post ... This is actually the only forum I bother with ... Over the years I've thought I needed to expand my horizons a bit and tried some others, but there always seems to be too much noise ... Glad you're looking to understand what they really do and why ... I think that attitude will definitely help make your sites successful.
| 9:10 am on Mar 18, 2011 (gmt 0)|
I have posted on several forums for years, but there was little feedback. This forum seems pretty busy. LOL
As per SEO, I try not to spend too much time thinking about it and more time trying to come up with better content.
As per programming an SE - true, I have never sat down and written the code for one. LOL But I have been programming since the '70s, beginning with Basic and Pascal and working forward ... HTML, PHP. But lately, content has been my focus.
| 10:01 am on Mar 18, 2011 (gmt 0)|
Well, if you've got some programming experience, don't worry about actually writing a search engine, just think about how to start processing pages if you wanted to ... Think about what you have to do to find what's important on a page and try to extract or differentiate the template from the content ... Don't even think about the code it would take to do it, just think about the process you would have to go through and what could be scored ... My guess is you'll have a better idea of exactly how things work, because the biggest difference in deciding what to do and how to score things is the code to do it ... If you can think of another way to get a better idea of what a page is about or how it applies to a couple of words someone searches for they've probably already built that into their system over the last decade or whatever they've been writing massive amounts of code for ... All they do is think and code ... Think about how much code you could write by yourself in a year, let alone if you had teams to write it and a decade to develop it.
Okay, think about the code a little bit, because that's how you get to 'multiple' algorithms ... You have to have separate processes to do certain things.
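As a concrete (and deliberately naive) example of that thought experiment, here is a toy sketch of separating template blocks from content and scoring what remains against a query. The heuristics and names are invented for illustration; a real engine's signals would be vastly more sophisticated:

```python
# Naive sketch of per-page processing: drop template-looking blocks,
# then score the remaining text against query terms.
# The heuristics here are made up for illustration only.

def extract_content(blocks):
    """Keep blocks that look like content: longer text, not navigation."""
    return [b for b in blocks if len(b.split()) > 5 and not b.startswith("nav:")]

def score(blocks, query_terms):
    """Count query-term occurrences in the extracted content."""
    text = " ".join(extract_content(blocks)).lower()
    return sum(text.count(term.lower()) for term in query_terms)

page = [
    "nav: home about contact",                      # template, discarded
    "Widgets explained: a widget is a small part",  # content, kept
    "The best widget designs share three traits",   # content, kept
]
print(score(page, ["widget"]))  # prints 3
```

Even a sketch this crude already forces the separate-process point: template extraction and query scoring are distinct steps, each of which could carry its own thresholds, and its own false positives.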
|brotherhood of LAN|
| 10:05 am on Mar 18, 2011 (gmt 0)|
|But lately, content has been my focus. |
That's the best way IMO. Programming is a helpful tool, but ultimately your end goal is to have pages your users can view, find useful, and enjoy. Search engines don't care how a site was made, either in notepad, with PHP, dreamweaver... their concern is the same as your users.
| This 31 message thread spans 2 pages |