
Google SEO News and Discussion Forum

Filters exist - the Sandbox doesn't. How to build Trust.
Understanding factors that restore and maintain results
Whitey




msg:3110530
 4:30 am on Oct 6, 2006 (gmt 0)

A lot of discussions tend to focus on the generality of a "Sandbox", but it has long since been debunked as a useful term by Matt Cutts and many senior forum members. So I propose the Sandbox is dead :)

What does exist, are filters.

What opposes those filters are good techniques and "trust" - one good member recently referred to it as "TrustRank".

Understanding what these main filters are for, how Google applies them, and Google's observed behaviour in releasing them would be a good way for owners to better manage and refine their organic search techniques.

Maybe our good friends in the community could select a topic or several that they have some solid experience and authority in, and support it in a format that can be easily referenced. The most recent one has been largely contributed to by g1smd. Allow me to paraphrase [ and please correct me ] an example of how I think this would flow:

Duplicate Content Filter - incorrect linking

Applied: when internal links point to "/index.htm" or "/default.htm" when they should all point to "/"

Effect: unlikely to be indexed, badly suppressed results, PR applied to the wrong or duplicate pages.

Time to restore: 2-3 months from when the fix is applied

Evidence: WebmasterWorld webmaster reports
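
For anyone who wants to check their own pages for this, here is a rough Python sketch (nothing official, and the filenames are just common examples) that flags internal links pointing at an index/default file instead of the directory root:

from html.parser import HTMLParser
from urllib.parse import urlparse

# Filenames that should normally be linked as "/" instead
# (common examples only; adjust for your own setup)
INDEX_NAMES = ("index.htm", "index.html", "default.htm", "default.html")

class LinkCollector(HTMLParser):
    """Collects href values from <a> tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def find_noncanonical_links(html_text):
    """Return hrefs that end in an index/default filename."""
    parser = LinkCollector()
    parser.feed(html_text)
    flagged = []
    for href in parser.links:
        path = urlparse(href).path.lower()
        if path in INDEX_NAMES or any(path.endswith("/" + name) for name in INDEX_NAMES):
            flagged.append(href)
    return flagged

sample = '<a href="/index.htm">Home</a> <a href="/about/">About</a>'
print(find_noncanonical_links(sample))  # ['/index.htm']

Feed it the HTML of each page; anything it flags should be changed (or 301 redirected) to the "/" form.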

Duplicate Content Filter - Meta Data

Applied: when meta descriptions and titles are too similar

Effect: results go supplemental and are generally suppressed

Time to restore: a matter of days, over the next few crawls
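
To get a feel for "too similar" on your own site, you could compare titles and descriptions pairwise with a quick Python sketch like this (the page data and the 0.9 threshold are made up for illustration, not known Google values):

from difflib import SequenceMatcher
from itertools import combinations

# Hypothetical page data: URL -> (title, meta description)
pages = {
    "/widgets/red.htm":  ("Red Widgets - Acme", "Buy quality widgets from Acme."),
    "/widgets/blue.htm": ("Blue Widgets - Acme", "Buy quality widgets from Acme."),
}

SIMILARITY_THRESHOLD = 0.9  # arbitrary guess at "too similar", not a known Google value

def similarity(a, b):
    """Ratio between 0.0 and 1.0 of how alike two strings are."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

for (url1, (t1, d1)), (url2, (t2, d2)) in combinations(pages.items(), 2):
    if similarity(t1, t2) >= SIMILARITY_THRESHOLD:
        print(f"Titles nearly identical: {url1} vs {url2}")
    if similarity(d1, d2) >= SIMILARITY_THRESHOLD:
        print(f"Descriptions nearly identical: {url1} vs {url2}")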

How many other filters have you observed, what are their effects, what have you done to fix the problem, and how long did recovery take?

 

steveb




msg:3111537
 9:36 pm on Oct 6, 2006 (gmt 0)

"Sandbox is a loose word created to cover all the unknowns relating to many types of filters."

No it isn't. Do we really need another silly thread redefining that stupid word?

The sandbox is a new site filter, the details of which aren't important here.

The sandbox hasn't the slightest thing to do with this other stuff. Jeez, it's like blaming Nixon for everything.

glengara




msg:3111544
 9:44 pm on Oct 6, 2006 (gmt 0)

IMO, "trust" is solely based on outgoing links.

walkman




msg:3111561
 10:03 pm on Oct 6, 2006 (gmt 0)

>> The sandbox is a new site filter, the details of which aren't important here.

Since you're so sure, prove it! Not even Google uses that word. Maybe new sites get caught in the "filters" much more than others, making this seem like a new-site thing; I know it affects OLD sites as well. Sites that have been online since 1997, too.

ashear




msg:3111573
 10:08 pm on Oct 6, 2006 (gmt 0)

This post sounds familiar. ;-)

Whitey




msg:3111591
 10:28 pm on Oct 6, 2006 (gmt 0)

The sandbox is a new site filter

Agreed here:

Originally, it was intended to describe sites that didn't rank in the results on release

The definition's old hat - agreed. But filter effects still exist.

What we need are facts that folks can use to determine the filters being applied to their sites, with a technical and validation foundation, irrespective of the point in a site's lifecycle at which the filters appear [ new or mature sites ].

A new site may trip one filter or several, and some may recur throughout the life of a website.

I don't mean to be focusing or arguing over "words", "titles" or "badges" - sorry if it seems that way. How to identify and fix the filters - this is my focus - I think folks need to know this.

Any help and involvement would be gratefully received :)

This post sounds familiar. ;-)

Let's break the habit of arguments and lack of substantiated comments [ I mean it nicely ], and focus on the filters.

Put your hands up if you are willing to do some tests on some of these filters over at [webmasterworld.com...]

and hopefully the results can be applied back here.

May I suggest that anyone with test results could contribute greatly to their fellow WebmasterWorld members.

And may you become an esteemed member, much loved by your colleagues forever :)

[edited by: Whitey at 10:55 pm (utc) on Oct. 6, 2006]

annej




msg:3111723
 1:53 am on Oct 7, 2006 (gmt 0)

IMO, "trust" is solely based on outgoing links

This is another interesting idea that is new to me. I'd like to hear from others about this one, as well as the suggestion that cheap hosting could trigger a filter.


walkman




msg:3111734
 2:18 am on Oct 7, 2006 (gmt 0)

>> IMO, "trust" is solely based on outgoing links

So if I link to NASA, MIT, Google, Yahoo, the White House and the Senate, I will get good trust? I don't think it is that easy :)

annej




msg:3111757
 2:43 am on Oct 7, 2006 (gmt 0)

I agree but wonder if it could be a factor if found along with other trustworthy indicators.

steveb




msg:3111774
 3:02 am on Oct 7, 2006 (gmt 0)

"since you're so sure, prove it!"

Snore, same goofy nonsense years later.

That stupid term has been a pox on webmaster forums since it "won" and became an arm-waving term here.

Google has lots of filters. One makes it easier to rank pages on established domains than on new ones. They've got a bunch of other ones too. Get over it.

oldpro




msg:3111807
 4:38 am on Oct 7, 2006 (gmt 0)


If my memory serves me correctly, the term "in the sandbox" originally applied to any site (established or new) that dropped in the rankings or out of the index altogether. The term has snowballed into a vague catch-all phrase that has lost any definitional value. It is time to hammer the final nail into its coffin.

I have never had a new site land in the "sandbox". On the other hand, I have experienced drops in rank with established sites due to changes in the algo.

Yes, Google has filters, as do Yahoo and MSN, but Google did not invent the sandbox filter... we did.

Moncao




msg:3111886
 7:47 am on Oct 7, 2006 (gmt 0)

SEOPTI, you said "Next possible filter: excessive use of keywords in folder names Example: /great-cool-best-mega/thatsmine.html"

What about all the blog sites that automatically put the (long) title of a blog post into the file name? I know you said "folder name", but I wanted to see if you are being precise and sure of your facts. There are blogs that create their articles as folder names too, and I thought Google was so in love with blogs that it would do nothing to hurt any of them.

This, BTW, is what I see and think when comparing forums with Google's lost love for many of them. Certain forums that produce an individual HTML page for each posting / reply (rather than a threaded forum) are suddenly unloved by Google (unindexed and PR0), much like a link exchange page is treated. I am not talking about vastly spammed forums (which may attract another kind of filter), as many forums are; I am talking about well managed, spam-free forums with unique titles, descriptions, etc. based on the post made by a real person, which used to quite rightly get indexed and appear for fairly long, unique search queries, and are now toast. I have such a forum myself and have seen the same on other forums of this type. My domain is old and trusted and has good PR / rankings on its other (non-message) pages too, so it seems Google has a filter that affects the individual messages on the site. FYI, I also only keep messages for 12 months before archiving them, as things change and old information no longer applies, so I don't have an ever-growing message base. I perform good housekeeping: when I get rid of archives (archived namedate.html for the month index and /namedate/* for the messages on that page - both therefore stay the same for their 12-month archive), I put a "gone" for those in the htaccess file. I even have a custom "page not found" page.

I am convinced Google filters out this forum style's individual HTML messages despite everything I have done and do, so now I am looking into replacing / changing the forum scripts with something that does "threaded" discussions like WebmasterWorld. BTW, threaded messages are less appropriate for my forum's subjects, as the threads tend to be very short, but the number of new messages (comparatively / percentage-wise) is much greater than, say, here. I am annoyed because I feel I am being forced to change something that works well and is wanted by the users (I set up a separate Invision threaded forum and blog a few months back, before this happened, asked registered users of my forums which they preferred, the existing or the new, and got a resounding "old"), but now I find myself having to change and inflict this on the users because Google has cheated on forums and run off with some young blog! Anyway, my anger aside, I am certain the messages are no longer indexed and / or given PR because of a filter, despite the fact that my forums date back to 1998.
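
A quick way to sanity-check that kind of housekeeping is a short script that confirms each retired archive URL really answers 410 Gone rather than 404 or 200. A minimal Python sketch, with placeholder URLs:

import urllib.error
import urllib.request

# Placeholder URLs for archive pages that have been retired
retired_urls = [
    "http://www.example.com/archive/name2005-01.html",
    "http://www.example.com/name2005-01/0001.html",
]

for url in retired_urls:
    try:
        with urllib.request.urlopen(url) as response:
            # A 2xx answer means the "gone" rule isn't working
            print(f"{url}: unexpected {response.status}, expected 410 Gone")
    except urllib.error.HTTPError as err:
        verdict = "OK" if err.code == 410 else "check this"
        print(f"{url}: {err.code} ({verdict})")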

toothake




msg:3111898
 8:13 am on Oct 7, 2006 (gmt 0)

"Blackhats knew about "trust" long before anyone else. "
100% right: plus they keep expanding their sites, in big-money terms, with success.

Another issue on the main topic: someone mentioned before that trust is based upon links. Yes and no - what kind of links? Links from the BBC or CNN, or links from a million junk sites?

You can build a small travel site that ranks at #1-#3 for one keyword (a place name) and even ranks over wikicr$%^p. You just need a few quality links from closely related sites, more than 1000 words of original content on every page, and a few one-way outbound links to other sites with related content that will help your audience, and there you are.

Sites that I made a year ago have ranked at the top since last January. It certainly looks like my formula works, and the sandbox applies only to sites that trigger a red light.

toothake




msg:3111901
 8:20 am on Oct 7, 2006 (gmt 0)

"a few outbound one way links to other sites with relative content that will help your audience"
My personal advise ,do not affraid to give one way links to relative sites that help more the end user ,do not affraid that you will spread your PR to competitors ,I suggest you should check the site of our fellow member europeforvisitors and you will understand why he is succesful.

Erku




msg:3112214
 4:53 pm on Oct 7, 2006 (gmt 0)

"Duplicate Content Filter - Meta Data
Applied: when meta descriptions and titles are too similar "

What is the evidence for this?

If the page title and the meta description are too similar, is that bad?

g1smd




msg:3112224
 5:02 pm on Oct 7, 2006 (gmt 0)

>> What is the evidence of it? <<

That, over there [threadwatch.org].

Erku




msg:3112257
 5:17 pm on Oct 7, 2006 (gmt 0)

"He suggested no more than 5000 pages per week"

What about large news sites such as CNN and the BBC that may do more than 5000 per week?

Also, what about user-generated social network sites that may do more than 5000 per week?

walkman




msg:3112278
 5:23 pm on Oct 7, 2006 (gmt 0)

>> What about large news sites such as CNN and the BBC that may do more than 5000 per week? Also, what about user-generated social network sites that may do more than 5000 per week?

They have been doing that for many years, so they keep adding at the same rate of increase (more or less). Problems arise when, all of a sudden, your 50-page non-commercial site adds some 127,000 new commercial pages overnight.

Erku




msg:3112281
 5:27 pm on Oct 7, 2006 (gmt 0)

It talks more about the meta description. How about if one uses the title as the meta keywords?

BigDave




msg:3112398
 8:06 pm on Oct 7, 2006 (gmt 0)

Changing anything too drastically will cause the search engines to look at you differently, and that includes your rate of change.

There is a fair amount of slack allowed, such as a page-a-day blogger having some days of 20 pages, but 200 pages becomes suspicious. A site that adds 100 pages a day would have no problem with a day that jumps to 200, because that is still within a range that does not seem suspicious.

You can even turn your page-a-week site into a 200-page-a-day site by working your way up over a few months. But you also need to understand that the frequency of site updates will be considered when categorizing your site. I'm not saying it will be better or worse, just different.
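
Nobody outside Google knows the real thresholds, but the general idea - flag a day that is far outside the site's own historical publishing rate - can be sketched in a few lines of Python (the 50x multiplier is purely illustrative, not a known value):

from statistics import mean

def looks_suspicious(daily_new_pages, spike_factor=50, min_history=7):
    """Flag the most recent day if it is far above the site's own average.

    daily_new_pages: page counts per day, oldest first.
    spike_factor: purely illustrative multiplier, not a known Google value.
    """
    if len(daily_new_pages) <= min_history:
        return False  # not enough history to judge
    history, today = daily_new_pages[:-1], daily_new_pages[-1]
    return today > mean(history) * spike_factor

# A page-a-day blogger posting 20 pages one day: within the slack described above
print(looks_suspicious([1] * 30 + [20]))     # False
# The same blogger posting 200 pages in a day: flagged
print(looks_suspicious([1] * 30 + [200]))    # True
# A 100-page-a-day site jumping to 200: still within its own range
print(looks_suspicious([100] * 30 + [200]))  # False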

annej




msg:3112496
 10:22 pm on Oct 7, 2006 (gmt 0)

In an earlier discussion on this topic someone suggested you build trust by regularly adding to your site. A huge site that just pops up out there and then never changes is probably less trustworthy.

I don't know if this is something we have evidence of or if it's just speculation. Hopefully someone can clarify this.

europeforvisitors




msg:3112505
 10:31 pm on Oct 7, 2006 (gmt 0)

In an earlier discussion on this topic someone suggested you build trust by regularly adding to your site. A huge site that just pops up out there and then never changes is probably less trustworthy.

I'm skeptical of that, because some highly trustworthy sites wouldn't be expected to change very often (e.g., the collected sonnets of William Shakespeare or lists of Catholic saints).

walkman




msg:3112509
 10:39 pm on Oct 7, 2006 (gmt 0)

Maybe not directly, but assuming the content is good, you'd get more links as you add more content (to your new content, or because people notice you more as a hub), and thus your TrustRank increases.

jd01




msg:3112546
 11:37 pm on Oct 7, 2006 (gmt 0)

Do we all have to use the same word(s)?
I code a little, maybe I can clear things up:

$SANDBOX === $FILTER(s)

Hope this helps.

Whitey




msg:3112932
 10:39 am on Oct 8, 2006 (gmt 0)

"He suggested no more than 5000 pages per week"
What about the large news sites such as CNN and BBC that may do more than 5000 per week

Well-known sites with very useful information, from Google's viewpoint... they don't fit Google's profile of potential spam sites, and if sites like this fall into a filter, they'll have someone permit them back into the results, you'd think.

I believe Matt was referring to lower-profile sites.

photopassjapan




msg:3113082
 3:28 pm on Oct 8, 2006 (gmt 0)

All right, that was me who said you'd better build your site gradually. I don't have any proof of this apart from our own site as the example.

Here's the complete case study... "the story of launching our site" by photopassjapan. I beg the mods to delete this if its length or non-techie enthusiasm makes it more of a laughingstock than a help.

The situation:

- We register the domain name. The site isn't ready, so it's kept offline.
- Google says you shouldn't upload something that isn't complete, right? :P
- The domain gets spidered through a #*$!raper site...
- You can imagine. If there's an absolute 0 for trustrank, I think this is it.
- U.S. server, with a single admin page under our domain that advertises the software the host uses as its web interface.

...

- Under these circumstances... we upload a site... with 7000+ static htmls.
- With duplicate problems. Unique content, unique title on all pages... these are album pages for photos. However, the meta description is only unique per section, not for each and every HTML file.
- And finally... at first we even mentioned ( why not, although we knew we weren't close to serving that quality )... we mentioned a word indicating we would be willing to sell/license. ( We don't, but we thought if someone wanted us to, why not. Stupid move. )

----
Let me note that at this point we knew nothing about the fact that we had been spidered, or that we were uploading a site with potential dupe content problems. The only gut feeling we had was: "Why pose as if the *** were for sale when they're not, really? Just for that one-in-a-million chance? What are we, artists?"
----

Result:

- The site got indexed gradually within two weeks.
- It was in the index, but didn't come up for relevant searches.
- Okay, so the "primary index" doesn't list us, but the "secondary"? I mean, for a completely random term that is nowhere else on the net but on our site, with our pages indexed, it still wouldn't show... so...
- The site has no G-trust... nada... nil... shinyou zero. :(
- It's liked by the people who knew about it... for its content, but...
- Looking back, we could only get it known by means G won't recognize as an indication of trust. Like members-only communities, friends, interested people, word of mouth. A few backlinks from here and there, and we were very happy :)
- But... even with adding backlinks, nothing happened.
- We figured, and now I KNOW, that the BLs were there but held back. Not from displaying in the link: operator, but from functioning as a vote for us. We figured... and now this seems to be the case... they were held back for a definite time because of the reasons G didn't trust our domain.

Once we knew something was wrong:

- Tried to contact Google to at least neutralize its stance.
- Nothing... heh. Would you have guessed?
- Asked around what to do about this PR0 thing. I mean, we didn't call it "lack of trust", but having a decided PR0 pre-launch and a cached irrelevant admin page certainly indicated something might be wrong by then :P
- Nothing... either no one had this kind of problem, or sitting it out wasn't a big enough deal to whine about. I regret that we did. I mean whine.

- Removed all the references and words that we had felt uneasy about from the start. We thought that if they stayed, the site would have to compete with industries we don't belong to, which are heavily over-represented on the net. Not a chance of being seen for what we really want to be seen for... ever.

- Came here and asked around about what the causes... or the "other" causes... of ranking so badly could be. I mean, we're doing something, and in a way no one has really done before on the net... a narrow but rather unique way. All pages indexed. What's wrong?
- Duplicate issues and canonical issues came up.
- Fixed them... ( thanks g1smd again )
- Didn't have high hopes anymore that taking care of any single such parameter would fix things, but... they all need to be dealt with anyway.

So, the cause and effect theory, chronologically:

- Site gets spidered through #*$!raper. ( May )
- Uploading the first version of the site ( 12th of June )
- Noticed admin page in the G cache, PR0. ( same day )
- People visit the site from everywhere but G, and they like it
- Some links already point to it, relevant and trusted alike ( June )
- No BL update, no PR update throughout June, July, Aug... not for us.
- Adding new pages we would have liked to anyways...
- Being intrigued, we check why we don't deserve an update. And notice the first thing on this list in the logs. Our first visitor, and the referring source.
- Ask around what could be done ( July, Aug )
- Removing all references, words, phrases we don't like, don't feel to describe us well ( or make us look like a commercial site / competition to a commercial site... well, whatever. )
- Thus uploading all 7000+ html pages again ;)
- They get indexed =)
- Nothing changes :I
- Adding new pages we would have liked to anyways...
- Adding a sitemap to Google SiteMaps and even Yahoo.
- But nothing really.
- Site is getting a dozen more people. Wow. ;)
- Coming here and ask around. ( Sept )
- Dupe content problem mentioned. Not yet fixed!
- NO other problems. Except of course the facts already mentioned.
- Before it got fixed, we noticed that the first BLs started to show up for link: ( 12th of Sept... exactly 3 months after the site was uploaded... is this a coincidence? I don't think so. )
- On the 15th of Sept everyone had something to say on here though.
- Almost 50 people from G a day -.-
- Getting rid of phpBB because no one visits the site anyway.
- More importantly, its pages were all over our results... as opposed to our content... gnn... so off it went. First a disallow in robots.txt ( nothing happened, not even supplemental ), then the DELETE button ( OK, now it's supplemental ), with a new index and a redirect. Even later, everything was deleted. But the 404 page is customised and looks real nice. 404, not 301. ( historic supplemental, i.e. invisible )
- Duplicate content issue fixed gradually.
- Canonical issues discussed... those get fixed right away.
- Pages indexed with their new, unique, stupid meta description... which seems to count more than the 3-4 months spent making the site and all its pages unique... start showing up individually for the site: operator.
- :)
- Indexing becomes more and more dependent on site structure... before, it was as if GBot was reading it in alphabetical order...
- SERPs start picking us up for terms our site is more relevant for than others, and which are not that $$$ competitive... which is nice, but still nowhere for the most relevant things.
- PR update ... er... should I say TBPR update starting from 27th of Sept and still going on.
- TBPR is now 5.
- Some BLs are showing, some don't.
- We haven't noticed any change in traffic as of yet; the real PR change must have happened ( for us at least ) months ago. Still not that visible, but at least all searches are relevant! I mean, some people who arrive from G look through ten photos, some look through a hundred... they seem to like the site.
- Since the meta description issue was solved, we see more and more pages updated in the index each day. First it was only about 40, the subpages. Then slowly but steadily it rose to about a hundred. Within days it was 400.
- Pages we added after Aug do not have a PR. The rest are divided as per site structure.

Conclusion.

There was no SINGLE thing that got us out of trouble.
The sandbox effect, in my opinion, consists of as many parameters as there are factors in ranking... and it fades away gradually, not in a single turn. Thus it's not a filter, but rather filters multiplying each other's effect. At least that's what I think.

Relevancy issues, trust, backlinks, age of domain, dupe content issues... they all add up and multiply the length of not ranking at all.

Our experience was that there just wasn't an exact date when this stopped, and there won't be a date when it stops being an issue for us... none of the above-mentioned big turning points brought a visible bang in traffic. Yet traffic has been slowly but steadily crawling up... or at least that's what I can tell by examining these very low numbers :P

The fact that exactly three months after the new domain root index was cached, I saw BLs updated in the link: operator... the fact that eight times as many pages are indexed properly as two weeks ago... the fact that we have now had TBPR 5, not 0, for a week and a half...

Neither date shows a turn in the stats.
But the stats do show an increase in general.

Filters(?) we might have been tripping

- Had the site not been new, the link from the #*$!raper ( only one at that time ) might not have hurt it. Don't ask, we know. We checked into this with other sites, older and newer... everything seems to indicate this is right. It may be wrong though.

- Had the site been online before it was spidered, it wouldn't have been cached with a completely irrelevant page in its place the VERY first time G ever saw it.

- Why this is important: I think that if G spiders you with content that is competitive AND irrelevant to the site, that's an instant flag. Either way... SiteMaps is proof to me that G keeps at least a top 20 of terms for each URL that it treats as its advisory board on what the page is relevant to, regardless of other sites, based only on its content.
- In our case, it was either that the new index.html for the domain root, you know, the ONLY page indexed at that time... was replaced with an irrelevant page ( as in: the originally spidered "admin panel" was not filtered out, but when it was replaced with another, 0% relevant page, THAT triggered something... regardless of the fact that the new page was meant to be the one and only )... or that the page indexed on our domain as the first impression was already irrelevant compared to the domain name and even the #*$!raper link.

- If the site was not so low on trust, adding the 7000 pages may still have been too much. I don't know.

- If you have unintentionally competitive keywords and phrases on the site, you'll be considered a "new contestant wannabe". Do a search on some very general phrases...

You'll see the SERPs cut off at 300 results, saying the rest are "very similar", with umpteen million results filtered out, leaving only a handful of sites. The rest won't show up even if you click the omitted-results page. Indeed, there is a big league, in which you won't compete until you graduate from the university team matches ;)

So there may be different thresholds for different key phrases.
Based on just how many sites of great trust, age, relevancy compete in the area. Or based on G's experiences on how competitive an area is, just by looking at AdWords data?

- Duplicate content may have hurt the indexing and the SERPs alike from the very start. The indexing is now visibly better. Traffic hasn't seen much of a change as of yet, but who knows, maybe this was one obstacle of many.

Okay, I think I've added another meaningless, long post which I won't get any reactions to, and I'll feel bad about having typed it for an entire hour :P

Lucky for me i have time on my side.
If you couldn't tell from the looks of our web site, it is made for and by us, not anyone else, so we have no one but ourselves to report to.

:)

[edited by: tedster at 4:03 pm (utc) on Oct. 8, 2006]

BigDave




msg:3113161
 4:52 pm on Oct 8, 2006 (gmt 0)

photopassjapan, I think your problem was simply that your site was new.

Everyone seems to expect instant ranking of their site, and it sometimes happens, but it was never guaranteed, not even pre-"sandbox".

microlinx




msg:3113196
 5:24 pm on Oct 8, 2006 (gmt 0)

Here's an odd filter... maybe it should be called the "Dufus Duplicate Filter":
I was editing my home page, so naturally, before I started, I copied and pasted my index.html page, which MSFP named copy_index(1).html.
I mistakenly published the site with the dup in place.

Well, within 24 hours, one of my top listings for that page was gone!
It was an honest mistake, but would Googlebot actually find this, consider it duplicate content, and then slam me that fast?

My site is about 7 years old. My top competitor uses the same keywords and description for every page of his site, has a rock-solid position, and seems like he can do no wrong.

Does Google use certain sites as a keystone or reference point?

To make this stranger yet, if I search using a Google Dance tool, the index page still shows as #1 across three data centers. Searching Google, however, shows nothing. Can someone explain why?
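
For what it's worth, that kind of accidental dupe is easy to catch before publishing: hash every file in your local copy of the site and report exact duplicates. A rough Python sketch (the folder name is a placeholder):

import hashlib
from collections import defaultdict
from pathlib import Path

def find_duplicate_files(site_root):
    """Group HTML files under site_root by content hash; return groups of exact duplicates."""
    by_hash = defaultdict(list)
    for path in Path(site_root).rglob("*.htm*"):
        digest = hashlib.md5(path.read_bytes()).hexdigest()
        by_hash[digest].append(path)
    return [paths for paths in by_hash.values() if len(paths) > 1]

# "local_site_copy" is a placeholder for the local folder you publish from
for group in find_duplicate_files("local_site_copy"):
    print("Identical files:", ", ".join(str(p) for p in group))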

photopassjapan




msg:3113238
 6:03 pm on Oct 8, 2006 (gmt 0)

My site was... new...
Okay, thanks.
Problem solved then, I guess. ;)

:P

You may be right; anyway, I don't know.
This thread isn't about my site, but about the filters.

If you mean that my site had one major factor overruling all else, i.e. that the site being new makes our experiences with the filters relatively useless... that's open to question. It may be so, I have no idea, but compared with my previous experiences, I'm not sure that's the case.

It may look like it though, but...
this isn't my first ever site. ;)

I had other sites, all of which I built to the best of my knowledge, just like this one, and I never had this happen. The last one was made in 2004, so the only thing in your comment I know I can't agree with is the "pre-sandbox" remark; the rest may very well be so. There have been some major changes since then which I guess I should have known about, but I'm not a webmaster. I come here for advice, actually :P

The sites never took more than about 40 days to catch up to their average potential before.

All were personal sites, so always in different areas. One was even in an area waaaaay worse in terms of competition. Only one though, for on the whole I like to make sites that offer something people would like to see/know/make use of but can't find on the net. I can't stand competition, for I don't have the budget, nerves, or proper off-site SEO knowledge, apart from... common sense.

...

That's why I know that if you build a good site, with no errors ( webmastering or SEO errors, that is ), it does rank well within a month or two, not three to six. Well, of course not for the competitive searches, but for those to which it's unique. The rest... I mean how well it ranks for big-time keyphrases, well, yeah. It may be more difficult now, more than ever. But I wasn't really aiming for that.

My point was that it may be that our "trustrank", or the 7000+ pages all at once, or the duplicate content issues... or more precisely all of this together... kept us, and is still somewhat keeping us, from being seen. Not for competitive searches, but for the rest. Took care of all of them though, one by one.

You know, we're talking about Google here; on MSN and Yahoo it ranks better. Even some very neat keywords reached the top 20. Only no one uses them on MSN or Yahoo :P
Even we only checked because we couldn't work out what was wrong.

There are things I didn't account for, because I didn't know about them. We built these pages in accordance with past experiences that had worked well before... and from what I've learned since launch, these few but fearsome things could have been the "filters" resulting in much slower progress in going public.

I don't know. These are all "maybe"s, as every single theory on filters is, right? These are just our experiences, summed up.

But if the only problem was that it was a new site, well then...

- You can add 7000+ pages at once
- You can be found for relevant searches with a TBPR of 0
- You can get links from others to count as a vote for relevancy, with zero trust from G towards you
- ...Unless... it's a startup. Startups can't.

Or... ( sounds more accurate to me ):

- You can add 7000+ pages at once, but you will be subject to attention from the algo that filters spam based on relevancy, trust, link structure, whatnot.
- You can be found for relevant searches with a TBPR of 0, since TBPR isn't necessarily an accurate reflection of your PR and is only a single factor; perhaps the other factors, your inbound links, have already promoted you as the most trustworthy TBPR 0 page for relevant keyphrases... bypassing less relevant, less trustworthy but TBPR 19+ sites.
- You can get links from others to count as a vote for relevancy, even with zero trust from G towards you, because that's how you regain the trust...
- ...BUT if you're a new site, you have a huge handicap in all of these, which may result in a longer period of evaluating the new pages than on a well-established site.

I think we can agree on the second being closer to what we experience.
We may not know for sure though.

I guess you meant our BIGGEST problem was that it was a new site. Yeah, and I said as much in my previous post... but... our next biggest were probably... in no particular order... too many pages out of the blue, competitive wording, TBPR 0 because only a single low-trust link pointed to us when we were spidered for the first time, a sudden change in relevant words when that page was overwritten with the actual site, and dupe content. These, I believe, multiply each other's effect.

My previously launched sites didn't have these problems:

You can't get backlinks to vote for you if you have no trust. ( You can but it's slower )
You can't get trust without backlinks. ( You can but it's slower )
You can't get relevant with no trust and no backlinks. ( You can but it's slower )
Your data won't be updated as often as others' inc. trust, BLs, PR... and so on.

I'm not an Alice Cooper fan but this sounds familiar ;)

microlinx




msg:3113390
 9:57 pm on Oct 8, 2006 (gmt 0)

I'm going to ask this one again:

How is it possible for three Google data centers (checked using a Google Dance tool) to show my site in the #1 position for a specific search term, and yet, searching Google from my local PC, I don't show up at all?

g1smd




msg:3113417
 10:22 pm on Oct 8, 2006 (gmt 0)

Try accessing the results using the gfe-xx.google.com name rather than using the (sort-of, but not quite) equivalent IP address.

Lorel




msg:3113420
 10:25 pm on Oct 8, 2006 (gmt 0)

Tedster said:


Another proposed filter: excessive use of a keyword in internal anchor text.
Effect: domain is depressed on searches for that keyword.
Time to restore: soon after the next few crawls (the first time). But if there is a second "offense", then only a more gradual, stepwise recovery.

I agree with this. I hadn't ranked for a major keyword on my site for years (not above 1000 in Google); I had apparently used the keyword too many times in internal anchor text. Recently I've been so busy that I purposely destroyed my keyword density for that term so my rank would go down and I could have some time off. Instead it brought my rank up to #35 for that term within a few weeks - which I didn't want. But it does show the page was over-optimized, because removing those extra keyword links brought the rank up.
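
One crude way to spot this on your own pages is to tally the internal anchor texts and see what share of them contain the keyword. A Python sketch along those lines (the sample HTML and any "acceptable" percentage are just illustrations, not known limits):

from html.parser import HTMLParser

class AnchorTextCollector(HTMLParser):
    """Collects the visible text inside internal <a> links."""
    def __init__(self):
        super().__init__()
        self._inside_link = False
        self.anchor_texts = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href") or ""
            if not href.startswith(("http://", "https://")):  # treat non-absolute links as internal
                self._inside_link = True
                self.anchor_texts.append("")

    def handle_data(self, data):
        if self._inside_link:
            self.anchor_texts[-1] += data

    def handle_endtag(self, tag):
        if tag == "a":
            self._inside_link = False

def keyword_share(html_text, keyword):
    """Fraction of internal anchor texts that contain the keyword."""
    collector = AnchorTextCollector()
    collector.feed(html_text)
    texts = [t.strip().lower() for t in collector.anchor_texts if t.strip()]
    if not texts:
        return 0.0
    return sum(keyword.lower() in t for t in texts) / len(texts)

sample = '<a href="/widgets/">widgets</a> <a href="/blue/">blue widgets</a> <a href="/about/">about us</a>'
print(f"{keyword_share(sample, 'widgets'):.0%} of internal anchors contain the keyword")  # 67%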
