My observation so far: little change this month from last. Anchor text of inbound links still counts big time, and PR seems to be worth the same as before. IOW, it's the same old, same old. One aspect that isn't relevant to the SERPs I am most familiar with is "spamminess". I don't see much more spam, but then these SERPs don't tend to be the ones that spammers would target. Thus, the index may be more spammy and I just wouldn't see it.
Good luck everyone.
I counted 6 sites that had hidden links and 1 site that had hidden text keywords. I found 2 sites that are mirrors of each other.
I am not sure what the difficulty is in finding hidden elements and filtering them out in the algorithm. I guess duplicated content that is not linked together may be harder to catch.
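As a rough illustration of how cheap the simplest such check could be (a toy sketch, not anything Google has published; the helper name and old-style markup are mine):

```python
# Toy check for one kind of hidden text: a font color that matches the
# page background. A real crawler would also need to handle CSS rules,
# tiny font sizes, off-screen positioning, and so on.
import re

def has_hidden_text(html: str) -> bool:
    """True if any <font> color equals the <body> bgcolor."""
    body = re.search(r'<body[^>]*bgcolor="(#?\w+)"', html, re.IGNORECASE)
    if not body:
        return False
    bg = body.group(1).lower()
    colors = re.findall(r'<font[^>]*color="(#?\w+)"', html, re.IGNORECASE)
    return any(c.lower() == bg for c in colors)

page = '<body bgcolor="#ffffff"><font color="#ffffff">keyword keyword</font></body>'
print(has_hidden_text(page))  # True
```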
I have a possible explanation for this. One of the mods here pointed out that in this index Google has a lot more total pages than in the last one. If these are things like bulletin board pages, etc., which would have low PR, this might tend to drag down the PR of ALL higher-PR pages, so no relative advantage or disadvantage there. However, the link: command only shows PR4-or-higher pages. Thus, total backlinks may be the same, but the link: command shows fewer because some fell beneath PR4.
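If it helps, here's a toy sketch of that dilution argument (the graph and the 0.05 "display cutoff" are invented; the real link: threshold, if there is one, is unknown):

```python
# Toy model of the dilution idea: PageRank is normalized to sum to 1
# over the whole index, so flooding the index with many low-value pages
# pulls every existing page's score down even though its inbound links
# are unchanged.
def pagerank(links, d=0.85, iters=50):
    pages = set(links) | {p for tgts in links.values() for p in tgts}
    pr = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iters):
        new = {p: (1 - d) / len(pages) for p in pages}
        for src, tgts in links.items():
            if tgts:
                share = d * pr[src] / len(tgts)
                for t in tgts:
                    new[t] += share
        pr = new
    return pr

# Small web: three pages link to "home", and home links back to them.
web = {"a": ["home"], "b": ["home"], "c": ["home"], "home": ["a", "b", "c"]}

# Same web plus 50 new low-value pages linking only among themselves.
web2 = dict(web)
for i in range(50):
    web2[f"board{i}"] = [f"board{(i + 1) % 50}"]

cutoff = 0.05  # pretend link: only reports backlinks above this score
for label, scores in (("before", pagerank(web)), ("after", pagerank(web2))):
    shown = sum(1 for p in ("a", "b", "c") if scores[p] >= cutoff)
    print(f"{label}: home still has 3 backlinks, link: would show {shown}")
```

On this made-up graph the backlink count never changes, but the reported count drops from 3 to 0 once the index is padded out, which is the pattern described above.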
Seems the keyword density filters are less permissive: the pages where I increased the density last month don't show anymore for the terms they are targeting.
So: fewer internal reciprocal links, a very strict hierarchical structure, and lower keyword density, I guess.
PS. I don't mean to whine, but these are the only results I know enough about to post on.
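For anyone measuring the same thing, "density" in these posts just means occurrences per 100 words of text; a quick helper (my own definition, not an official one):

```python
# Quick helper to measure the "keyword density" being discussed here:
# occurrences of a term per 100 words of visible text.
def keyword_density(text: str, keyword: str) -> float:
    words = text.lower().split()
    if not words:
        return 0.0
    return 100.0 * words.count(keyword.lower()) / len(words)

sample = "widgets are great and our widgets ship fast " * 10
print(f"{keyword_density(sample, 'widgets'):.1f} mentions per 100 words")  # 25.0
```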
The site that lost so much has been around since 1996, and I wonder if something has changed with regard to old sites or old links. I remember Brett wondering if old sites had an advantage. Perhaps this is correcting for that. The lost backlinks don't seem to have hurt the site in the SERPs.
At first I thought the difference was fewer internal backlinks showing, but that doesn't seem to be the case with the newer sites.
I also notice more Amazon and Kmart item pages, etc... very deep links surfacing to higher rankings. Maybe PR is more important?
And I am seeing some very aggressive spam in my SERPs, like I've never seen before, and they seem to be very successful at it. 10+ domains, interlinking, etc... I call it spam because some of these domains have 8 keywords in them and super-micro text links.
Oh well :) Google has to modify this algo sooner or later.
Makes it seem so 'official'...hm, seriously, I'm not going to bother checking or looking at anything for a few days.
No need to rush, the update just started - as many have said, things change.
Google traffic up 20% so far on the few sites I bothered to check.
Go Google. :)
Does anybody know what a 'pagerank' is? And how to calculate one?
Is it the normalized eigenvector of the link matrix of the web, or something else? And if the normalization is skewed to one degree or another, say, favoring sites of a certain 'brand X widget', then perhaps that would be the 'snippet' of analysis I should share?
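Roughly, yes: in the original Brin and Page formulation, PageRank is the dominant eigenvector of a damped, column-stochastic link matrix, normalized to sum to 1, and it is usually computed by power iteration. A minimal sketch on a made-up 3-page web:

```python
# PageRank as the dominant eigenvector of the damped link matrix (the
# "Google matrix"), found by power iteration. The 3-page web is made up.
import numpy as np

d = 0.85  # damping factor from the original paper
# Column-stochastic link matrix: M[i, j] = 1/outdegree(j) if j links to i.
M = np.array([[0.0, 0.0, 1.0],
              [0.5, 0.0, 0.0],
              [0.5, 1.0, 0.0]])
n = M.shape[0]
G = d * M + (1 - d) / n * np.ones((n, n))  # damped "Google matrix"

v = np.ones(n) / n          # start from the uniform distribution
for _ in range(100):
    v = G @ v               # eigenvalue is 1, so no rescaling is needed
print(v, v.sum())           # the PageRank vector, normalized to sum to 1
```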
No whining or cheering, huh?
<- I think that makes for a 'non fun' thread.
I guess I'll finish by saying: yay team, go Google, woohoo, Google rocks, party on, etc. ;)
From what I can see, this update (www2.google.com) has not yet taken account of the new DMOZ data.
Without this key data being included in the results on www2 (I am assuming this is one reason for the delay in the update), you cannot really see the impact.
At the moment all you really can tell is what pages are in and what are out.
Well, "serious anecdotal evidence discussion" is a bit of a mouthful. More accurate perhaps but people prefer those buzzwords. :)
Okay, I'll be serious. It's too early yet to determine anything about the new update. Making factual observations based on shifting data is the equivalent of trying to step on the same spot of water twice. Isolating variables using static data is difficult; with shifting data it is fruitless.
Making authoritative statements based on incomplete analysis is the equivalent of prevaricating. To each his own, I'm going to spend my time matching snowflake patterns.
From what I can see this update (www2.google.com) has not yet taken account of the new DMOZ data.
I have one site that I anticipated would appear in the Google Web Directory (taken from DMOZ) for the first time with this update. It does appear on www2, but the directory link does not go to a newer version of the directory itself, and the site is not listed there. So my conclusion is the opposite: www2 has taken it into account, but the new Google Web Directory has yet to be updated.
It's too early yet to determine anything about the new update.
Don't know if others can verify, but this experience is based on three sites in very different topic areas.
I noticed such a change maybe half a year ago, and it's much the same now. One year ago these older pages were nearly 90% of the total SERPs for a keyword, half a year ago about 60%, and now it's about 30%, with those pages pushed much further back. Maybe this is related to the algo described in the new patent [webmasterworld.com]. I don't think it's 100% implemented, but maybe they are trying to do something like it right now.
Indexing of dynamic websites
PHP indexing, or indexing of dynamic pages: I can state no change. Well, for my very new sites I switched completely to SE-friendly URLs, so for the 'new in the index' sites I can't draw any conclusion; this is based only on the older ones.
IMHO, what I'd call spam has been reduced a lot. My example comes up for book titles. Lately there was a lot of spamming from people's businesses and reseller websites; this has been solved. I get much more topic-related info instead of a direct link into a shop. Well done!
I hope Google will go even further into this and internationalize the Froogle section. This might help separate selling interests from information interests and would provide a better web search.
Important algo factors for me include:
1. Quality inbound links (both on-theme and off-theme)
2. Anchor text in those links
3. Good title tag and meta description
4. H1 and H2 tags play a big part
5. I mention my keyword approx. 7-10 times per 100 words of text.
6. Absolutely no Flash on the front page or any other page. Nothing kills you faster than Flash.
7. Clean code: no useless code, no junk code.
8. I have a cardinal rule against putting any image on the front page, and it is probably the most significant factor for me. Nothing "weighs" you down more than images. I understand that for a lot of people this is not possible, but in my opinion, I am glad my competitors love to use 27 images on the front page.
9. Related to that: download in 3-4 seconds max (see the sketch after this list).
10. Add fresh content every month. Write one or two more pages if that is all you can do.
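For point 9, a quick way to sanity-check page weight against a dial-up budget (3-4 seconds at ~56 kbit/s works out to roughly 20-30 KB; the URL and line speed below are placeholders):

```python
# Rough page-weight check against the 3-4 second budget in point 9.
# Note this fetches the HTML alone, not images, so it understates the
# real download time for image-heavy pages.
import urllib.request

def download_seconds(url: str, kbits_per_sec: float = 56.0) -> float:
    data = urllib.request.urlopen(url).read()
    return len(data) * 8 / (kbits_per_sec * 1000)

print(f"{download_seconds('http://www.example.com/'):.1f}s for the HTML alone")
```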
Summary: With those algo principles and a Keep It Simple, Stupid approach to web design, I kick butt. Working with medical sites as I do, I am glad that all my competitors who are clinics and doctors are vain, with self-inflated egos, and love to have big, fancy, image-laden sites with Flash on the front page and other bells and whistles, because it just kills them in the SERPs. They would need massive links and high PR to overcome those types of designs, which work against them.