I don't know if they are affected in the long term though, as I see some old, established URLs with query strings ranking well for competitive terms. It definitely takes them longer to get there though.
I think this makes sense, as many sites that use dynamic URLs still cannot be crawled logically, and if Google were to treat them the same they would tie up a lot of resources.
If you decide to use dynamic pages (i.e., the URL contains a "?" character), be aware that not every search engine spider crawls dynamic pages as well as static pages. It helps to keep the parameters short and the number of them few.
Don't use "&id=" as a parameter in your URLs, as we don't include these pages in our index.
From here: [google.com...]
We're able to index dynamically generated pages. However, because our web crawler could overwhelm and crash sites that serve dynamic content, we limit the number of dynamic pages we index.
Also, you can help us find your dynamic URLs by submitting them to Google Sitemaps.
From here: [google.com...]
To answer your question, Google doesn't penalize, it just chooses to be more careful with dynamic URLs.
You cannot make a 1000-product shopping cart with plain HTML alone (hmm, maybe a few people can!).
mod_rewrite causes errors in the long run, so I avoid it unless specifically asked for. mod_rewrite is a manipulation anyway.
I have many sites that are spidered well, even with an &id parameter, after the Jagger update, so I feel that issue may also have been taken care of at the Googleplex by now.
I take 'need' as a rule of thumb for any project: the decision between HTML and dynamic pages should be made according to project need, not search engines' ability to spider them.
indexed more slowly (or cautiously). Links from them are not followed as readily.
That's been my experience as well. There simply are more (what can we call them?) "safety routines" that the crawling software must run to keep a dynamic crawl out of trouble -- and this can have a very noticeable effect.
You cannot make a 1000-product shopping cart with plain HTML alone.
Well, yes you can. But even better, you can write a database-driven application that generates REALLY, TRULY static pages and gives each one the file name you choose. Do you really need a database lookup for every single page request on an e-commerce site? Maybe, in some specialized cases, but not most of the time.
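The idea above can be sketched in a few lines. This is a minimal illustration, not anyone's actual application: the product list, template, and file names below are invented, and a real site would pull the products from its database instead of a hard-coded list.

```python
# Sketch: pre-generate truly static HTML pages from product data, so each
# product gets a clean, chosen file name instead of a ?id= query string.
# The products, template, and output directory are invented for illustration.
import os

products = [
    {"slug": "blue-widget", "name": "Blue Widget", "price": "9.99"},
    {"slug": "red-gadget", "name": "Red Gadget", "price": "14.50"},
]

TEMPLATE = """<html><head><title>{name}</title></head>
<body><h1>{name}</h1><p>Price: ${price}</p></body></html>"""

def generate(outdir="site"):
    """Write one static .html file per product; return the file names."""
    os.makedirs(outdir, exist_ok=True)
    names = []
    for p in products:
        # e.g. site/blue-widget.html instead of /product.php?id=17
        name = p["slug"] + ".html"
        with open(os.path.join(outdir, name), "w") as f:
            f.write(TEMPLATE.format(name=p["name"], price=p["price"]))
        names.append(name)
    return names

print(generate())
```

Re-run the generator whenever the database changes, and the web server only ever serves plain files -- no per-request lookup at all.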
But, if you have the patience for googlebot to eventually crawl everything, and you don't need to make frequent content updates to existing pages, then you probably will do fine -- eventually -- with regular dynamic URLs.
However, here are some logical drawbacks to them:
1. When a user sees a dynamic URL, they are bound to believe it is not as pertinent as a well-thought-out HTML page. This has a few effects: a) the user is less likely to click on it from a search engine result; b) users are less likely to link to it because of its "unstable" nature.
2. In coding a dynamic website you are far more prone to mixing up the order of variables. This will make search engines think those are two different URLs. You may also then be splitting up any incoming links (half will link to it one way, and the other half another way), which I believe will reduce the importance search engines give it.
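One way to avoid the split-URL problem in point 2 is to normalize parameter order before emitting any link, so the same page always has one canonical URL. A minimal sketch (the example URLs and parameter names are made up):

```python
# Sketch: sort query-string parameters so the same page always has one
# canonical URL. The example URLs and parameter names are invented.
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def canonicalize(url):
    """Return the URL with its query parameters sorted by name."""
    parts = urlsplit(url)
    # ?item=5&cat=2 and ?cat=2&item=5 both become cat=2&item=5
    params = sorted(parse_qsl(parts.query))
    return urlunsplit((parts.scheme, parts.netloc, parts.path,
                       urlencode(params), parts.fragment))

a = canonicalize("http://example.com/shop?item=5&cat=2")
b = canonicalize("http://example.com/shop?cat=2&item=5")
print(a == b)  # prints True -- both normalize to the same URL
```

If every internal link goes through a helper like this, incoming links and crawlers only ever see one form of each URL.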
I believe the "id" rule is ridiculous. Almost all database entries for articles or products or anything end in an "id". It is an unreasonable hack.
When a user sees a dynamic URL, they are bound to believe it is not as pertinent as a well-thought-out HTML page
We are not in 1995 anymore. Most of the content on the web is dynamically generated, except maybe personal pages.
Someone mentioned that you can write code so that a database-driven website is displayed as truly static pages. What's the difference? It's only technical, a way to make Google "think" that your website is static. mod_rewrite can do that.
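For the sake of concreteness, a mod_rewrite rule along these lines can present a database-driven page under a static-looking URL. This is a hedged sketch: the paths, script name, and parameter name below are invented, not from any post in this thread.

```apache
# Hypothetical example: serve /product/blue-widget.html from the real
# dynamic script product.php?slug=blue-widget (all names invented here).
RewriteEngine On
RewriteRule ^product/([a-z0-9-]+)\.html$ /product.php?slug=$1 [L]
```

The page is still generated from the database on every request; only the URL the visitor (and the crawler) sees has changed.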
We speak of this only for the sake of search engines. There is absolutely no reason to think that a piece of information is more relevant because it comes straight from a manually written HTML page instead of a database. The only concern is for search engines to be able to find the information. The guidelines that Google provides mix technical concerns with anti-spam and relevancy concerns. They give you guidelines to make your pages clear and concise, which is in my opinion the only thing you should care about in an ideal world. And they also tell you not to use ID as a variable... what's the point? Is a variable named FOO more relevant than one named ID? And does the name say anything about the stability of the content?
You may argue that technical concerns are very important for getting indexed in Google, and indeed they will determine whether you are indexed or not. But doesn't Google tell you, on the same page, to build your website for users, not for search engines? That contradicts some of their other advice. Because if I build only for my users, maybe search engines won't crawl my website at all, and the users who might be interested in my information will never find me. For example, I might build a website in Flash for users who are 16 years old and want things to move. Then I am thinking of my users. But I will never be found. And I cannot cloak either, even to help Google know what I'm talking about in the content that it cannot see.
The point is, Google increasingly adapts to reality, and will continue to do so. It began to index dynamic URLs with lots of variables because those URLs display information that you cannot overlook if you want to be the reference for finding information. I think that we, as webmasters, can work to help Google, but Google also has to make an effort to index the information users want to find. And it cannot judge whether this or that type of page is interesting; only users can. If users are interested in pages that come from a database, Google will fail if it doesn't let you find them. That's why it does let you find them.
Google isn't doing us a favor when they index our pages. It's their job, and they make money from it. We, as the internet community, give them a lucrative gift by making them the nerve center of the web. We can therefore hope that they will increasingly adapt to the reality of 2006: Flash content, dynamic websites, etc.