dusky - 6:33 pm on May 7, 2010 (gmt 0) [edited by: tedster at 7:00 pm (utc) on May 7, 2010]
dusky: Well laid out! Thx... I also noticed that keyword-in-URL is currently very strong for ranking. Do you see that too?
(Slightly off topic, but related to the problems this practice can cause, and to the "Google SEO News and Discussion" parent forum.)
I don't have enough data to comment on that yet. While we're at it, there are two camps, one for and one against the practice: many have reported a healthy advantage on the SERPs over the years, but many have also reported problems, and I'm in the latter camp. As I see it (and I ran a large site built that way for three years), unless you find a truly reliable way of eradicating duplicate content and any possibility of it, rankings will deteriorate over time. One of the problems I faced was implementing rewrite rules and somehow leaving holes that let competitors, spammers, and even SE bots fill in yoursite.com/[any-keyword-or-phrase-here-]$theright-id.html: whatever is inside the brackets, as long as the ID is correct the URL will still resolve with a 200 header, and that is duplicate content.
Incidentally, when I replied to Brett_Tabke here [webmasterworld.com...] GBot was hammering one of our large sites, which had a keyword-in-the-url-12345.html structure but had been reverted to /product-1234.html. Gbot was testing bogus URLs, checking whether a 404 error is returned when it fetches a non-existent URL. Of course it found one (even after we reverted to the simple structure), which I spotted just in time and corrected; otherwise millions of phantom URLs would have been served with a 200 header.
Problems like that are usually associated with dynamic CMS / forum software URLs: most people want to shorten them, or add keyword advantage and readability, and end up inadvertently causing duplicate content and/or opening the door to DDoS-style crawling. Note that this problem is only present in certain CMSes, open source as well as commercial software, mainly where URLs are pulled from the database.
I know I've probably opened a can of worms for many, but a rewrite rule usually fixes the problem by 301-redirecting to the one intended URL. If you are using rewrite rules to shorten URLs or make them look like static HTML pages, there are pitfalls you may not know about (I didn't before), especially with certain CMSes and open source / commercial forum software.
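To make the fix concrete, here is a minimal sketch of the "301 to the one intended URL" logic in Python. This is not any particular CMS's code: the lookup table (PRODUCTS) and the resolve() function are hypothetical, and on a real site this would live in the CMS routing layer or a rewrite rule plus handler. The point is the three outcomes: canonical slug serves 200, a correct ID with the wrong keyword 301s to the canonical URL, and anything else 404s so bogus URLs can't resolve.

```python
import re

# Hypothetical canonical slugs keyed by numeric product ID.
PRODUCTS = {2345: "widgets"}

URL_RE = re.compile(r"^/(?P<slug>[a-z0-9-]+)-(?P<id>\d+)\.html$")

def resolve(path):
    """Return (status, location) for a requested path.

    200 -> slug matches the canonical one, serve the page
    301 -> ID exists but slug is wrong, redirect to the one intended URL
    404 -> unknown ID or malformed path, so phantom URLs never get a 200
    """
    m = URL_RE.match(path)
    if not m:
        return (404, None)
    pid = int(m.group("id"))
    canonical = PRODUCTS.get(pid)
    if canonical is None:
        return (404, None)
    if m.group("slug") != canonical:
        return (301, "/%s-%d.html" % (canonical, pid))
    return (200, None)

print(resolve("/widgets-2345.html"))           # canonical URL: (200, None)
print(resolve("/any-keyword-here-2345.html"))  # wrong keyword: 301 to canonical
print(resolve("/widgets-9999.html"))           # unknown ID: (404, None)
```

Without the 301 branch (i.e. serving 200 whenever the ID resolves, regardless of the keyword part) you get exactly the duplicate-content hole described above.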
To see if you have that problem, try appending something like ?q=something to one of your URLs, e.g. yoursite.com/your-keyword-widget-2345.html?q=something or yoursite.com/widgets-2345.html?q=something. If the URL resolves to the same page, i.e. you get the page instead of a 404 error, imagine someone sending a bot to hit yoursite.com/widgets-2345.html?q=dictionary-word. That's what Gbot was testing for; I caught it red-handed, but in a way it helped me spot the problem, so thanks again G*. To put it in a different perspective: if your URL can be dynamically changed to another URL which does not reflect the actual keyword you programmed for it, and it still resolves with a 200 header, you've got a problem.
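If you want to run that check yourself, a small helper like the one below builds the probe variants for a given canonical URL (the appended junk query string, and the same ID with a bogus keyword swapped in). The function name and the bogus word are my own; the fetching step (curl, urllib, whatever you prefer) is left out, this only builds the URLs to try. Anything in the list that comes back as a plain 200 instead of a 301 or 404 is a duplicate-content hole.

```python
import re

def probe_variants(url, bogus_word="dictionary-word"):
    """Build test URLs that should NOT return a plain 200 on a healthy site."""
    variants = [url + "?q=something"]  # same page with a junk query string?
    # Swap the keyword part of slug-1234.html style URLs for a bogus word.
    swapped = re.sub(r"/[a-z0-9-]+-(\d+)\.html$",
                     r"/%s-\1.html" % bogus_word, url)
    if swapped != url:
        variants.append(swapped)
    return variants

for u in probe_variants("http://yoursite.com/widgets-2345.html"):
    print(u)
```

Feed each printed URL to your fetcher of choice and check the status codes; this is essentially what Gbot was doing when it probed for bogus URLs.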
My troubles started back in mid-February. Anyone see something similar, or earlier than that?
For the majority it seems to have started on or after mid-February; read the posts way back and you'll see more drops like this since that date. The question, or the answer, here may mislead people: G* seems to be moving data in chunks, hence some sites suffered drops on or after Feb because they were part of that particular chunk. Sites that are unaffected are, I believe, going to suffer the same fate, FOR THE BEST I believe, mostly if they are white hat and have good content.