What I noticed is a lot do not use the exact keyword in the title, rather a combination with or without pluralisation. I also noticed a lack of <h1> and <h2> tags on many. Is it the case that repeating the page title in the <h1> or <h2> tag is considered over-optimising? Seems a bit ridiculous if it is; I thought that was the purpose of <h1> tags, i.e. a title.
Anyway, that's what I noticed. I tried taking out my <h2> tags and my site dropped a page so maybe that's not my problem.
The page I am tracking has higher PR than most of my competitors and more relevant incoming links (I live in the area and most sites are national).
Another thing I was wondering about was page names and directory depth - I'm noticing a lot of spammy page names with underscores doing well. These often match the page title.
As for directory structure - I see many root-level pages listed rather than pages in directories, particularly deeper directory structures.
Any comments/suggestions?
Much more likely the Google algo just doesn't give any extra benefit to a page beyond a certain amount of apparent on-page optimization. After that point, it comes down to off-page and off-site factors, such as PR and inbound anchor text from other sites. The problem commercial sites tend to face is legitimately getting relevant inbound links. Amateur sites tend to link rather freely with other related sites. Commercial sites tend not to want to link to the competition.
I'm noticing a lot of spammy page names with underscores doing well.
No huge significance to underscores - just part of an overall SEO process of slapping the keyword in everywhere possible.
Filename of a page certainly could be one factor in the ranking algo, but not a hugely significant one. Although that said, a secondary benefit of keyword filenames is that if anyone links to you using your URL, rather than link text, then you have the keywords there.
Underscores mean the link text is "your_keyword.htm" instead of "yourkeyword.htm".
Personally, I used underscored keyword filenames long before I found out about SEO - it was just an easier way to recognise the files when I was messing with my site! ;)
Scott
I doubt that an over optimization penalty exists
Marketing Guy - my point about KWs in the page name was it seems to get higher importance in the ranking than I'd have thought (or maybe not!)
The point is well taken that there are sites of all shapes and sizes that do fine with apparent 'over optimization'; especially sites with gross repetition of kw's. Our view is that sites with one or more of these traits typically do well by virtue of having other on-page and off-page factors that offset their 'over optimization.'
Is there an OOP? Put it this way. You *can* get a page killed in the SERPs for something as simple as too much kw density. We've tested it repeatedly. And you can get it *unkilled* - or at least you could in the past - by fixing the kw density. In its simplest form, that's what we think of when the 'over optimization' question comes up.
What I can also say is that we've identified, via testing, a rather lengthy list of techniques that, taken too far, or made too repetitive, will sink a page. Indeed, some will sink a *site.* And this, IMHO, is now very related to what people call the Sandbox.
For our purposes, it helps greatly to view the entire mess as algo(s) + filters. You pass muster, or you don't. Unfortunately now, especially for new sites, one increasingly well-known way to pass muster is to take several elements of OO to the extreme. However, I'm sure that G is watching these examples, and fine-tuning their 'anti-new-sites' algo tweaks as we speak.
Now granted, if G used this formula for every ranking they would get into a world of trouble, but if they used it selectively, where certain factors may kick in LSI to different degrees, it might be an explanation as to why some sites seemingly do well with keyword-stuffed pages & others rank #1 for a page practically vacant of that KW.
A main part of an informative site is a huge list in alphabetical order. For the convenience of visitors it is split up into three html files (A to L, etc). Now an SEO problem is my need to incorporate a few different, constant keywords in a table header, plus the different spellings for a major keyword. I hope Google would, in multiword searches, combine some selected keywords with the contents of any of my lists. The intention is not to get all three pages listed separately in the same SERP.
Can I try to solve the above problem by using different Meta Title Elements for each of these three files? Maybe even use synonyms for keywords there? Or is this something my site is likely to get penalized for by Google?
I could also have small variations in the text of the h1 or h2 headlines of three pages, as seen in the browser window, but that may be going too far, because normally such headlines of a split list would be identical.
I think Google wants to see slow link development and densities in the 10-15% range. Too many links and too much density are most likely triggering flags right now.
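For what it's worth, "keyword density" here usually just means the share of a page's words taken up by the target phrase. There's no official definition - tools count multi-word phrases differently - but a minimal sketch of the common calculation looks like this (the 10-15% figure above is one poster's opinion, not a documented threshold):

```python
import re

def keyword_density(text, keyword):
    """Rough keyword density: percent of words belonging to the phrase.

    This sketch tokenises on letters/digits/apostrophes and counts each
    occurrence of the full phrase; real SEO tools may count differently.
    """
    words = re.findall(r"[a-z0-9']+", text.lower())
    total = len(words)
    if total == 0:
        return 0.0
    kw_words = keyword.lower().split()
    n = len(kw_words)
    # Slide a window of phrase length over the word list and count matches.
    hits = sum(1 for i in range(total - n + 1) if words[i:i + n] == kw_words)
    return 100.0 * hits * n / total

# 2 occurrences x 2 words = 4 of 8 words, i.e. 50% density
print(keyword_density("blue widgets are the best blue widgets around", "blue widgets"))
```

By that measure, a page in the 10-15% band is repeating the phrase roughly once every seven to ten words, which gives a sense of how aggressive that range actually is.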
The funny thing is, if you have other keywords that are ranking well, those don't seem to be affected by the OOPs you might receive for ROS links on specific phrases. With that said, it seems you can over-optimize for one word, get a penalty, but any other words you are optimizing for might not be affected.
As to whether or not it exists, I don't know if anyone has given proof. But I am a believer in it. I lean more towards the idea of going too far with your main keyword for a page (too much identical anchor text, words in title, h1 tags, high KW density, etc - all for a specific keyword).
Just putting the keywords in the title, h1 tags, etc. is standard for web design, so it is very unlikely that including them in each of those areas would ever have a negative effect.
Do things in moderation and try to mimic what is natural on the web.