Forum Moderators: Robert Charlton & goodroi
TITLE: [company name] - [product name]
DESCRIPTION: [product name plural] on sale today with us for the low price of [price]. Get everything you need for [product category] here at [company name].
Dynamic insertion isn't just for AdWords, you know. I find that if people see a price in the meta description, it's just as good as being in Google Product Search. Maybe even better, since your listing is typically surrounded by listings that do not show a price.
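A minimal sketch of that dynamic insertion idea, filling the [company name] / [product name] / [price] placeholders from product data. The field names here (company, name, plural, price, category) are assumptions for illustration, not from any particular e-commerce platform.

```python
def build_meta(product: dict) -> dict:
    """Fill the title/description templates from a product record,
    mirroring the [bracketed] placeholders in the example above."""
    title = f"{product['company']} - {product['name']}"
    description = (
        f"{product['plural']} on sale today with us for the low price of "
        f"${product['price']:.2f}. Get everything you need for "
        f"{product['category']} here at {product['company']}."
    )
    return {"title": title, "description": description}

# Hypothetical product record:
meta = build_meta({
    "company": "Acme Widgets",
    "name": "Blue Widget",
    "plural": "Blue Widgets",
    "price": 19.99,
    "category": "widgets",
})
print(meta["title"])  # Acme Widgets - Blue Widget
```

The price formatting (`:.2f`) is exactly the part that bites you if prices change faster than Google reindexes, as discussed below.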
And of course if the price of that product goes up and G doesn't index it in time for the click to happen, you can always lay the blame on Google. I mean hey, YOUR current description tag is accurate, right? :)
I'd be careful in this instance to avoid "back to back" duplication. It "looks" a bit unusual at times and may be a signal if other things are present. That type of "back to back" replication may go against how those elements should work in unison and in a sequence.
I agree. Using plurals, phrase switches (e.g., blue widgets vs. widgets in blue) and relevant variants is usually a better way to construct the description tag.
Weren't we all just in here a month or two ago talking about how NO description tag might be better for click-throughs, since the snippet Google takes often automatically includes the words people are searching for? Not that I ever agreed with that, but what about that whole discussion?
-Doc
Thanks,
Bilal
Although meta descriptions themselves don't directly increase rankings, my experience is that if you get a higher CTR, the Google algorithm takes this into account, and then it helps your rankings...
Don't ever give up hope! What I try to do is take "all of my variables" and lay them out on the page. From there I'll start writing around them. I need to see them flow naturally with what is written. The goal here is to use as many variables as you can. The more variables, the more unique it becomes.
If you are an e-commerce site with competitive pricing, then using dynamically generated title and description tags can be a great boon. For example:
TITLE: [company name] - [product name]
DESCRIPTION: [product name plural] on sale today with us for the low price of [price]. Get everything you need for [product category] here at [company name].
I've tried pricing in the description but I'm always concerned about pricing changes and G being slow to index the new price, so I no longer do that. I'm very picky about accurate pricing, and don't really want to put a disclaimer on the pages saying "price discrepancies may be due to G not updating our site as fast as we'd like them to". ;)
I've tried pricing in the description but I'm always concerned about pricing changes and G being slow to index the new price, so I no longer do that.
Very few "consumers" look at or rely on that cache. There is one way to nix that one...
<meta name="robots" content="noarchive">
In using that over the past year with a particular website, I've noticed some added benefits. For one, there ain't no cache there anymore for someone to go rummaging through. Also, there appears to be some correlation between using the noarchive and a somewhat "forced" look at the meta description element. With the noarchive and a nice clean meta description, I believe that is the best spidering option for the bots.
I'm very picky about accurate pricing, and don't really want to put a disclaimer on the pages saying "price discrepancies may be due to G not updating our site as fast as we'd like them to".
I sure hope people are not Shopping using Google Cache, I really don't! ;)
<meta name="robots" content="noarchive">
I sure hope people are not Shopping using Google Cache, I really don't! ;)
However, I had a bug about 6 months ago that caused a few thousand dupe metas - can you say supplemental?
Ouch.
Most of my meta descriptions are automatically generated, as the site is dynamic with oodles of pages, far more than any human could manage by hand. Each meta is created by a formula based on the particular content, and so far Google seems to be happy as a clam with those metas.
The search engines love that stuff. And the more the merrier. I mean, the more unique content per page, the merrier. Smart Programmer + Smart db = SEO to the nth degree. ;)
^ Seriously. I tell many consulting clients that they do not need to hire an SEO these days. If they have a dynamic site, the Programmers are the SEOs. They just need to follow the basic rules of engagement. ;)
However, I had a bug about 6 months ago that caused a few thousand dupe metas - can you say supplemental?
I respect that you would share your failures with us. I too have had bugs and boy oh boy, you learn quickly when they occur. Those are usually the "happen once, never again" type bugs. But, every now and then, we tend to overlook other things with this myriad of technical challenges we are faced with. SEO is Brain Surgery. I'm still in Residency. ;)
I checked the positions of some of the keywords and there was no change in them - the only change was the meta description that was showing
...it makes a big, big difference!
Also, I mentioned earlier that CTR probably influences the Google algorithm in some way - in the same way, the bounce rate (i.e., user clicks on a search result and returns to the SERP within x seconds) probably has some influence on rankings too (no hard proof though - just something that seems logical to me!)
However, I had a bug about 6 months ago that caused a few thousand dupe metas - can you say supplemental?
Can you say a bunch of duplicate titles and supplemental? :(
Can't find it, the programmer can't find it, and he is the only one who makes this particular app.
You all are making me rethink the pricing in the description again...
Can you say a bunch of duplicate titles and supplemental?
That's where the multiple variables come into play. If I were to sit down with a db developer right now, I'd have a different perspective on how to approach various elements. For example, many will use one variable and pull from a "Summary" field in the db. In your instance, if that fails, you are left with nothing. But, Google would then revert to an on page snippet so you should be safe. I have to do some checking, but I don't think an "empty" meta description would be a problem. I might be wrong. And when I say empty, I do mean content=""
I have some meta descriptions that are assembled. And in that case, the more variables the better. So, if any one of them fail, at least there are some backups to pick up some of the slack. But that would be a nasty bug to have, it really would. I'd be calling in a specialist to check that one. ;)
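A rough sketch of that "assembled from multiple variables" approach, under the assumption that each variable is a db field that may come back empty. The field values and fallback string here are hypothetical; the point is only that any one empty part doesn't blank the whole description.

```python
def assemble_description(*parts: str, fallback: str = "") -> str:
    """Join whichever parts are non-empty, so a single failed db
    field (e.g. an empty "Summary") doesn't leave the description
    blank; fall back to a site-level default if everything fails."""
    filled = [p.strip() for p in parts if p and p.strip()]
    return " ".join(filled) if filled else fallback

# "Summary" field failed, but the other variables pick up the slack:
desc = assemble_description(
    "",  # summary (empty due to the bug)
    "Hand-finished oak bookcase in three sizes.",
    "Free shipping on orders over $50.",
    fallback="Quality furniture from Acme.",
)
```

With one variable you have a single point of failure; with several, the description degrades gracefully instead of disappearing.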
But, Google would then revert to an on page snippet so you should be safe. I have to do some checking, but I don't think an "empty" meta description would be a problem. I might be wrong. And when I say empty, I do mean content=""
I've had a few people look at it and try to help, nothing to date solves it, and it is just so random. But with all the complexities involved, it could be one of many places. It's not just a straight db lookup unfortunately.
Of course there is, which is to avoid a high percentage of supplemental pages. A high percentage of supplementals will get your domain disrespected and lose you anchor text benefit, for starters. The only place it is safe to have no description is on a page that is noindexed.
I have to do some checking, but I don't think an "empty" meta description would be a problem. I might be wrong. And when I say empty, I do mean content=""
pageone - This is gut instinct only, but I've always thought this might be worse than having no meta description at all. I could argue it either way, so if you do that checking, I'd be curious what you find.
Or how about a programmed step that says > if the title element variable is empty, then use the first eight words from the body copy?
That's another very good thought. Thanks for all the suggestions. They are greatly appreciated. I'm looking into them as we speak.
But, this topic is really about "short meta tags", and I don't want to hijack or derail it.
I would also, however I've finally found ways to get my meta descriptions under control with what I have to work with.
Or how about a programmed step that says > if the title element variable is empty, then use the first eight words from the body copy?
I really hate to disagree with Tedster because he is an incredible source of some of my best advice, but I'm not sure this would be the best way to avoid such a bug. If all of a sudden your title tags go from being thematically organized to a mish-mash of potentially non-relevant terms, then you could potentially lose any (or all) keyword ranking history for said pages, or at least throw it way off.
I would suggest that in the case of an empty title tag, you write a script that analyzes the last letter of the page name (minus the extension) and assign a series of 5 or 6 arrays with all the letters of the alphabet split between them, then assign titles to those arrays that are somewhat generic but still keyword rich and structured the way the rest of your title tags are.
Then when the script reads the last letter of the page name, it will compare it to the arrays and pull one of your predefined generic title tags. This way you'll still have some variety of title tags to avoid duplicate content penalties but it will also still have some structure and consistency.
You could also extend this script to the description tags but use a different static page factor to determine those, that way you won't necessarily get the same description tag with the same title tag every time.
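A sketch of that last-letter-to-array idea, under stated assumptions: the generic titles below are hypothetical stand-ins, and I've used five buckets covering the alphabet, as described.

```python
import string

# Hypothetical generic-but-keyword-rich titles; in practice these
# would follow the same structure as the rest of the site's titles.
GENERIC_TITLES = [
    "Acme Widgets - Quality Widgets for Every Budget",
    "Acme Widgets - Blue, Red and Custom Widgets",
    "Acme Widgets - Widget Parts and Accessories",
    "Acme Widgets - Widget Deals and Specials",
    "Acme Widgets - Your Widget Superstore",
]

def generic_title(page_name: str) -> str:
    """Map the last letter of the page name (minus the extension)
    into one of len(GENERIC_TITLES) alphabet buckets, so empty-title
    pages get varied but consistent fallback titles."""
    stem = page_name.rsplit(".", 1)[0].lower()
    last = next((c for c in reversed(stem) if c in string.ascii_lowercase), "a")
    bucket = (ord(last) - ord("a")) * len(GENERIC_TITLES) // 26
    return GENERIC_TITLES[bucket]
```

As suggested, the same trick keyed on a *different* static page factor (say, the first letter, or the page name length) could pick the description, so the title/description pairing varies.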
Again, this is just my personal opinion on what I would do in this situation, and not a direct criticism of Tedster's advice :)
for some websites and their CMS, that approach might work out
Good point, as it does vary quite a lot depending on site structure and topic. I am assuming this would also require the entirety of the page content to be pulled from the CMS, correct? Since the data itself would have to be present before the page metas were constructed. In the same vein of thinking, this approach would be even more effective if the title tag defaulted to whatever was in the first <h> tag - in other words, some kind of page topic variable. Again, I guess it depends on how the variables are pulled from the database.
the biggest variable would be how often this empty title variable actually occurs
Definitely. My sites (being that 95% of the page titles are generated dynamically) have a default title and description tag based on page name if there aren't enough variables present to construct it normally, and then another site-wide default title and description if the page one fails.
Can't be too careful sometimes :)
Also, someone asked: this seems to happen on about 5% of the pages at any given time, yet there is no correlation as to which pages it happens on, or when - it's totally random. Oh, except we know for a fact it's only product pages, since that's the only place this variable is used.
I have attempted to make "small" changes, in hope of isolating the problems. One of those areas that I have tested are the WMT "Pages with short meta descriptions".
I had about 70 pages that were categorized in this bucket. In most cases, I added several insignificant prepositions to bring the meta description word count to about 10, and lo and behold, they were removed from this list.
HOWEVER, after these pages got reindexed by Google, I tumbled even further in the SERPs, without recovery. Most (not all) of these pages hadn't been updated in about 1.5 years - since December 2006 - and many of them, but not all, had positive PR.
I am now in the process of reverting my changes back to the December 2006 state for a subset of the above 70 pages.
Some questions that I am trying to answer from all of this:
1) Did changing these pages from their December 2006 state cause Google to re-evaluate them differently than if they had remained status quo, no different from their cached version?
2) Will reinstating the pages to their December 2006 state revert the state of the world back to that time, OR will these pages be treated differently (more harshly), since I am moving from a non-violated WMT "good meta description" to a violated "short meta description"?
3) Am I attempting to measure these changes while Google has been moving its target all the while, so in effect I may be wasting my time trying to analyze all of this stuff?
4) Should I have listened to my parents years ago and become a doctor?
<meta name="robots" content="noarchive">
In using that over the past year with a particular website, I've noticed some added benefits. For one, there ain't no cache there anymore for someone to go rummaging through. Also, there appears to be some correlation between using the noarchive and a somewhat "forced" look at the meta description element. With the noarchive and a nice clean meta description, I believe that is the best spidering option for the bots.
Can you elaborate on "...and a somewhat 'forced' look at the meta description element"? Do you mean that Google is more likely to update your site description with this tag? Conversely, does it mean that Googlebot DOESN'T UPDATE this kind of information as often as "actual" new content elsewhere on the page?
Thank you for a very thought-provoking thread.
Barry
While I'm thinking of technical troubles with the meta description element - keep HTML markup out of that content attribute!
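If the description is assembled from db content, a small sanitizing step helps enforce that. A sketch, assuming descriptions pass through Python on the way to the template (the function name is made up):

```python
import html
import re

def clean_meta_description(text: str) -> str:
    """Strip any stray HTML tags, collapse whitespace, and escape
    what's left so it sits safely inside content="..."."""
    no_tags = re.sub(r"<[^>]+>", " ", text)   # drop tags outright
    collapsed = " ".join(no_tags.split())     # normalize whitespace
    return html.escape(collapsed, quote=True) # escape &, <, >, "

desc = clean_meta_description('Our <b>"premium"</b> widgets & more')
# desc is now: Our &quot;premium&quot; widgets &amp; more
print(f'<meta name="description" content="{desc}">')
```

Escaping the double quotes matters most, since an unescaped `"` in the attribute value truncates the description right there.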
I prefer the HTML Validator extension for Mozilla Firefox as that requires no work at all to see if a page is valid or not.
I keep an eye open for the yellow exclamation mark or the red cross and only then do I click it to see the error list.