1. For the listing pages, we need to include an ID number (#*$!XX) for architecture purposes. Here is what we're torn between:
I'm leaning towards the second option. I don't like using parameters in the URL, and I don't want to associate the page another layer off the root if I can avoid it.
2. There are multiple detail pages within this listing. So our options are:
Which do you think would be better?
3. On the category page, the URL will be:
But there is unavoidable pagination involved, so we're not sure if we should do either of the following:
Suggestions greatly appreciated - thanks!
don't want to associate the page another layer off the root
I would prefer example.com/category/city-state/2 (to avoid the ?)
Just make sure that page 2 is inter-linked, same as page 1.
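To make that interlinking concrete, here's a minimal sketch in plain Python (the base URL and helper names are hypothetical, not from the original post). It builds both rel hints and visible numbered links for the extensionless /category/city-state/2 scheme, with page 1 living at the bare category URL:

```python
def pagination_links(base, page, total_pages):
    """Build inter-page links for a paginated category page.

    base: the extensionless category URL, e.g.
          "https://example.com/category/city-state"
    Page 1 lives at the base URL itself; page N lives at base + "/N".
    """
    def url_for(n):
        return base if n == 1 else f"{base}/{n}"

    # rel prev/next hints (optional; search engines treat these loosely)
    rel_links = []
    if page > 1:
        rel_links.append(f'<link rel="prev" href="{url_for(page - 1)}">')
    if page < total_pages:
        rel_links.append(f'<link rel="next" href="{url_for(page + 1)}">')

    # Visible numbered anchors, so every page is reachable by a normal crawl
    nav = " ".join(
        f'<a href="{url_for(n)}">{n}</a>' for n in range(1, total_pages + 1)
    )
    return rel_links, nav
```

The visible anchors are the part that matters most here: they are what keep page 2 inter-linked with page 1 the same way page 1 is linked from elsewhere.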
No file extension, short and sweet. Eliminate all extraneous identifiers from the string, and be sure the string is consistent. If you move to lower levels in the taxonomy, the string will most likely get appended like a breadcrumb.
It looks like you may end up with some long URIs based on your original post. I'd be focusing on trimming those back as much as you can. Shorter URIs are much easier to work with no matter how you view it. :)
Any concern that putting the ID as a folder pulls it another level off the root? Will that have an adverse impact on rankings?
Not really unless of course click path is affected. It is not the directory depth that one needs to be concerned with but "how many clicks" it takes to get to the final destination. Shorter click paths equal better performing pages in most instances.
example.com/something/something is the same level as example.com/something.html
No. The first one is at the first sub directory level. Your second example is at the root. They are not the same. But, this and this "usually" are...
What you see above is referred to as Content Negotiation where extensions are removed from the URI strings. There is no need for those to be visible or indexed.
It gets pretty tricky when working with this type of structure and you have to be very strict in your naming conventions for scalability. You also need to make sure that all permutations of the URI return the proper server headers. I usually walk backwards through the URI (hack it) to see what gets returned in the headers...
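Walking backwards through a URI ("hacking" it) and checking the headers each permutation returns is easy to script. A sketch using only Python's stdlib urllib; the example URL is a placeholder, and the header check obviously needs a live site:

```python
import urllib.error
import urllib.request
from urllib.parse import urlsplit

def uri_ancestors(url):
    """Hack a URI backwards: return the URL plus every parent path up to the root.

    e.g. https://example.com/us/ca/los-angeles ->
         [.../us/ca/los-angeles, .../us/ca, .../us, https://example.com/]
    """
    parts = urlsplit(url)
    segments = [s for s in parts.path.split("/") if s]
    urls = []
    for i in range(len(segments), 0, -1):
        urls.append(f"{parts.scheme}://{parts.netloc}/" + "/".join(segments[:i]))
    urls.append(f"{parts.scheme}://{parts.netloc}/")
    return urls

def check_headers(url):
    """Record the status code each ancestor URI answers with (requires network)."""
    results = {}
    for u in uri_ancestors(url):
        req = urllib.request.Request(u, method="HEAD")
        try:
            with urllib.request.urlopen(req) as resp:
                results[u] = resp.status
        except urllib.error.HTTPError as e:
            results[u] = e.code
    return results
```

Every permutation should answer with the header you intend: 200 where a real page lives, 301 where you canonicalize, 404 where nothing should resolve. Anything unexpected in that list is a leak in the naming convention.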
So it appears that you are setting up a directory that is global in nature, and you are doing a regional-type taxonomy? After dabbling in that space for many years, I've learned some things. And with the search engines becoming as smart as they have, I've changed my strategies in some areas. For example, I don't want long keyword-laden URIs anymore. I "know" that Google can determine that California and CA are one and the same given the taxonomy of the website. I also "know" that Google can determine that US is the United States.
Knowing the above, I might look at a URI structure like this...
I see you are trying to get the company name in the URI? That is going to cause scalability issues moving forward. You are surely going to have two companies with the same name. And yes, I see that you are blending other parameters with the company name to negate this. I wouldn't do it that way. I'd give each company a unique ID and use that moving forward. That way there is "never" a chance for duplication and it scales nicely.
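The unique-ID approach can be sketched like this (the `CompanyDirectory` name and in-memory store are hypothetical, purely for illustration; a real build would back this with a database sequence):

```python
import itertools

class CompanyDirectory:
    """Assign each company a permanent numeric ID and build its URI from that.

    The name never enters the URL, so two "Acme Plumbing" entries can never
    collide, a rename never breaks a link, and a long company name never
    bloats the URI.
    """
    def __init__(self):
        self._ids = itertools.count(1)
        self.companies = {}

    def add(self, name):
        cid = next(self._ids)
        self.companies[cid] = name
        return cid

    def uri(self, cid):
        return f"/company/{cid}"
```

Two companies with identical names simply get different IDs, and the company-name branding question goes away because the name was never in the string to begin with.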
You may also find yourself receiving requests to remove company names from URI strings. I've been faced with that in some instances. It is a branding issue for them, and if you start appearing in top results for company-name searches, you have to tread lightly. Removing the company-name reference from the URI covers your bases in this area. You have plenty of other ways to target company-name searches that are more subtle than stuffing the URI. Also note that some company names are very long. That is only going to add to the unmanageability of the URIs moving forward.
Now, if you are dealing with US only, I'd go this route...
And, I'd be sure that I was serving a Table of Contents (Sitemap) at each sub directory level that was marked as noindex. I just want the bot to follow those TOC links so that it can traverse deeper into the taxonomy.
There's a bit more to this but I think you can get a feel for where I'm going with it. Be sure to have a master Sitemap that links to all the sub level Sitemaps. Keep everything connected in a logical sequence. Take control of the bots and provide them with direction. Don't just let them come in and start bouncing all over the place. That would not be an optimal indexing.
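The master-Sitemap-linking-sub-Sitemaps idea maps onto the standard sitemap index format. A sketch using Python's stdlib ElementTree (the URLs are placeholders); the HTML Table of Contents pages themselves would separately carry a `noindex,follow` robots meta tag, per the advice above:

```python
import xml.etree.ElementTree as ET

SITEMAP_NS = "http://www.sitemaps.org/schemas/sitemap/0.9"

def sitemap_index(sub_sitemap_urls):
    """Build a master sitemap (sitemap index) pointing at each sub-level
    sitemap, handing the bot one connected, logical entry point."""
    root = ET.Element("sitemapindex", xmlns=SITEMAP_NS)
    for url in sub_sitemap_urls:
        sm = ET.SubElement(root, "sitemap")
        ET.SubElement(sm, "loc").text = url
    return ET.tostring(root, encoding="unicode")
```

With each subdirectory level serving its own sitemap and the index tying them together, the bot is directed down the taxonomy in sequence instead of bouncing around it.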