Forum Moderators: phranque

Load site from subdirectory depending on domain


danielkun

11:07 am on Aug 14, 2009 (gmt 0)

10+ Year Member



Hello,

I'd like to host 2 domains on a webserver that only allows for one primary domain, although it is possible to register several domains pointing to the primary one.

In the future I'm planning on upgrading my account so that I can host multiple domains, but for now I'd like to solve this quickly (as the website isn't finished yet) so that the domain gets registered with search engines. (I'd like to put up a "coming soon" page.)

Basically I'd like to do this:

RewriteCond %{HTTP_HOST} ^www\.example\.com [NC]
RewriteRule /subdir%{REQUEST_URI} [L]

But with the above, the URL is rewritten to my <primary domain>/subdir. How do I keep the URL looking like www.example.com?

I hope someone can help.

Regards,
Daniel

jdMorgan

3:46 pm on Aug 14, 2009 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



The rule, as posted, is invalid. It is missing either the URL-path-pattern to be matched, or the destination filepath.
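
For reference, the general form from the Apache documentation requires both parts, plus optional flags:

```apache
# General form (see the Apache mod_rewrite documentation):
# RewriteRule  URL-path-pattern  substitution-filepath  [flags]
```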

Also, where is this code intended to be placed -- in .htaccess or in a server config file?

Jim

danielkun

1:45 pm on Aug 15, 2009 (gmt 0)

10+ Year Member



Thank you Jim,

I assume I'm missing the URL-path-pattern to match.

I'm not really good at this, and though I tried reading the Apache manual, I'm not quite sure what to match, since I want to keep the REQUEST_URI as it is and load the files from a subdirectory.

For example,

[temporarily_domain.com...]
[temporarily_domain.com...]
[temporarily_domain.com...]

will load files from the directories below:

/subdir
/subdir/index.html
/subdir/some_dir/index.html

The code is to be placed in the .htaccess file of the primary domain's root directory.

Daniel

jdMorgan

2:05 pm on Aug 15, 2009 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



In that case, you want to match *any* requested URL-path, and 'copy' it into the server filepath.

In .htaccess, you will also need to check that this requested path has not *already* been re-written to the secondary-domain subdirectory, otherwise you'll get an 'infinite' rewriting loop, with each pass through the loop adding yet another copy of the secondary-domain's subdirectory path-part -- e.g. /subdir/subdir/.../subdir/filename


# Rewrite secondary domain requests to subdirectory
RewriteCond %{HTTP_HOST} ^www\.example\.com [NC]
# Prevent recursive rewriting
RewriteCond $1 !^subdir/
RewriteRule ^(.*)$ /subdir/$1 [L]

Here the ^(.*)$ pattern matches *any* requested URL-path, and "copies" its value into $1 for use by the preceding RewriteCond and in the rewritten filepath. Note that RewriteConds are processed only *after* the RewriteRule pattern matches (See Apache mod_rewrite documentation for details).
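
To see why the loop-prevention condition is needed, here is a rough trace of how per-directory (.htaccess) processing restarts after each internal rewrite (a sketch only; the filename is just an example):

```apache
# First pass: request for /index.html
#   Host matches; $1 = "index.html" does not start with "subdir/"
#   -> rewritten to /subdir/index.html; .htaccess processing restarts
# Second pass: $1 = "subdir/index.html"
#   Without the guard condition: rewritten again to
#   /subdir/subdir/index.html, and so on -- the 'infinite' loop
#   With the guard condition: the RewriteCond fails, the rule is
#   skipped, and processing ends normally
```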

Jim

danielkun

3:53 pm on Aug 15, 2009 (gmt 0)

10+ Year Member



Jim,

Thank you!
I had written the same code, except that I completely forgot about the rewriting loop that occurred.

I'm just curious, where did you learn this stuff and what book would you recommend to read?

Daniel

jdMorgan

5:21 pm on Aug 15, 2009 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



I learned it by doing.

My usual reply when someone calls me an 'expert' is, "Yes, I've made far more mistakes than you have..." :)

I recommend taking the code above (and other code that you may find here) and taking it apart (character-by-character when necessary) while referring to the Apache documentation and a regular-expressions guide, and doing that until you completely understand the code.

Then back off to a "higher level" and try to understand what that code will do when executed in your server environment. We often get requests here for functions that *can* be implemented using mod_rewrite, but that are really bad ideas once you consider the "bigger picture." RewriteRules affect search engines as well as browsers, so a rule intended to do something simple for a browser can have very bad effects on search engines. And if a rule does something the search engines consider bad, then the rankings of the URLs it affects may be destroyed.
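
As a concrete illustration: once the rewrite above is in place, the same pages can also be reached directly as <primary domain>/subdir/..., which search engines may index as duplicate content. One common guard (a sketch only, reusing the example hostname from above) is an external redirect back to the canonical domain:

```apache
# If /subdir/... is requested on any host other than the secondary
# domain, 301-redirect to the canonical secondary-domain URL so that
# search engines see only one version of each page
RewriteCond %{HTTP_HOST} !^www\.example\.com [NC]
RewriteRule ^subdir/(.*)$ http://www.example.com/$1 [R=301,L]
```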

So, the first four questions to ask are:

  1. What function is this code intended to do?
  2. What URLs does it affect?
  3. What effect will it have on a browser?
  4. What effect will it have on search engines and on the ranking of these URLs?

Mod_rewrite is much harder to read than it is to write, because it is a small but very powerful set of directives.

Similarly, regular expressions are a relatively small set of "functions" that allow matching text strings with extreme precision or with wide latitude -- as needed. When combined, the result is a very powerful tool, but one whose code is -- at first -- very hard to read. Because of this, most people have a lot of trouble initially. But after a while, a threshold of experience and familiarity is crossed, and then my statement above about the code being easier to write than to read becomes quite obvious: it is simply easier to 'construct' complex rules step-by-step when you know the goal than to read someone else's big pile of code when you don't fully understand their goal(s).

For this reason, I highly recommend documenting each line of code using the most precise and accurate description possible. This especially helps if you have to go back to code you wrote five years ago.
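
Applied to the rule above, that kind of line-by-line documentation might look like this:

```apache
# Serve www.example.com from /subdir while leaving the primary
# domain's own files untouched.
#
# Only act on requests whose Host header is www.example.com
# (case-insensitive).
RewriteCond %{HTTP_HOST} ^www\.example\.com [NC]
# $1 is the URL-path captured by the RewriteRule pattern below;
# skip paths already inside /subdir to prevent a rewrite loop.
RewriteCond $1 !^subdir/
# Capture the whole URL-path and prepend /subdir; [L] stops
# processing further rules on this pass.
RewriteRule ^(.*)$ /subdir/$1 [L]
```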

There's a book called "Mastering mod_rewrite" (or some such name) that I've often seen recommended by others, but the only documentation I've ever spent much time with is the Apache documentation and tutorial. I don't mean to dissuade you from buying and reading a book, though -- I'm just a "hands-on/learn-by-doing" kind of person, and since I'd been "programming" and designing computers (hardware/logic) for more than 35 years, mod_rewrite and regular expressions were "just another language" similar to many others I'd already used. If you can ride one bicycle, you can pretty much ride any bicycle.

Jim