JD_Toims - 3:51 am on Aug 24, 2013 (gmt 0) [edited by: JD_Toims at 3:55 am (utc) on Aug 24, 2013]
To the best of my knowledge, using -f in the .htaccess can invoke hundreds of extra lines of code, because it forces the server to "walk the file path + scan the disk" to see whether a file exists. Even though it may seem like an extra step, rewriting everything to a PHP file that does a "single check" for the file is likely faster -- there's a sketch after the quote below.
File-exists and directory-exists checks are inefficient. In most cases, they will invoke several hundred lines of (machine) code at the OS file-handler level, and in some cases --especially on heavily-shared virtual servers-- the filesystem and ACLs may be only partially-cached due to excessive swapping. In that case, the OS is going to have to actually go read the physical disk, and compared to *any* code execution, that is going to be *very* slow.
jdMorgan - 3rd post - This thread: [webmasterworld.com...]
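To illustrate, here's a minimal sketch of the pattern -- the /cache/ path and the handler.php name are made up for the example, not from the actual site:

# .htaccess -- no -f/-d checks; hand every request to one PHP script
RewriteEngine On
RewriteCond %{REQUEST_URI} !^/handler\.php
RewriteRule ^(.*)$ /handler.php?req=$1 [L,QSA]

<?php
// handler.php -- one file_exists() call instead of a per-request disk walk
$req   = isset($_GET['req']) ? $_GET['req'] : '';
$cache = __DIR__ . '/cache/' . md5($req) . '.html';

if (file_exists($cache)) {
    readfile($cache);   // serve the stored copy directly
} else {
    // generate the page, write it to $cache, then output it
}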
The site where I did what I outlined above was having speed issues after a hosting change; it slowed to a crawl and we couldn't figure out why. I switched everything over to a system similar to the one in my preceding posts, and it went back to loading so fast that, especially when the page was cached, you sometimes couldn't tell the page had changed unless you were paying attention.
Note: I also built in an "auto update" with a filemtime() check: if it had been more than a week since the file was updated, it was regenerated. Yes, this makes generation/serving slower for one user, but only that one user sees any slowdown, so I'm fine with it. My dynamic XML sitemap files are coded in a very similar way to keep them up-to-date -- if it's been over 7 days since they were modified, they're regenerated automatically.
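For reference, a hedged sketch of that auto-update check, where regenerate_page() is a placeholder for whatever build routine applies:

<?php
// Rebuild the cached file if it's missing or over 7 days old
$maxAge = 7 * 24 * 60 * 60;   // one week, in seconds
if (!file_exists($cache) || (time() - filemtime($cache)) > $maxAge) {
    $html = regenerate_page();                  // placeholder for the real generator
    file_put_contents($cache, $html, LOCK_EX);  // refresh the cache under a write lock
}
readfile($cache);   // everyone else gets the cached copy with no slowdown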