Brett_Tabke - 12:07 pm on Jul 9, 2012 (gmt 0)
...answered before reading the replies.
In general, there is huge overhead associated with each call to disk; for example, ten 100 KB files will load significantly slower than one 1 MB file. However, the number of hits you make to those files is going to be the determining factor. (WebmasterWorld is all flat files - one per thread - about 500k flat files on here now - about 8000% faster than a *sql db.)
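A quick way to see that per-call overhead for yourself is to time many small reads against one big read. This is a minimal sketch using Python's timeit; the filenames, sizes, and repeat count are illustrative, not anything from the post:

```python
import os
import tempfile
import timeit

def write_files(dirpath, count, size):
    """Create `count` files of `size` bytes each and return their paths."""
    paths = []
    for i in range(count):
        path = os.path.join(dirpath, "chunk_%d.dat" % i)
        with open(path, "wb") as f:
            f.write(b"x" * size)
        paths.append(path)
    return paths

def read_all(paths):
    """One open/read/close cycle per file; return total bytes read."""
    total = 0
    for path in paths:
        with open(path, "rb") as f:
            total += len(f.read())
    return total

with tempfile.TemporaryDirectory() as d:
    small_dir = os.path.join(d, "small")
    big_dir = os.path.join(d, "big")
    os.makedirs(small_dir)
    os.makedirs(big_dir)

    small = write_files(small_dir, 10, 100 * 1024)  # ten 100 KB files
    big = write_files(big_dir, 1, 1024 * 1024)      # one 1 MB file

    # Time 50 full passes over each set of files.
    t_small = timeit.timeit(lambda: read_all(small), number=50)
    t_big = timeit.timeit(lambda: read_all(big), number=50)

    small_bytes = read_all(small)
    big_bytes = read_all(big)

    print("10 x 100 KB: %.4fs   1 x 1 MB: %.4fs" % (t_small, t_big))
```

On a warm cache the data itself comes out of RAM either way, so most of any gap you see is the extra open()/close() system calls - exactly the per-file overhead being described.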
If you are not maxing out the RAM on your machine, then one moderately sized file load is going to be much faster than dozens of smaller file loads. I think it is going to depend on your operating system and its disk caching methods. If you open those smaller files very often, then they are going to be cached at a higher rate than one larger file, and the per-open overhead hit will be small. A meg file is nothing.
The question I have is: how many hits per hour are you throwing at this db? Is this a case of 100k hits, or 100 hits? If it is the former, then you should probably consider using some type of RAM disk. RAM is so cheap for servers these days that rethinking RAM disks is starting to be popular again. I'm toying with putting ALL of WebmasterWorld on a ramdisk (it is less than 4 gig).
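On Linux, a RAM disk like that can be set up with tmpfs. A minimal sketch - the mount point, size, and data path are hypothetical, and note that tmpfs contents vanish on reboot, so the flat files still need a persistent copy on disk:

```shell
# Create a mount point and mount a 4 GB RAM-backed filesystem there.
sudo mkdir -p /mnt/ramdisk
sudo mount -t tmpfs -o size=4g tmpfs /mnt/ramdisk

# Copy the flat-file data in and point the web server at the copy.
sudo cp -a /var/www/threads /mnt/ramdisk/

# To recreate the mount at boot, an /etc/fstab entry would look like:
# tmpfs  /mnt/ramdisk  tmpfs  size=4g  0  0
# (The data itself must still be re-populated after each reboot.)
```

Writes made only to the tmpfs copy are lost on power failure, so a setup like this usually syncs changes back to real disk on some schedule.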