1) Hard Drive issues
Many thousands of files scattered across many hundreds of locations, all read to serve the same request, will make the hdd work harder (the platters spin at a constant rate, but the read/write heads shuttle back and forth more frequently as they seek from track to track). This is a tiny performance hit, and it comes from the physics of the drive's seek activity rather than from anything the filesystem is doing. A rough way to feel that cost is sketched below.
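If you want to see the seek cost for yourself, here's a minimal Python sketch (my own illustration, not anybody's benchmark): it writes the same amount of data once as a single big file and once as many small files, then times reading each back. All sizes and names are made-up placeholders, and the OS page cache will hide most of the difference unless the data is larger than RAM or you drop the cache first.

```python
import os
import tempfile
import time

TOTAL = 64 * 1024 * 1024   # 64 MiB of test data (placeholder size)
CHUNK = 64 * 1024          # 64 KiB per small file -> 1024 files

def timed_read(paths):
    """Read every file in `paths` front to back; return elapsed seconds."""
    start = time.perf_counter()
    for p in paths:
        with open(p, "rb") as f:
            while f.read(1 << 20):
                pass
    return time.perf_counter() - start

with tempfile.TemporaryDirectory() as d:
    # one contiguous-ish big file
    big = os.path.join(d, "big.bin")
    with open(big, "wb") as f:
        f.write(os.urandom(TOTAL))

    # the same bytes spread over many small files
    small = []
    for i in range(TOTAL // CHUNK):
        p = os.path.join(d, f"part{i:05d}.bin")
        with open(p, "wb") as f:
            f.write(os.urandom(CHUNK))
        small.append(p)

    print(f"one big file:      {timed_read([big]):.3f}s")
    print(f"{len(small)} small files: {timed_read(small):.3f}s")
```

On a spinning disk the small-file pass pays for every extra seek and open; on an SSD the gap mostly disappears, which is the point: it's drive physics, not folders.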
2) Filesystem issues
None, really, unless you're running Windows, whose FAT/NTFS volumes fragment files over time and need to be defragmented on occasion to minimize head travel. Cross-partition or cross-drive requests may take a hit, as mentioned in the hdd note above.
"Files" are not really "in" one place. They are clusters of data that share some common identifiers in the data packet's headers that indicate (a) which other packets are associated with each, (b) in what order they are to be re-assembled for use and (c) which "folder" (a naming convention, really) "holds" the "file", among other things.
You could have a system with one giant directory that holds every file you fill your hard drive with and see essentially no performance hit ... millions of files, if your drive were large enough and the files small enough. Modern filesystems (NTFS, ext3/ext4) index directory entries in B-tree-style structures, so name lookups stay fast even in huge directories; only older filesystems that scanned directory entries linearly slowed down as directories grew. Of course, there are many security and operational issues that make that scenario less than optimal, but which "folder" the "files" are in is not one of them. The sketch below gives a rough way to test it.
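Here's a minimal sketch to try that claim yourself, with deliberately modest counts (raise them if you have the patience and the disk space). Note the kernel's dentry cache will serve the repeated lookups, so treat this as an illustration rather than a rigorous benchmark:

```python
import os
import tempfile
import time

with tempfile.TemporaryDirectory() as d:
    for n in (1_000, 10_000, 100_000):
        # grow the single directory to n entries
        for i in range(len(os.listdir(d)), n):
            open(os.path.join(d, f"f{i:07d}"), "w").close()

        # time name -> inode lookups on one entry
        target = os.path.join(d, f"f{n - 1:07d}")
        start = time.perf_counter()
        for _ in range(10_000):
            os.stat(target)
        per = (time.perf_counter() - start) / 10_000
        print(f"{n:>7} entries: {per * 1e6:.1f} us per lookup")
```

On an indexed filesystem the per-lookup time should stay roughly flat as the directory grows, which is exactly why folder placement isn't the bottleneck.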
Go nuts! :)