whoisgregg - 3:31 pm on Feb 7, 2012 (gmt 0)
maybe any storing would be more efficient than doing it all on the fly
Yes, this is almost always true.
Some of the entries, once modified, are updated to certain statuses that cannot be displayed to the end user anymore.
If some portion of your data is invalidated on a regular basis, just build your caching mechanism to take that into account. For example, if you typically display the 10 nearest locations, but you expect some of those to be invalid, then cache the 20 nearest locations.
Primary key lookups on those 20 records should still be MUCH faster than calculating distances across the whole table just to find the nearest 10. If a few of the cached rows are no longer valid, you will still have enough left over to display what you need.
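A rough sketch of that read path, assuming an illustrative `locations` table with a `status` column (the schema, column names, and the "active" status value are all hypothetical -- adapt to your own tables), using SQLite just so the example is self-contained:

```python
import sqlite3

# Hypothetical schema: locations with a status flag; anything other than
# 'active' must not be shown to end users.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE locations (id INTEGER PRIMARY KEY, name TEXT, status TEXT)")
conn.executemany(
    "INSERT INTO locations VALUES (?, ?, ?)",
    [(i, f"loc{i}", "active" if i % 4 else "hidden") for i in range(1, 31)],
)

# Pretend this list came from the expensive distance calculation, cached
# with a buffer: 20 IDs even though only 10 will be displayed.
cached_nearest_ids = list(range(1, 21))

def nearest_for_display(conn, cached_ids, want=10):
    """Cheap primary-key lookups; drop rows whose status has since changed."""
    placeholders = ",".join("?" * len(cached_ids))
    rows = conn.execute(
        f"SELECT id, name, status FROM locations WHERE id IN ({placeholders})",
        cached_ids,
    ).fetchall()
    by_id = {row[0]: row for row in rows}
    # Preserve the cached (distance) order, skip invalidated entries.
    valid = [by_id[i] for i in cached_ids if i in by_id and by_id[i][2] == "active"]
    return valid[:want]

print(len(nearest_for_display(conn, cached_nearest_ids)))  # 10
```

The buffer size (20 for 10) is a judgment call: size it to how often you expect cached rows to be invalidated between cache refreshes.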
You can do a lot of primary key lookups with almost no hit on the CPU -- it's the SELECT statements that perform math on every row in the table that kill the server.
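To make the contrast concrete, here is a sketch of the two query shapes side by side (again with an invented schema, and a squared planar distance standing in for a real great-circle formula, since that keeps the example runnable in plain SQLite):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE locations (id INTEGER PRIMARY KEY, lat REAL, lon REAL)")
conn.executemany(
    "INSERT INTO locations VALUES (?, ?, ?)",
    [(i, 40.0 + i * 0.01, -74.0 + i * 0.01) for i in range(1, 1001)],
)

# Expensive shape: math over every row, then a sort. This is the kind of
# SELECT that hammers the CPU, so run it rarely and cache the result.
expensive = """
    SELECT id, (lat - ?) * (lat - ?) + (lon - ?) * (lon - ?) AS d2
    FROM locations ORDER BY d2 LIMIT 20
"""
user_lat, user_lon = 40.05, -73.95
nearest = conn.execute(expensive, (user_lat, user_lat, user_lon, user_lon)).fetchall()
cached_ids = [row[0] for row in nearest]  # cache these 20 IDs per user/area

# Cheap shape: later page views hit the primary key only -- no per-row math.
cheap = "SELECT id, lat, lon FROM locations WHERE id IN ({})".format(
    ",".join("?" * len(cached_ids))
)
rows = conn.execute(cheap, cached_ids).fetchall()
print(len(rows))  # 20
```

The expensive query still runs, but only when the cache is (re)built instead of on every request.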