Using includes for page content is an excellent approach; good luck with the modularisation!
Simply put, a page is a page: whether it's created by hand in a text editor or generated server-side with PHP, to a spider it's still just some HTML, just some text.
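For instance, a minimal include-based page might look like the sketch below (the file names are hypothetical examples, not a prescription — the spider only ever sees the assembled HTML):

```php
<?php
// index.php — a minimal sketch of include-based page assembly.
// header.php / content.php / footer.php are made-up names for illustration.
include 'header.php';   // shared <head>, banner, navigation
include 'content.php';  // the page-specific body
include 'footer.php';   // shared footer and closing tags
?>
```

The output sent to the browser (or bot) is one ordinary HTML document; the modular structure exists only on the server.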
A links page (or pages)/site-map containing all your links, split into fewer than 100 per page per Google's recommendations, is useful; it certainly gets my site a deep and thorough indexing from the Googlebot.
coho75's right about the URLs; if your pages generate/require variables in the URLs, don't worry: all the important search engines can handle generated.php?truth=seo-myth&whatever=yaddayadda type pages just fine. Remember, it's the search engines that must strive to keep up with web server technologies, not the other way around; we don't have to regress for them! Anyway, HTTP GET is pretty standard stuff.
True, session data and multiple-parameter URLs will increase spider access problems roughly in line with the URLs' complexity, but simple URLs with just a few parameters pose no problem. The same goes for sessions: only if a client refuses session cookies will the session ID be forced into the URL (as far as I know), and the bots all seem to handle sessions just fine without that happening. For sure, my "wisdom of jack handy" page forces a session on every single visit (you only get three helpings a day, you see!), yet still manages to find itself right at the top of around 39,000 Google results.
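If you want to be certain the session ID never leaks into URLs at all, PHP can be told to rely on cookies only. A sketch of the relevant php.ini directives (check your PHP version's documentation for defaults):

```ini
; php.ini — keep session IDs out of URLs
session.use_only_cookies = 1   ; never accept or emit session IDs via the URL
session.use_trans_sid = 0      ; disable transparent SID rewriting into links
```

With these set, clients that refuse cookies simply won't carry a session, and the bots will see clean URLs with no PHPSESSID parameter.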
If your URLs are really long and complex, no matter; it's a piece of cake to get mod_rewrite to translate static-to-dynamic links. Some parts of my own site are accessible by two or three different URL schemes: as I reorganised things, I created rewrite rules for the old URLs. No one minds, certainly not the bots, and everyone's links work just fine. Similar methods can be used to redirect old URL schemes to new ones, too.
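As a sketch, the kind of .htaccess rules I mean (the paths, script name, and parameter names here are made-up examples; adapt them to your own scheme):

```apache
# .htaccess — illustrative mod_rewrite examples
RewriteEngine On

# Serve a static-looking URL from a dynamic script:
# /articles/seo-myths.html  ->  generated.php?truth=seo-myths
RewriteRule ^articles/([a-z0-9-]+)\.html$ generated.php?truth=$1 [L,QSA]

# Permanently redirect an old URL scheme to the new one:
RewriteRule ^old-section/(.*)$ /new-section/$1 [R=301,L]
```

The QSA flag appends any extra query-string parameters, and R=301 tells clients (and bots) the move is permanent, so the old links keep working while the new scheme gets the credit.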
For more info, google for ".htaccess" and "mod_rewrite".