
Content Management Forum

    
Poor man's CMS architecture - am I crazy?
jeddy
msg:4370177
5:13 pm on Oct 3, 2011 (gmt 0)

Let's say I wanted to power a number of very simple websites on different hosts & IPs off of one central database. Am I crazy to think that the following would work?

1) MySQL database on main server holds HTML markup for various pages of the "network" sites

2) Each time a page on one of the "network" sites is requested, a PHP file on the "network" site calls a PHP file on the main server (call it "displayContent.php"), passing it 1) the URL of the requesting page, and 2) a unique token stored in config.php

3) This token is not a database password - but its hash is stored in the main server's database, and displayContent.php uses the passed value, as well as the requesting page, to validate the request

4) The main server sends back the appropriate HTML content, if it's found, which is then displayed on the "network" site's page.
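
Roughly what I have in mind, as a sketch. Everything here (file names, the table layout, the sha1() hashing, the token format) is a placeholder, not a finished implementation:

<?php
// --- On a "network" site: fetch this page's content from the main server ---
// (sketch only; the hub URL and token would come from this site's config.php)
$hub   = 'http://hub.example.com/displayContent.php';
$token = 'abcdef0123456789abcdef0123456789abcdef01';   // placeholder token

$ch = curl_init($hub);
curl_setopt_array($ch, array(
    CURLOPT_POST           => true,
    CURLOPT_POSTFIELDS     => array(
        'page'  => 'http://' . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI'],
        'token' => $token,
    ),
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_TIMEOUT        => 5,
));
echo curl_exec($ch);
curl_close($ch);

<?php
// --- On the main server: displayContent.php (sketch; names are placeholders) ---
$mysqli = new mysqli('localhost', 'cms_user', 'cms_pass', 'cms_db');

$page  = isset($_POST['page'])  ? $_POST['page']  : '';
$token = isset($_POST['token']) ? $_POST['token'] : '';

// Validate format first; bail out silently on anything odd (no error messages).
if (!preg_match('#^https?://[a-z0-9.\-]+(/[a-z0-9._/\-]*)?$#i', $page) ||
    !preg_match('/^[a-f0-9]{40}$/i', $token)) {
    exit;
}

// Match the token's hash against the sites table, then look up the page.
$stmt = $mysqli->prepare(
    'SELECT c.html
       FROM sites s
       JOIN content c ON c.site_id = s.id
      WHERE s.token_hash = ? AND c.page_url = ?
      LIMIT 1'
);
$hash = sha1($token);               // assumes the stored hash is sha1; placeholder
$stmt->bind_param('ss', $hash, $page);
$stmt->execute();
$stmt->bind_result($html);

if ($stmt->fetch()) {
    echo $html;                     // straight HTML back to the network site
}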

A couple of problems/objections I foresee:

1) This doesn't provide any of the standard CMS functions like approval, access control, etc. I'm OK with this. I just want to centralize the control of content for a small group of sites on different servers.

2) This setup is vulnerable to forged requests, since the connection isn't secure and referrers can be spoofed. I *think* I'm OK with this. The displayContent.php script won't display error messages, and it will validate its input (the format and contents of both the requesting page and the token), so it will only ever serve straight HTML for valid page requests. SQL injection shouldn't be an issue, and the most an attacker could get sent back is the HTML, which they could just as easily get by spidering the network site itself, without the extra step.

3) It slows things down. Yep. It does. Caching could help with that.

Thoughts?

 

httpwebwitch
msg:4370233
7:08 pm on Oct 3, 2011 (gmt 0)

Version control is also a no-brainer: when content changes, do an INSERT instead of an UPDATE and drop in a timestamp. Then, when you're selecting content, grab only the most recent row.
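
Something like this, for instance (table and column names are made up):

<?php
// Sketch of "INSERT instead of UPDATE" versioning; names are placeholders.
$mysqli = new mysqli('localhost', 'cms_user', 'cms_pass', 'cms_db');

$siteId  = 1;
$pageUrl = '/about.html';
$html    = '<h1>About</h1>';

// Saving a change: add a new row rather than overwriting the old one.
$stmt = $mysqli->prepare(
    'INSERT INTO content (site_id, page_url, html, created_at)
     VALUES (?, ?, ?, NOW())'
);
$stmt->bind_param('iss', $siteId, $pageUrl, $html);
$stmt->execute();

// Reading: grab only the most recent revision for that page.
$stmt = $mysqli->prepare(
    'SELECT html FROM content
      WHERE site_id = ? AND page_url = ?
      ORDER BY created_at DESC
      LIMIT 1'
);
$stmt->bind_param('is', $siteId, $pageUrl);
$stmt->execute();
$stmt->bind_result($latestHtml);
$stmt->fetch();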

Approval & access control & so forth will merely be more columns in your giant content table. They can be added later, if you need them.

The gotcha will be, you guessed it, speed.

Solve all your SQL injection problems right away, so they don't bite you later. See mysql_real_escape_string().
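
For instance, with the classic mysql_* functions that mysql_real_escape_string() belongs to (connection details and table name are made up; prepared statements via mysqli or PDO do the same job):

<?php
// Escape user-supplied values before they go anywhere near the query (sketch).
$link = mysql_connect('localhost', 'cms_user', 'cms_pass');
mysql_select_db('cms_db', $link);

$pageUrl = mysql_real_escape_string($_POST['page'], $link);
$result  = mysql_query(
    "SELECT html FROM content WHERE page_url = '$pageUrl' LIMIT 1", $link
);
$row = mysql_fetch_assoc($result);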

Make sure your SQL tables have appropriate indexes on the columns you query by.

I'd plan some way to leverage caching from the get-go. At least get the concept down on paper before you start, so caching CAN be added soon after the SQL guts are done.

One advantage you might not have expected is that this kind of setup will let you request content via AJAX, and reuse it in multi-purpose apps for mobile devices, XML feeds, etc.

httpwebwitch
msg:4370234
7:10 pm on Oct 3, 2011 (gmt 0)

Oh, and this kind of setup also puts all your eggs in one basket: a single point of failure. If that database craps out, the entire network is kaput.

Make it fault-tolerant, or build in some redundancy.

For instance, instead of only caching the content on the hub where the content is stored, cache it on each network spoke, with another layer of caching in the hub too.
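
The spoke-side layer can be as simple as a file with a max age, falling back to the stale copy if the hub is unreachable (paths, TTL, and the hub URL here are made up; the token from the original plan is omitted for brevity):

<?php
// Sketch of per-spoke file caching with the hub as the origin.
$cacheFile = '/tmp/cms-cache/' . md5($_SERVER['REQUEST_URI']) . '.html';
$maxAge    = 600;   // serve the cached copy if it is younger than 10 minutes

if (is_file($cacheFile) && (time() - filemtime($cacheFile)) < $maxAge) {
    readfile($cacheFile);           // fresh enough: no round trip to the hub
    exit;
}

$html = @file_get_contents('http://hub.example.com/displayContent.php?page=' .
        urlencode($_SERVER['REQUEST_URI']));

if ($html !== false && $html !== '') {
    @mkdir(dirname($cacheFile), 0755, true);
    file_put_contents($cacheFile, $html);   // refresh the spoke's copy
    echo $html;
} elseif (is_file($cacheFile)) {
    readfile($cacheFile);           // hub is down: serve the stale copy instead
}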

jeddy
msg:4370238
7:16 pm on Oct 3, 2011 (gmt 0)

Thanks!

g1smd
msg:4370257
7:40 pm on Oct 3, 2011 (gmt 0)

If you're fetching across the web, this will likely be very slow.

The round trip time will be more than doubled.

lexipixel
msg:4371757
4:22 am on Oct 7, 2011 (gmt 0)

If the data isn't dynamic or frequently updated, I'd write a script that generates static pages (from the central database and templates) and pushes them onto their respective sites.

If the data is updated regularly (hourly, daily, etc.), I'd script it to push new static pages to the "fed sites" on the same schedule.

If most of the content is static and only some of it is dynamic and needs to be fetched in real time, I'd put the majority of the HTML on the fed sites and use Ajax to pull the dynamic parts of the page.
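
Roughly along these lines (the template scheme, table names, and FTP details are all placeholders):

<?php
// Sketch: render static pages from the central database and push them to a fed site.
$mysqli   = new mysqli('localhost', 'cms_user', 'cms_pass', 'cms_db');
$template = file_get_contents('templates/page.html');   // contains {{title}} and {{body}}

$ftp = ftp_connect('fedsite.example.com');
ftp_login($ftp, 'deploy_user', 'deploy_pass');
ftp_pasv($ftp, true);

$result = $mysqli->query("SELECT filename, title, html FROM content WHERE site_id = 1");
while ($row = $result->fetch_assoc()) {
    $page = str_replace(array('{{title}}', '{{body}}'),
                        array($row['title'], $row['html']),
                        $template);

    $local = tempnam(sys_get_temp_dir(), 'cms');
    file_put_contents($local, $page);
    ftp_put($ftp, $row['filename'], $local, FTP_ASCII);  // e.g. "about.html"
    unlink($local);
}
ftp_close($ftp);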

tangor
msg:4371829
9:41 am on Oct 7, 2011 (gmt 0)

If the idea is the same content on various servers... I've gone with a key system which replicates the database (as needed) across all systems. This, of course, depends on the size of the database. Benefit? If the key server goes down, the rest keep working, and they all run at normal server speeds. Downside? Database size and the push for updates. These days, however, with gigabit pipes, that's not much of a problem.

The only other downside is the possibility of "duplicate content" issues in the search engines...

jeddy
msg:4371858
11:34 am on Oct 7, 2011 (gmt 0)

Thanks guys. I think much of the content will be static, so a cron job that pushes/pulls content to each of the network sites may end up being the right way to go.

d3vrandom
msg:4373268
3:17 pm on Oct 11, 2011 (gmt 0)

I think the bigger question is: why? Why go through all this trouble when you can just deploy WordPress or Drupal and use their multisite features to power as many websites as you like off one code base?

jeddy
msg:4373273
3:27 pm on Oct 11, 2011 (gmt 0)

As I understand it, those require sharing a host, which I do not want to do in this case.

d3vrandom
msg:4373363
7:49 pm on Oct 11, 2011 (gmt 0)

Well, you don't have to share the host. You could have all the sites consult a central database server. It would be slow, but no different from your currently proposed system. Another option is a memcached server to speed things up.
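
For example, a rough sketch with PHP's Memcached extension (server name, key scheme, and TTL are made up):

<?php
// Sketch: check memcached before hitting the central database.
$mc = new Memcached();
$mc->addServer('memcache.example.com', 11211);

$key  = 'content:' . md5($_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI']);
$html = $mc->get($key);

if ($html === false) {                          // cache miss: go to the database
    $mysqli = new mysqli('dbhub.example.com', 'cms_user', 'cms_pass', 'cms_db');
    $stmt = $mysqli->prepare('SELECT html FROM content WHERE page_url = ? LIMIT 1');
    $url  = $_SERVER['REQUEST_URI'];
    $stmt->bind_param('s', $url);
    $stmt->execute();
    $stmt->bind_result($html);
    $stmt->fetch();
    $mc->set($key, $html, 300);                 // keep it for five minutes
}
echo $html;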

But I am curious as to why you want to do things this way. Why not just use different IPs for the different sites and host them on the same server? If SEO is a factor, you can get IPs in different subnets from certain providers that cater to this sort of demand.

brotherhood of LAN
msg:4373415
10:06 pm on Oct 11, 2011 (gmt 0)

It's a nice setup for hosting lots of sites across cheap servers. Simple flatfile setups are invariably going to run well on those.

lexipixel wrote: "If the data isn't dynamic or frequently updated, I'd write a script that generates static pages (from the central database and templates) and pushes them onto their respective sites."


And periodically check the main server for updates, say, every 24 hours, or otherwise have an update content process.
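
For example, a small script run from cron on each spoke, something like this (the hub endpoint, its JSON response, and the file layout are made up):

<?php
// Sketch of a pull-style update check, meant to run from cron (e.g. once a day).
$stampFile = __DIR__ . '/.last-sync';
$since     = is_file($stampFile) ? (int) file_get_contents($stampFile) : 0;

// Ask the hub which pages changed since the last run.
$json  = file_get_contents('http://hub.example.com/changedSince.php?ts=' . $since);
$pages = json_decode($json, true);   // e.g. [{"path":"about.html","html":"..."}]

foreach ((array) $pages as $page) {
    // basename() keeps a hostile response from writing outside this directory
    file_put_contents(__DIR__ . '/' . basename($page['path']), $page['html']);
}
file_put_contents($stampFile, time());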
