Forum Moderators: open
Something I've been pondering lately: doesn't it seem that a big factor in the hack-ability of web software lies in the fact that it can be downloaded and examined? I have downloaded free and open source scripts and within five minutes seen holes almost anyone would be able to poke in them. Without that knowledge, an attacker would have to play hit and miss, rather than knowing just what to do.
My question references only web applications, such as the ever-hammered mailer scripts, carts, and the like. Microsoft and Linux are bad examples because both of them involve many hands and in the case of MS none of those hands are fully aware of what the others are doing.
So by protecting your source, you add one more thing that makes your programs less appealing to those who would abuse them. Yea or nay?
One of the basic ideas behind open source is the concept that "given enough eyeballs, all bugs are shallow". As a developer, you can examine the source code, and report and even patch any holes, the patch then being available to other users. So you get much better quality control than you do with closed or limited-access source code.
The only hack attempt I've seen that was generic enough to succeed in a blanket-type of way was an XSS-something that tried to submit references to external javascripts into some of my forms. Yes, this was automated too [webmasterworld.com], but although doubtlessly effective on many sites, it's also very easy to protect against.
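To illustrate why that kind of blanket attack is easy to protect against, here's a minimal sketch of checking form input for references to external scripts before accepting it. The regex and function name are my own illustration, not anyone's production code; a real site should run input through a proper HTML sanitizer rather than a single pattern.

```python
import re

# Flag form input that tries to inject a reference to an external script.
# This is an illustrative check, not a complete XSS defense.
SCRIPT_REF = re.compile(r'<\s*script[^>]*\bsrc\s*=', re.IGNORECASE)

def is_suspicious(field_value: str) -> bool:
    """Return True if the submitted value looks like an external-script injection."""
    return bool(SCRIPT_REF.search(field_value))

print(is_suspicious('<script src="http://evil.example/x.js"></script>'))  # True
print(is_suspicious('Just a normal comment about my site'))               # False
```

In practice you'd reject or log the submission rather than just flag it, but the point stands: the attack is generic, so the defense can be too.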
Besides generic attacks like that one, security through obscurity would seem to protect a site from everything except the comparatively unlikely event of deliberate attacks targeted at that specific site. Or at least that's how it seems to me...
I know a few people who claim 99.9% of all SSH hacking attempts against their servers fail because they're running SSH on a non-standard port. The bots look on port 22, don't find it, and off they go. Sure, there's always that 0.1% that check other ports, but still, "security through obscurity" looks like a pretty effective part of these folks' defense! ;)
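For anyone curious, that non-standard-port trick is a one-line change in the OpenSSH daemon's config; the port number below is just an example, not a recommendation:

```
# /etc/ssh/sshd_config
Port 2222    # instead of the default 22; restart sshd after editing
```

Clients then connect with `ssh -p 2222 user@host`. It stops port-22 bots cold, though a full port scan will still find the service.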
targeting specific, known vulnerabilities, usually in very common software
This is not however an effect of the source code for the application being available, but rather the problem of footprints, or identifying fragments of text which are searchable via your search engine of choice.
An example: the 26,100,000 pages currently proudly powered by WordPress [google.com]. If a vulnerability turns up in WP tomorrow, do you want your site to be on that list? Or search for "powered by phpbb 2.0.10" (a vulnerable version of phpBB) and you get a list of thousands of hacked, abandoned sites (no, don't click on them; the search is unlinked for a reason).
You're correct in saying that footprints are a bad idea: nothing on your site should make it easy to identify the application you are running. However, on a closer inspection, a phpBB forum is obvious even if you switch all the file names and remove the footprints. The same goes for WordPress and many other similar scripts. So the obscurity you gain by removing footprints means that, at a cursory glance, your site is less of a target. But that obscurity is useless if there is intent to attack your site, as it is not a security measure in itself.
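Removing the most obvious footprints can be as simple as filtering the generated HTML. A minimal sketch, assuming a typical WordPress-style generator meta tag; the function name and sample page are mine, and this is an illustration of the idea, not a complete footprint-removal solution:

```python
import re

# Strip a tell-tale "generator" meta tag from outgoing HTML.
# Determined attackers can still fingerprint the app by its URLs and markup.
GENERATOR = re.compile(r'<meta name="generator"[^>]*>\s*', re.IGNORECASE)

def strip_footprint(html: str) -> str:
    """Remove the generator meta tag, a common application footprint."""
    return GENERATOR.sub('', html)

page = '<head><meta name="generator" content="WordPress 2.0" />\n<title>Home</title></head>'
print(strip_footprint(page))
```

As the post says, this only lowers your profile in a casual search; the forum's URL structure and form names still give it away on close inspection.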
If phpBB were only available as a managed hosted product, not only would the source code never have yielded up exploits, but the maintainers could also have forcibly upgraded every forum the moment an exploit was found and patched.
Even Google uses this model with the Google Mini and Google Search Appliance: they don't allow you to install their software on your own hardware, only to purchase locked-down hardware which you attach to your network and configure via an interface. That keeps it, effectively, a Google-hosted and managed solution.
This is not however an effect of the source code for the application being available, but rather the problem of footprints, or identifying fragments of text which are searchable via your search engine of choice.
Granted, but most of the time these footprints belong to open-source software. I concur that this is not caused by the source code being available.
However, I have always suspected that, at the very least, open source software makes it easier for hackers to find exploits quickly. Having the code has got to take a lot of the guesswork out of hacking an application.
Would anyone care to speculate on how many more Windows exploits there might be if it was open-source? ;)
Without the source code, a hacker would either have to install the software in a sandbox of their own for testing, or work more or less blindly on other folks' installs until they found a hole. The latter would be fairly challenging, since on the web even just knowing the list of files in an application is very valuable. For instance, in WordPress the xmlrpc.php file is never visible to a visitor; a hacker would never know it existed without first gaining access some other way. And yet, who among us has not seen hundreds (or thousands) of automated requests for this file?
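The volume of those blind probes is easy to see in a server log. A minimal sketch, assuming a combined-format access log; the sample lines are invented for illustration:

```python
# Count automated probes for WordPress's xmlrpc.php in an access log.
# These lines are made-up examples of the combined log format.
lines = [
    '1.2.3.4 - - [01/Jan/2024:00:00:01 +0000] "POST /xmlrpc.php HTTP/1.1" 403 199',
    '5.6.7.8 - - [01/Jan/2024:00:00:02 +0000] "GET /index.php HTTP/1.1" 200 512',
    '1.2.3.4 - - [01/Jan/2024:00:00:03 +0000] "POST /xmlrpc.php HTTP/1.1" 403 199',
]

probes = [line for line in lines if "/xmlrpc.php" in line]
print(len(probes))  # 2
```

On a real server you'd read the log file instead of a hard-coded list, but even this much shows how the bots hammer a file no visitor ever sees.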
it is not a security measure in itself.
Just to clarify again: I ask this not as a measure in itself, or even one that you would rely on to keep you out of trouble; that would be just plain foolish. Security for a website is not any one specific feature in your configuration or programming; it is a series of measures you put into place and monitor. So I'm asking about it in the context of an added layer of security.
It's a similar idea to protecting a password. A single point of entry brings the whole thing down like a house of cards.
While I clearly understand the more-eyeballs idea, for every positive set of eyes looking for trouble and sharing what they find, you probably have just as many looking for trouble, not sharing it, and using it against the free programs. I ask because I have some applications that may be very useful, but I'm reluctant to expose them to the world, because as soon as I do, I open up a whole maintenance issue that never existed before.