Once your page is DL'ed onto someone's hard disk, they can pretty much do what they want with it.
Saying something is possible but too complicated to explain is not really helpful.
So far I'm with txbakers in that only what is downloaded can be rendered. What is downloaded can be saved simply by copying it from the browser's cache or by talking to the server directly.
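"Talking to the server directly" just means issuing the same request the browser would and saving whatever bytes come back. A minimal sketch (the host, path, and user-agent are placeholders, not from this thread):

```javascript
// Sketch: build the raw HTTP request a browser would send.
// Any values here (host, path, UA string) are illustrative placeholders.
function buildGetRequest(host, path) {
  return (
    `GET ${path} HTTP/1.1\r\n` +
    `Host: ${host}\r\n` +
    `User-Agent: Mozilla/5.0\r\n` + // spoofing a browser UA is trivial
    `Connection: close\r\n\r\n`
  );
}

// Send this string over a plain TCP socket to port 80 and write the
// response bytes to disk -- that is the entire "download"; no JS on the
// page has had a chance to run.
console.log(buildGetRequest('www.example.com', '/index.html'));
```

Nothing client-side can interfere with this path, which is why cache-copying and direct requests defeat in-page tricks.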
AFAIK, there are no articles that explain how to do this. So many people think it's impossible that it's made me think there might be a market for this type of information. I will, however, sticky you an example URL if you are interested, and maybe you can extract something useful from it.
One thing I've learned over the years is that just because somebody says something is impossible doesn't necessarily make it so. That piece of advice should be about as useful as the comment "There is no way to stop that".
If someone made their own HTTP program that merely saved all the info it received, would your mysterious technique stop that too?
Yes, it could be done. With a good spider trap the downloader wouldn't get that far though.
Unfortunately, I do believe it is impossible to completely prevent anyone from downloading and saving your content, if they are smart enough to work through it. As was said above, to browse the content, one has to provide a way for it to be downloaded in the first place. You can do some things such as offering encrypted content that is decrypted through some Java applet or similar, but even then, it isn't impossible to access and save the content; it's just another layer of difficulty.
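To illustrate why that extra layer only adds difficulty rather than preventing access: here base64 stands in for whatever encoding a page might use (it is not real encryption, and the sample markup is invented), and the decoder necessarily ships to the client along with the payload.

```javascript
// Sketch of "encrypted" content decoded client-side. Base64 stands in for
// whatever scheme is used; the key point is the decoder ships with the page.
const payload = 'PGgxPkhlbGxvPC9oMT4='; // base64 of '<h1>Hello</h1>'

// In a browser this would be atob(payload); in Node:
const html = Buffer.from(payload, 'base64').toString('utf8');

// The page would now do document.write(html) -- but a viewer can run the
// exact same line in a console and read the markup in the clear.
console.log(html); // <h1>Hello</h1>
```

Whatever replaces base64 here, the decoding routine and any key must be delivered to the browser, so a determined viewer can always replay them.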
Once any page is downloaded to someone's hard disk (as happens every time they view it), your copy, images, and code are still your copyright.
Having said that, publishing something publicly on the Web means you make it available for any personal use to those who download it.
I would appreciate that sticky also
The idea is to send client info to a server-side script, which returns info back to the client as JS variables. What info the client receives, if any, depends on how and where the script is requested from. Adding a URL to my database allows another site to use the page/script.
But carefully crafting a bot to retrieve a site is still possible and successful.
Indeed. If you're after a particular site, it's not even terribly hard, since the routines to get around the protections don't have to be sophisticated. I've done it before, when I was after the content of a site hosted by a large commercial entity with lots of roadblocks in place. (The desirable content had all been written by either myself or one of a small group of friends, without relinquishing our copyright.)
What good is having the ability to download the page if the rules for its proper display were changed the moment it was downloaded?
I'd appreciate a copy of that coding via SM too, if you don't mind.
I know several folks who'd be more interested in having websites done for them if they had better protection available to prevent easy downloads of their imagery.
Be kind of hard to post about 400k of server code :)
(Don't have any HTML/JS code, all server scripts.)
But will snip a little webpage JS stuff:
By sending the JS 'document.URL', the server script checks a database for an entry matching that URL. The server returns JS variables... and the lines:
if (window != top) top.location.href = location.href;
if (document.URL.indexOf('http://www.#####.###/~georgegg/') == -1)
which check whether the sent document.URL really is the calling JS webpage/site. If not, redirect to 'my' site.
You can send any 'client side' JS variable to the server this way, like the JS 'document.referrer'.
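The actual server scripts aren't posted here, so the following is only a hypothetical sketch, in JavaScript, of the check being described: the server receives the page's document.URL, looks it up (an in-memory list stands in for the database), and returns JS that either carries the real variables or redirects the visitor.

```javascript
// Hypothetical sketch of the server-side check described above; the real
// server code isn't shown in the thread. allowedPrefixes stands in for the
// database lookup, and the function returns the JS text the server would emit.
const allowedPrefixes = ['http://www.#####.###/~georgegg/']; // placeholder, as in the post

function buildResponseJs(sentDocumentUrl) {
  const allowed = allowedPrefixes.some(p => sentDocumentUrl.indexOf(p) === 0);
  if (allowed) {
    // Known caller: return the real variables the page needs.
    return 'var authorized = true;';
  }
  // Unknown caller: emit JS that bounces the visitor to 'my' site.
  return "top.location.href = 'http://www.#####.###/~georgegg/';";
}
```

On the client, the page would request this script dynamically with its own `document.URL` attached as a query parameter, so a copied page on another domain gets the redirect instead of the variables.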
Back in the late '90s I used something like this to prevent people from copying my MIDI pages and linking to the files.
I'm sure everyone knows that the web became popular at least in part because of the view source aspect.
I doubt sincerely that most surfers even know what view source is much less what it means. Easy access to information and pure greed spurred Internet growth.
People hide their code because of the lazy thieves out there who want to copy your stuff. I know, I've had it happen to me several times. A lot of people (myself included) won't post source code or media to their sites because of this concern.
If you use (i)frames, it will still work, but you'll have to put it in the frame HTML (not the frameset page).
Why exactly are you so concerned with hiding your HTML source anyway? If you're using insecure hidden values in HTML forms then hiding the source isn't going to stop anyone, they can easily see what their browser is posting. If you're worried about someone copying your site and setting up something which looks the same (maybe to capture credit card numbers) then there's nothing to stop someone re-creating a site that looks very similar anyway, even if you could stop people viewing your HTML.
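To make the hidden-field point concrete, here is a sketch (field names and values invented) of what the browser actually serializes for a form POST; nothing about `type="hidden"` survives into the request:

```javascript
// Sketch: a "hidden" form value is posted in the clear like any other field.
// The field names and values here are made up for illustration.
const fields = {
  item: 'widget',
  price: '9.99',     // type="hidden" in the HTML -- invisible on screen only
  session: 'abc123', // also hidden, also fully visible in the request
};

// This is what the browser builds for an application/x-www-form-urlencoded POST:
const body = new URLSearchParams(fields).toString();
console.log(body); // item=widget&price=9.99&session=abc123
```

Anyone watching their own traffic sees the "hidden" values in plaintext, so hiding the HTML source buys nothing here.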
I am running some advanced spiders (doing different stuff). Please sticky me the example of a "non-downloadable" page or site. If I can crawl it using my spiders, I can save everything I get, on my HD, to a DB, in XML - or anywhere you'd like :)
Any page that contains HTML, I will spit back at you in perfect working order. Sure, you can encode stuff via JS, but it can be decoded just as easily.
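For instance, the classic JS encoding trick ships an escaped string plus a one-line `document.write(unescape(...))` decoder, and it reverses with the very same call; the string below is just `<h1>Hello</h1>` escaped, chosen here for illustration:

```javascript
// Classic JS "source hiding": the page contains only an escaped string and a
// one-line decoder. unescape() reverses it instantly, in any console.
const obfuscated = '%3Ch1%3EHello%3C/h1%3E';

// The "protected" page would do: document.write(unescape(obfuscated));
// Anyone can run the same call and read the markup in the clear:
console.log(unescape(obfuscated)); // <h1>Hello</h1>
```

The decoder is right there in the page source, so the "encoding" is one copy-paste away from plaintext.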
Sticky-mail for a URL.
<added>Oops, can't find the URL, and I must've installed this program in my other office (in another state). Sorry.</added>