Thing is - it isn't hard to write code to do the same thing for your own site (i.e. to aggregate your own news). But how does that tie in with copyright issues, or is it still very grey?
For instance - if I were to write my own spider that checks headlines / bodies on a few sites every day for keywords, then adds the matching headlines (as links back to the original sites/articles, of course) to my own site; is that 'different' from reading 20 sites myself and manually posting a link to 'an interesting article I found'?
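For what it's worth, the filtering step being described is simple enough to sketch. Below is a minimal, hedged example that matches keywords against RSS 2.0 item titles and descriptions; the sample feed and keywords are made up for illustration, and a real spider would fetch each feed over HTTP politely (honouring robots.txt, caching, and identifying itself) - which is exactly where the copyright and server-load questions come in:

```python
# Minimal sketch: filter RSS 2.0 items by keyword (stdlib only).
# The feed content below is a made-up sample, not a real source.
import xml.etree.ElementTree as ET

def matching_headlines(rss_xml, keywords):
    """Return (title, link) pairs whose title or description
    contains any of the given keywords (case-insensitive)."""
    wanted = [k.lower() for k in keywords]
    matches = []
    for item in ET.fromstring(rss_xml).iter("item"):
        title = item.findtext("title", default="")
        desc = item.findtext("description", default="")
        haystack = (title + " " + desc).lower()
        if any(k in haystack for k in wanted):
            matches.append((title, item.findtext("link", default="")))
    return matches

SAMPLE = """<rss version="2.0"><channel>
<item><title>Python 4 released</title>
<link>http://example.com/a</link>
<description>Big news for developers</description></item>
<item><title>Weather report</title>
<link>http://example.com/b</link>
<description>Sunny all week</description></item>
</channel></rss>"""

print(matching_headlines(SAMPLE, ["python"]))
# → [('Python 4 released', 'http://example.com/a')]
```

The technical part is trivial; as the replies below note, whether you may republish the matched headlines is the real question.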
Whatever method you use to "take" information from another site, that material is still protected by their copyright. Bear in mind the server load, too, or your site may end up being classed as a sponger.
The other site has done the work to create and assemble the material, so they would take a dim view of someone sponging off it.
If you intend to do as you mentioned, you will need to contact the site owner for permission to do so, especially if the plan is to do that automatically.
Other places, like the Associated Press, allow you to use their RSS feeds for free if you're non-commercial, but require you to license the feed content if you're a commercial entity. Restrictions like these are generally spelled out on the site's RSS links page. Just keep an eye out for them; otherwise you're OK.