I was just wondering... there are quite a few news aggregator sites around these days that scrape headlines off other sites and republish them as RSS feeds for a fee.
Thing is - it isn't hard to write code to do the same thing for your own site (i.e. to aggregate news onto your own site). But how does that tie in with copyright, or is it still a grey area?
For instance: if I wrote my own spider that checks headlines and bodies on a few sites every day for keywords, then adds matching headlines (as links to the original sites/articles, of course) to my site, is that any 'different' from reading 20 sites myself and manually posting a link to 'an interesting article I found'?
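To make the question concrete, here's a minimal sketch of the kind of spider I mean, using only the standard library. The feed content, function name, and keyword list are hypothetical examples, not any particular site's API:

```python
# Minimal sketch of a keyword-filtering feed spider (hypothetical example).
# Parses an RSS 2.0 document and returns headlines that mention a keyword,
# so they can be republished as links back to the original articles.
import xml.etree.ElementTree as ET


def matching_headlines(rss_xml, keywords):
    """Return (title, link) pairs for items whose title or description
    contains any of the given keywords (case-insensitive)."""
    root = ET.fromstring(rss_xml)
    hits = []
    for item in root.iter("item"):
        title = item.findtext("title") or ""
        desc = item.findtext("description") or ""
        link = item.findtext("link") or ""
        text = (title + " " + desc).lower()
        if any(k.lower() in text for k in keywords):
            hits.append((title, link))
    return hits


# Stand-in for a fetched feed; a real spider would download this via HTTP.
SAMPLE_FEED = """<rss version="2.0"><channel>
  <item><title>Python 3 released</title>
        <link>http://example.com/py3</link>
        <description>New language version</description></item>
  <item><title>Weather update</title>
        <link>http://example.com/wx</link>
        <description>Rain expected</description></item>
</channel></rss>"""

# Emit each match as a link to the originating article.
for title, link in matching_headlines(SAMPLE_FEED, ["python"]):
    print('<a href="%s">%s</a>' % (link, title))
```

A real version would fetch each feed URL on a schedule and de-duplicate items it has already seen, but the core filtering logic is just this.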
It depends on the site. Generally, if they just stick a link to their RSS feed on their site, they *want* you to use it. After all, it's another way to generate traffic to the news site.
Other places, like the Associated Press, let you use their RSS feeds for free if you're non-commercial, but require you to license the feed content if you're a commercial entity. Terms like these are usually spelled out on the page that links to the feeds, as is generally the case whenever there's a restriction on use. Just keep an eye out for those restrictions; otherwise you're OK.