"Over the past year, Northern Light has seen booming demand for search, classification, taxonomy, and content solutions from our enterprise customersThat's were the money is.
reminds me of this thread:
Users spend £1bn on internet searches [webmasterworld.com]
The beginning of the end for free websearch:
Ads for Joe User - quality search for deep pockets?
Free websearch gave NL some brand visibility for accessing highly targeted scientific and commercial databases, which many are willing to pay for. It's nothing new, as I suggested by citing Lexis-Nexis, and there are more. These scientific databases have their own Web and electronic distribution channels, of course, but NL continues to make a pitch for being able to access quality targeted content from a broad range of them.
The free web search was basically a Web branding strategy that was tangential to NL's core business (or targeted core business). Given current trends, my feeling is that the strategy of running a public web SE to promote special collections has outlived its usefulness.
"Northern Light will continue to maintain and update its index of more than 350 million Web pages to provide enterprise customers with search of the Web using Northern Light's patented classification technology, and will continue offering custom Web searching for enterprise customers."
Now let me get this straight. Northern Light will continue to crawl my sites, and the CIA gets access to their SERPs from advanced algorithms, but the public doesn't?
What's the robots.txt user-agent for Northern Light?
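For what it's worth, Northern Light's spider has historically identified itself as "Gulliver" (worth verifying against your own logs before relying on it). If that's still the name, a blanket exclusion would look something like this:

```
User-agent: Gulliver
Disallow: /
```

Of course, this only works if the spider keeps announcing itself honestly.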
This is the first step in the right direction.
For them I think this is a step in the right direction, but only in the very short term: it can only help enhance profitability. However, as asked above, "What's the robots.txt user-agent for Northern Light?" In the long term they may lose out; if they think they are going to use my content for free and give [almost] nothing in return, then they need to think again. Google's subscription service will erode their market anyhow; it's just a matter of time.
>Tell me more. I like to track these things.
I know it's only Jan but that could well be 2002's understatement of the year :)
Excellent point. I've never been keen on the argument that SEs owe me for using my site in their SERP, but when they go private they relegate themselves to the 'harvester' category.
I'd like to see a lot more sites adopt the "Special Collection" model, where you pay for content with a micropayment. It helps to index more of the invisible web, and that's necessarily A Good Thing.
As prowsej points out, a spider can identify itself as a browser. It could also act like a user, pulling all graphics files from a page, pause between page requests, follow user-type browsing patterns, etc. It could even hit your site from a variety of quite different IP addresses to further mimic user behavior (it's a rare human that follows every link and looks at every page!). However, it can't erase its tracks in your logs. I doubt if NL or anyone else would go to all this trouble, with the possible exception of making the user agent look like a browser and changing IP addresses to defeat IP-based exclusions or cloaking.
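That last point, that a spider can't erase its tracks in your logs, is the one defence a webmaster keeps. A minimal sketch of the idea, using made-up log data and a hypothetical `likely_spiders` helper: a client that fetches nearly every distinct page on a site is probably a crawler, whatever its User-Agent claims.

```python
from collections import defaultdict

# Hypothetical mini access log: (ip, path) pairs as they might look
# after parsing a combined-format log file. All values are invented.
LOG = [
    ("10.0.0.1", "/"), ("10.0.0.1", "/about.html"),
    ("10.0.0.1", "/products.html"), ("10.0.0.1", "/contact.html"),
    ("10.0.0.2", "/"), ("10.0.0.2", "/products.html"),
]

def likely_spiders(log, threshold=0.9):
    """Flag IPs that fetched nearly every distinct page on the site.

    A real visitor rarely follows every link; exhaustive coverage is a
    strong hint that the client is a spider, whatever its UA string says.
    """
    all_pages = {path for _, path in log}
    pages_by_ip = defaultdict(set)
    for ip, path in log:
        pages_by_ip[ip].add(path)
    return [ip for ip, pages in pages_by_ip.items()
            if len(pages) / len(all_pages) >= threshold]

print(likely_spiders(LOG))  # 10.0.0.1 fetched all 4 pages
```

A spider rotating through multiple IPs would dodge this particular check, which is exactly the mimicry described above, but each IP's requests are still sitting in the log for other kinds of analysis (request timing, missing image fetches, and so on).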
we've just done a batch of research on a number of subjects and nearly all of it was done with either Northern Light or Google... Northern Light is the only SE that produces results that are both accurate and different from Google's
even if I don't pay to use it I'm keen to stay listed...I'd expect any site that wants to be found by academics and journalists to want to stay in their directory...and I am a BIG fan of our site being found by academics and journalists