I've been reading with interest some posts about the XML feed to the major engines (Inktomi, AltaVista, Teoma/Ask, and FAST/Lycos) that is administered by various providers (I won't post company names).
In this post, Makemetop (a senior member) mentions that a feed consists of:
A properly constructed title
A properly constructed description
A list of about 6 keyword phrases
A 150-word description of the page content
The URL to be spidered
The URL you wish to track through
The URL you wish to have displayed
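For what it's worth, the field list above maps naturally onto a per-URL record. This is only a guess at the shape - the element names and structure below are my own invention, since each feed provider defines its own schema:

```xml
<!-- Hypothetical feed record; element names are invented for illustration.
     Each provider uses its own schema, so treat this as a sketch only. -->
<record>
  <title>Acme Widgets - Industrial Widget Supplier</title>
  <description>Acme supplies industrial widgets in all standard sizes.</description>
  <keywords>
    <phrase>industrial widgets</phrase>
    <phrase>widget supplier</phrase>
    <!-- ...about 6 phrases total... -->
  </keywords>
  <body><!-- the ~150-word description of the page content --></body>
  <crawl-url>http://www.example.com/page-c.html</crawl-url>
  <track-url>http://tracker.example.net/redirect?id=123</track-url>
  <display-url>http://www.example.com/widgets.html</display-url>
</record>
```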
The real question from my perspective is this:
Is this feed actually reading a page at all, or is it simply taking the information provided, creating "spider food," and then forwarding the user to the page that they should see? The list of keyword phrases and the 150-word description would suggest that no real page is involved, only dynamically generated content.
On the other hand, the "URL to be spidered" suggests that a page IS actually crawled.
Anyone have experience with this?
Also, if I may describe a situation, any insight there would be appreciated:
We want to use the XML feed for a client that has a great number of pages. However, we currently use our own proprietary tracking application to show them what's going on with their campaign.
So what we have is:
Page A (the client's actual web page, to which the human user should always be directed)
Page B (a specific URL of a site on our server that allows us to track traffic/referrers and that automatically forwards to Page A, the client's actual page)
Page C (the actual content that normal spiders see)
Yes - you see what I'm getting at here (cloaking - but only for the purposes of tracking traffic with a reliable redirect).
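To make the "Page B" hop concrete: all it really does is log the hit and then issue a 302 to Page A. Here's a minimal sketch in Python - the URL and the function name are made up for illustration, and our real tracker is a proprietary application, so this is just the idea, not the implementation:

```python
# Sketch of "Page B": record the referrer, then redirect the visitor
# to the client's real page (Page A). All names/URLs here are
# illustrative assumptions, not our actual tracking application.
from datetime import datetime, timezone

CLIENT_PAGE_A = "http://www.example-client.com/widgets.html"  # assumed URL

def track_and_redirect(referrer, log):
    """Append a hit record to the log, then return a 302 redirect to Page A."""
    log.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "referrer": referrer,
    })
    status = "302 Found"
    headers = [("Location", CLIENT_PAGE_A)]
    return status, headers

# Example: a visitor arrives from a search-engine results page.
hits = []
status, headers = track_and_redirect("http://www.altavista.com/results", hits)
```

The human user never notices the extra hop; the question is what the feed's spider should see.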
Going back to the original question: do we need "Page C" at all, or just the forwarding URL? Will the company administering the "XML" or "Trusted Feed" take care of "Page C" - i.e., what the spiders see?
As always, I hope that made sense.
Thanks in advance for any insight you may have.