Forum Moderators: DixonJones
Our ads point to content that can also be reached through regular navigation, which inflates the visits recorded for the landing page. If I make the landing page unique by adding parameters to the URL, I don't want that parameterized page to be indexed by search engines: if the page with parameters is indexed and then visited from a search result, it will be recorded as a hit from the ad.
Has anyone come up with a creative way to get around this problem?
This also assumes that the page that displays the ad does NOT have any other means to navigate directly to the ad's destination page.
Instead of a URL with a marker (and tabulating parameters for the destination page), consider using path analysis, particularly the WT feature called Single-Step REVERSE path analysis, and count how many predecessors of the destination page are the ad's display page.
If you are only using the appended URLs with an ad-serving agency, why would they be indexed?
However, two possible solutions would be:
1) Point the ads to duplicates of the landing pages hosted within a directory that is marked disallow in the robots.txt file.
and/or
2) If appending parameters to the destination URL, e.g.:
landingpage.html?campaign=AdServer
why not cross-check that the referrer for that user was the ad server's domain?
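That cross-check could be sketched like this (the ad server hostname here is a placeholder for your agency's actual domain, and keep in mind the Referer header can be missing or spoofed, so this is a reporting filter, not a guarantee):

```python
from urllib.parse import urlsplit

def is_from_ad_server(referrer: str, ad_domain: str = "adserver.example.com") -> bool:
    """Return True if the Referer points at the ad server's domain.

    ad_domain is an assumed placeholder; substitute your agency's host.
    """
    host = urlsplit(referrer).hostname or ""
    return host == ad_domain or host.endswith("." + ad_domain)

# Count the hit as an ad click only when the referrer matches
print(is_from_ad_server("http://adserver.example.com/banner?id=7"))   # True
print(is_from_ad_server("http://www.google.com/search?q=landing"))    # False
```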
James
I have considered the idea of duplicate landing pages in a folder that is disallowed in robots.txt. This does mean that we would need to maintain a duplicate set of pages though.
Thanks for the input.
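For reference, the robots.txt entry for that approach is only a couple of lines (the directory name is a placeholder):

```
User-agent: *
Disallow: /ad-landing/
```

Anything placed under that directory should be skipped by well-behaved crawlers, though the duplicate-maintenance cost remains.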
Am I on the right track here? Tracking real estate on the homepage is important. What is the conventional way to do this?
I did have another solution to this. The homepage ad could pass a parameter, perhaps indicating the location of that ad on the front page. On the landing page, that parameter could switch the robots meta tag to NOINDEX, which should keep the page out of most search indexes.
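A minimal server-side sketch of that switch, assuming a hypothetical `adpos` parameter carried by the ad link (the parameter name and URLs are made up for illustration):

```python
from urllib.parse import parse_qs, urlsplit

def robots_meta(url: str) -> str:
    """Return the robots meta tag for a landing page request.

    If the assumed ad parameter 'adpos' is present, the page declares
    itself NOINDEX so the parameterized URL stays out of search indexes.
    """
    params = parse_qs(urlsplit(url).query)
    if "adpos" in params:  # request arrived via an ad link
        return '<meta name="robots" content="noindex">'
    return '<meta name="robots" content="index,follow">'

print(robots_meta("http://www.example.com/landing.html?adpos=homepage-top"))
print(robots_meta("http://www.example.com/landing.html"))
```

The plain URL remains indexable, so regular navigation to the page is unaffected.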
Another idea: use Reverse Path Analysis with the parameterized internal page as the starting point. You'd get a list of pages that preceded your target page, and the list would also tabulate instances where that page had no preceding page (i.e., it was the start of a visit). When the preceding page is the home page, you'll have your count. However, this method counts clicks, not visits, so if a given visit clicked that link twice, it would be counted as two.
Either of these would let you stop worrying about whether a search engine is indexing the parameterized link (if the page is deep, you might actually want this link to be spidered, since it may be the only way that page gets into an index).