I have a website with several category pages, which are in fact HTML sitemaps to facilitate navigation. There is no duplicate content issue among these pages.
However, these pages do contain many links (sometimes more than 100) to a single target page, with a different URL parameter value and anchor text for each link.
The target page displays different information for each parameter value, but since its content is lean, I have made it canonical to avoid potential thin/near-duplicate content issues.
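To make this concrete, here is a simplified sketch of the setup (file names, parameter names, and URLs below are made up, not my actual ones):

```html
<!-- On a category page: many links to the same target, one per parameter value -->
<a href="/item.php?id=101">Red widget</a>
<a href="/item.php?id=102">Blue widget</a>
<!-- ... sometimes 100+ of these, all pointing at the same target page ... -->

<!-- In the <head> of the target page, every parameter variant declares
     the same canonical URL to avoid thin/near-duplicate content: -->
<link rel="canonical" href="https://www.example.com/item.php">
```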
I am having some issues getting that site indexed by Google, and I am wondering whether my HTML sitemap/category pages are to blame.
Could it be that Google interprets these pages as an attempt to manipulate bots (by linking excessively to the same page with too many keyword-rich anchor texts)?
It is not my intention to do so, but since there is no feedback from Google, I am trying to understand what is really happening.
I am considering setting 'noindex,follow' on these category pages, which as far as I know would mean something like the snippet below. What do you think?
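For reference, this is just the standard robots meta tag, to show what I mean:

```html
<!-- Ask search engines not to index the category page itself,
     but still to crawl/follow the links it contains -->
<meta name="robots" content="noindex,follow">
```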