I use a free utility called GSiteCrawler which is one of the tools listed in the Google FAQ (above link).
Basically the program crawls your site and generates a file called sitemap.xml. You then upload this to your root directory. Then you sign up to Google Sitemaps and a spider will visit your site and retrieve the sitemap.
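For anyone who hasn't seen one, sitemap.xml is just a plain list of URLs with optional metadata. A minimal example (the URL and date are placeholders, using the sitemaps.org 0.9 schema):

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.com/</loc>
    <lastmod>2005-06-01</lastmod>
    <changefreq>daily</changefreq>
  </url>
</urlset>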
We have a content-rich site that needs frequent crawling - content changes several times a day - but we're on a shared server, so we don't have root access to install or run Python and associated scripts.
Any advice, other than changing my useless host, that would let us generate the map without having to update or upload it manually?
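One workaround that needs nothing installed on the server: if your host at least gives you ordinary FTP access (most shared hosts do), you can rebuild and upload the sitemap from your own machine on a schedule. A rough Python sketch, assuming you keep a local copy of the static site - the host names, paths and credentials below are all placeholders:

import os
from datetime import datetime, timezone
from ftplib import FTP
from io import BytesIO

SITE_ROOT = "/path/to/local/copy"    # local mirror of the static site (placeholder)
BASE_URL = "http://www.example.com"  # public URL of the site (placeholder)
FTP_HOST, FTP_USER, FTP_PASS = "ftp.example.com", "user", "pass"  # placeholders

def build_sitemap(root, base_url):
    # Walk the local copy and list every .html page with its last-modified date.
    entries = []
    for dirpath, _dirs, files in os.walk(root):
        for name in sorted(files):
            if not name.endswith((".html", ".htm")):
                continue
            path = os.path.join(dirpath, name)
            rel = os.path.relpath(path, root).replace(os.sep, "/")
            mtime = datetime.fromtimestamp(os.path.getmtime(path), timezone.utc)
            entries.append(
                "  <url>\n"
                f"    <loc>{base_url}/{rel}</loc>\n"
                f"    <lastmod>{mtime.strftime('%Y-%m-%d')}</lastmod>\n"
                "  </url>"
            )
    return ('<?xml version="1.0" encoding="UTF-8"?>\n'
            '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
            + "\n".join(entries) + "\n</urlset>\n")

def upload(xml_text):
    # Push the file into the web root over plain FTP; no shell or root access needed.
    with FTP(FTP_HOST) as ftp:
        ftp.login(FTP_USER, FTP_PASS)
        ftp.storbinary("STOR sitemap.xml", BytesIO(xml_text.encode("utf-8")))

if __name__ == "__main__":
    upload(build_sitemap(SITE_ROOT, BASE_URL))

Schedule it with cron or the Windows Task Scheduler on whatever machine holds the local copy, and the sitemap stays current without anyone touching the server by hand.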
The site is in my profile, by the way.
I should probably also have mentioned that our site doesn't run a database, which makes dynamic XML sitemap generation rather difficult. I'm starting to think we should build a database - it seems easier than all this, and it would let us do RSS too.
Just need someone to key in thousands of pages of content... ;-)
But all this surely still depends on having root access to the web server to install programs, doesn't it? That's something we haven't got.
So I need a database and a new ISP...!
The new sitemap was submitted; that is the only thing that changed in that time frame. Googlebot grabbed the sitemap, crawled the site, and the site was banned.
site:www.theexemplifieddomaininquestion.com shows no results, and searching for www.theexemplifieddomaininquestion.com says "We have no information about that domain."
Your search - link:www.theexemplifieddomaininquestion.com - did not match any documents.
Yahoo shows this:
Results 1 - 100 of about 13,600 for link:www.theexemplifieddomaininquestion.com