Forum Moderators: Robert Charlton & goodroi
/blogs/default - My blog page (that I've submitted to some blog sites)
/robots.txt (404)
/sitemap.xml.gz
However, Googlebot doesn't spider any of the links from sitemap.xml.gz or the ones off of /blogs/default.
Does this make sense for a new site? Should Google be following the links off of /blogs/default?
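One thing worth fixing right away is the 404 on /robots.txt. A minimal sketch of a robots.txt that allows crawling and also points Googlebot at the sitemap might look like the following (the sitemap URL assumes your gzipped sitemap sits at the site root, as in the post; substitute your actual domain):

```
# robots.txt - served from the site root at /robots.txt
# Allow all crawlers to fetch everything
User-agent: *
Disallow:

# Declare the sitemap location (absolute URL; example.com is a placeholder)
Sitemap: http://www.example.com/sitemap.xml.gz
```

Returning a 200 here (even with an empty ruleset) avoids any ambiguity about whether the site may be crawled, and the Sitemap line lets crawlers discover the sitemap without it being submitted manually.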
How deeply a new site gets crawled depends on several factors, including PageRank and the quantity and quality of inbound links.
When I've launched a new site, I make sure of the following factors:
1) Quality content: no duplicate content, for sure. I check this with Copyscape. We want to ensure that Google does not hit duplicate content in its initial crawls.
2) Quality links: both to the home page (initially more to the home page) and deep links (only after the site has been indexed by the major engines).
Once you have those two sorted out, you'll notice that crawl frequency and depth increase. Eventually, around a solid PR5-6, you'll find you're in the daily crawl -- which is where you can start experiments like tweaking titles, H1s, etc. :)