Google is crawling everything on our server via our sitemap; would a robots.txt file stop it from indexing these pages?
Am I right in saying that the sitemap lets Google know about every page on your server, and that the robots.txt file tells it which of the pages it has found we do not want indexed?
Do the two files work alongside each other?
We have several directories on the server that we need to keep there but do not want included in Google's index. They are not linked from anywhere, so this was never an issue until we submitted a Google sitemap, and now Google has found them all.
If I add these directories to the robots.txt file, will it stop Google indexing them?
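For reference, this is roughly what I had in mind adding to robots.txt (the directory names below are just placeholders, not our real paths):

    User-agent: *
    # Block crawlers from these directories (example paths only)
    Disallow: /example-private-dir/
    Disallow: /example-archive-dir/

Is that the right approach, or does it only stop crawling rather than indexing?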