These include the noindex, nofollow, and crawl-delay rules in robots.txt.
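For reference, a sketch of what those now-unsupported rules looked like in a robots.txt file (the paths here are made up for illustration; Google has announced it will no longer honor these lines):

```text
User-agent: *
Crawl-delay: 10
Noindex: /private/
Nofollow: /private/
```

Only rules like Disallow and Allow remain part of the supported robots.txt behavior; the directives above were never part of the official protocol.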
Noindex in robots meta tags: Supported both in the HTTP response headers and in HTML, the noindex directive is the most effective way to remove URLs from the index when crawling is allowed.
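In practice that means moving the directive out of robots.txt and into the page itself. A minimal example of the robots meta tag form:

```html
<!-- Placed in the <head> of a page that should be crawlable but not indexed -->
<meta name="robots" content="noindex">
```

For non-HTML resources such as PDFs, the equivalent is the HTTP response header `X-Robots-Tag: noindex`. Either way, the page must remain crawlable (not blocked in robots.txt), or the crawler will never see the directive.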
... and preparing for potential future open source releases...

This is an area where I hope we'll see some new developments. The new GSC does not seem to offer the robots.txt checking tools that the older GSC had, and those were a help when it complained about "blocked resources".