Hello - I have a site where the header is the same across the whole site, which means that if I add a noindex it applies to the entire site. But I only want the noindex on part of the site, e.g. example.php. Is there a way to do that, such as creating a separate head tag just for that section of the site? I'm not interested in a robots.txt solution.
"Two versions of the header" is the first thing that comes to mind. Do your index and no-index files have something in common, like location, or are they randomly mixed all over the place? Are the headers constructed by a php script? (You said "example.php" but this is the HTML forum, so I can't tell :()
If the files are distinguished in some obvious way, it should be pretty trivial to write a couple of extra lines of php telling it to look at the name of the requesting file. If the name includes /openfolder/ use the OK-to-index version. If the name includes /closedfolder/ use the noindex version.
If the whole header is assembled on the fly, then of course you wouldn't need the different versions. You'd just have that one extra <meta blahblah noindex> tag, either included or omitted.
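Here is a minimal sketch of what that conditional header could look like, assuming the shared header is a PHP include and that the no-index pages live under a hypothetical /closedfolder/ path (substitute whatever actually distinguishes your pages):

```php
<?php
// Inside the shared header include.
// $_SERVER['REQUEST_URI'] holds the path of the requesting page,
// e.g. "/closedfolder/picture.php".
// "/closedfolder/" is a hypothetical marker -- use whatever
// identifies the section of your site you want kept out of the index.
function wants_noindex(string $uri): bool {
    return strpos($uri, '/closedfolder/') !== false;
}
?>
<head>
<?php if (wants_noindex($_SERVER['REQUEST_URI'])): ?>
  <meta name="robots" content="noindex">
<?php endif; ?>
<!-- rest of the shared head markup -->
</head>
```

Pages outside the marked folder get the normal header; pages inside it get the extra meta tag, all from the single shared include.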
The head tag is in index.php (maybe this really belongs in the PHP section of ww). A few folders deep, a little like WordPress, we have the theme; one page there is picture.php, where I would like to have a meta noindex, but the header from index.php is used everywhere.
incrediBILL - it doesn't have to be a header solution, but robots.txt would not work because blocking each page would take two months. If there is another solution to get a noindex on my final pages, as I call them, that would also be great.
I don't think NOINDEX works any faster, if speed is your only criterion. I've had robots.txt blocking alter what's in Google's index pretty fast in some instances, not so fast in others. However, if you want to speed up the process, after altering robots.txt or whatever, go into WMT and use Crawler Access -> Remove URLs to get rid of them quickly.
I apologize. I was in a crazy mood the other day, but I didn't want to put anybody on the spot.
I did a little research and found some information about what I had in mind. I intended to suggest that a <Files> directive could be used to manage this issue in one fell swoop. You could use something like:
<Files "contact.html">
    Header set X-Robots-Tag "noindex"
</Files>
This example only matches the named file, but you can expand it to use regular expressions by preceding the file name with a "~" or by using <FilesMatch>. If you wanted to apply this header to a whole directory, you could use the <Directory> directive instead. Both Google and Bing support the X-Robots-Tag header.
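As a sketch of the <FilesMatch> variant, assuming Apache with mod_headers enabled; "picture" and "gallery" are hypothetical file names standing in for whatever pages you want kept out:

```apache
# Requires mod_headers to be loaded.
# Sends "X-Robots-Tag: noindex" for the listed .php files only;
# everything else is served without the header.
<FilesMatch "^(picture|gallery)\.php$">
    Header set X-Robots-Tag "noindex"
</FilesMatch>
```

This goes in the server config or an .htaccess file, which keeps the shared PHP header untouched entirely.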
Oh, and going back to the OP, plus his later explanation of why robots.txt won't cut it:
No matter what approach you take, if you want the pages to disappear right away, you will have to go into GWT and remove them manually. The "noindex" tag by itself won't make search engines delete a file from their existing indexes; nothing will change until the next time they come looking for the file.