A lot of our product pages link to our booking form, which has a different URL for each product and each selection, e.g. www.example.com/product-1/selection-1, www.example.com/product-1/selection-2, etc. These pages contain only form inputs and no content, but the URLs are static.
We have always blocked these pages in robots.txt to save some crawl budget, but they have to be linked from our product pages. So Google still finds and indexes them, because the robots.txt disallow prevents it from ever reading the noindex tag on the booking form pages.
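For context, the conflicting setup looks roughly like this (paths are placeholders for our real URLs):

```
# robots.txt — current setup: blocks crawling of the booking form URLs
User-agent: *
Disallow: /product-1/

# Meanwhile, each booking form page (e.g. /product-1/selection-1)
# carries a noindex tag in its <head> — but Googlebot never fetches
# the page while the Disallow above is in place, so it never sees it:
#
#   <meta name="robots" content="noindex">
```

So the Disallow and the noindex effectively cancel each other out: the page can still be indexed from the links alone, just without its content.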
So I was wondering: how would you handle these pages? Leave them disallowed in robots.txt to save crawl budget, or remove them from robots.txt so they get crawled and the noindex keeps them out of Google's index?
Plus, is there still a way to check the crawl budget? I've seen screenshots from the old Search Console where it was shown, but I can't seem to find that information in the newer version.