Sorry to say, but not only Google: most search engines make their first visit to a site's robots.txt file, and if nothing applies there, they go on to the inner pages and look for the meta robots tag. Many times we want to tell the search engine not to index particular pages of a site (that is my experience here). If a page returns a 403 Forbidden error, the search engine is not able to crawl it. So first change the page's response from 403 Forbidden to 404, then add the URL to a Disallow rule in the robots.txt file, and after that check the results.
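For example, a minimal robots.txt sketch that blocks one page could look like this (the path /private-page.html is only a placeholder, not a real URL from this thread):

```
# Ask all crawlers to skip one hypothetical page
User-agent: *
Disallow: /private-page.html
```

And on an inner page itself, the meta robots tag would sit in the <head> section, roughly like this:

```
<!-- Ask compliant crawlers not to index this page or follow its links -->
<meta name="robots" content="noindex, nofollow">
```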
You are always welcome to ask questions.
Thanks, and have a happy day.