Here's a perplexing one.
Our CSS files are served from a different domain. That domain has no robots.txt file, and any file/page that doesn't exist returns a 403 (access denied) instead of a 404. Google's documentation says that a robots.txt returning a 4xx response is treated as if there are no crawl restrictions, i.e. go ahead and crawl the site. Files that do exist on that domain, like the CSS files, return a 200.
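For context, Google's documented handling of robots.txt status codes can be sketched like this. This is just a minimal illustration of the published rules as I understand them, not anything Google actually runs; the function name is mine, and it ignores edge cases like 429s and redirects:

```python
def robots_txt_effect(status: int) -> str:
    """Map a robots.txt HTTP status code to Google's documented crawl behavior."""
    if 200 <= status < 300:
        return "parse rules"      # robots.txt exists; obey its directives
    if 400 <= status < 500:
        return "full allow"       # 403/404 etc.: treated as no robots.txt, crawl everything
    if 500 <= status < 600:
        return "full disallow"    # server error: assume everything is blocked
    return "retry/other"          # redirects and other cases not covered here

# A 403 on robots.txt should mean "full allow" -- which is exactly why
# a "blocked by robots.txt" verdict on this domain is so puzzling.
print(robots_txt_effect(403))   # full allow
print(robots_txt_effect(500))   # full disallow
```

By those rules, the 403 we return for the nonexistent robots.txt should be the harmless case.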
So until recently (before April 19), the CSS files on that domain were accessible to Google. Nothing has changed on our end, but now the CSS files come back as "blocked by robots.txt" in Fetch as Google and in mobile-friendly tests. That makes no sense, since there is no robots.txt on that domain.
This started happening sometime between 4/19 and 4/24 and is still happening.
So, to summarize:
1. Suddenly the CSS files are reported as blocked, when they are in fact still accessible.
2. Google says the CSS files are blocked by a robots.txt file that doesn't exist (requesting it has always returned a 403, if that's relevant to anything).
Any thoughts/ideas on this? Has anyone seen similar weirdness, especially a change in how Google treats a 403 on robots.txt?