e.g.
/dir1/dir2/something.html
/dir1/something_else.html
Very curious indeed.
It started the day with 11-character page names, worked its way up to 17 characters, and mixed 17- and 18-character names together for a while. Next came page names of 20 characters or more, no longer in length order (i.e. all lengths over 19 characters were jumbled up).
Then, after pausing for a fresh robots.txt, it started over with 6-11 character URLs, mixed up, with a few longer ones thrown in (picking up ones it had missed earlier?), then again slowly increased the length: all 12 characters, all 13, etc.
The actual sequence within each URL length was mostly a batch of pages from one directory in alphabetical order, then a page or more from elsewhere, then another batch from the earlier directory in a fresh alphabetical sequence (not continuing from where it left off), and so on.
For the URLs over 20 characters, the sequence was alphabetical regardless of length, so it looks like there's a 19- or 20-character index being used somewhere (or a longer index that includes the http://domain part).
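Just to make the guess concrete: if the crawler really sorts on a fixed-width key of roughly 20 characters, the pattern above falls out naturally, because all URLs longer than the key collapse into one alphabetical bucket. A minimal sketch of that hypothesized ordering (KEY_WIDTH, crawl_order, and the sample URLs are my own illustration, not from the logs):

```python
# Hypothesis: URLs are sorted on a fixed-width key, so short URLs come out
# grouped by exact length, while everything at or past the key width ends up
# in a single alphabetical run regardless of length.
KEY_WIDTH = 20  # assumed index width; the observations suggest 19 or 20

def crawl_order(urls):
    # Sort by (length capped at KEY_WIDTH, then alphabetically).
    return sorted(urls, key=lambda u: (min(len(u), KEY_WIDTH), u))

urls = [
    "/dir1/dir2/something.html",        # 25 chars -> long bucket
    "/dir1/something_else.html",        # 25 chars -> long bucket
    "/a/b.html",                        # 9 chars
    "/dir1/a.html",                     # 12 chars
    "/z/very_long_page_name_here.html", # 32 chars -> long bucket
]

for u in crawl_order(urls):
    print(len(u), u)
```

The short URLs come out strictly length-first, while the three long ones sort purely alphabetically among themselves, matching what I saw.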
I don't think short URLs would help in any way; it got to the longer ones as well as the short ones. And quickly!