I always update robots.txt at least 48 hours before URLs that are not meant to be accessed go live. That's because Google may check the robots.txt file for changes only a few times per week.
A robots.txt disallow only tells crawlers not to spider the pages. The URLs can still appear as URL-only entries in the SERPs.
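As a minimal sketch, a disallow rule might look like this (the `/private/` path is just an illustrative placeholder):

```
User-agent: *
Disallow: /private/
```

Note that this blocks crawling of anything under /private/, but it does not stop the URLs themselves from being listed if Google discovers links pointing to them.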
If you don't want anything to appear in the search results, not even a URL-only entry, then you are better off using the meta robots noindex tag instead. That takes effect as soon as the page is spidered.
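The tag goes in the page's head section, for example:

```html
<head>
  <meta name="robots" content="noindex">
</head>
```

Keep in mind the two approaches conflict: if robots.txt blocks the page, the spider never fetches it and never sees the noindex tag, so the page must remain crawlable for noindex to work.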