The "Allow" directive is only supported by *some* of the major search engines' robots, since "Allow" is not part of the Standard for Robot Exclusion [robotstxt.org] "specification." Other robots may ignore it (resulting in them crawling your entire site, as you wish), or some of them may treat it as a fatal error and not crawl your site at all -- There's no telling which.
The second one will Disallow robots from fetching any URL on your site that starts with "/" -- and since every URL path begins with "/", that means it Disallows *all* URLs on your site.
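You can check that for yourself with a quick sketch using Python's urllib.robotparser (the example.com URLs are just placeholders):

from urllib.robotparser import RobotFileParser

# The "Disallow: /" form: every URL path starts with "/", so every
# URL is blocked for any robot that obeys robots.txt.
rp = RobotFileParser()
rp.parse(["User-agent: *", "Disallow: /"])

print(rp.can_fetch("*", "http://www.example.com/"))               # False
print(rp.can_fetch("*", "http://www.example.com/any/page.html"))  # False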
You have three choices:
1) Delete your robots.txt file, and put up with all of the 404 Not Found errors in your logs and the skewed "Site Statistics" reports that those errors cause.
2) Upload a blank robots.txt file. This is perfectly acceptable; it allows all robots to crawl the site, while preventing the aforementioned 404 errors.
3) Use the correct syntax to explicitly allow all URLs to be fetched:
User-agent: *
Disallow:

Note that the Disallow argument is blank, and that there is a blank line at the end of this file.
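If you want to confirm that this "allow everything" form does what it says, here's the same kind of sketch with Python's urllib.robotparser (again, this only shows how one standard parser reads it, and the URLs are placeholders):

from urllib.robotparser import RobotFileParser

# A blank Disallow value means "nothing is disallowed", i.e. allow all.
rp = RobotFileParser()
rp.parse(["User-agent: *", "Disallow:"])

print(rp.can_fetch("*", "http://www.example.com/"))                # True
print(rp.can_fetch("*", "http://www.example.com/deep/page.html"))  # True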