Forum Moderators: open
With both methods the script starts from its environment table: GET data ends up in the QUERY_STRING variable, while POST data is read from stdin (standard input) after determining how many bytes to read from the CONTENT_LENGTH variable.
Any book on CGI will explain this.
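To make that concrete, here's a minimal sketch of the retrieval logic in Python. The `read_form_data` helper is hypothetical (not from any particular library), but it mirrors what CGI.pm and similar modules do under the hood:

```python
#!/usr/bin/env python3
# Sketch: how a CGI script fetches form data for either method.
# read_form_data() is a hypothetical helper for illustration only.
import os
import sys
from urllib.parse import parse_qs

def read_form_data(environ=os.environ, stdin=sys.stdin):
    """Return the decoded form fields for a GET or POST request."""
    method = environ.get("REQUEST_METHOD", "GET").upper()
    if method == "POST":
        # POST: read exactly CONTENT_LENGTH bytes from standard input.
        length = int(environ.get("CONTENT_LENGTH", 0))
        raw = stdin.read(length)
    else:
        # GET: the data arrives in the QUERY_STRING environment variable.
        raw = environ.get("QUERY_STRING", "")
    return parse_qs(raw)
```

Either way the payload itself is the same urlencoded string; only where the script picks it up differs.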
Another difference is that GET data, since it is part of the URL, ends up in the server logs. POST data does not make it into the server logs, which provides some extra security.
Welcome to wmw
* As a rule, PHP, ASP, and Perl using CGI.pm are fine. Some Perl apps will be written to parse _only_ the query string to get the variables -- these would break under POST.
Yes, it all depends on how the server-side routines are written. Because the two methods [GET and POST] will encode the data differently, the server needs to retrieve it differently -- so you may need to change the server-side scripting.
However, as gethan mentioned, the subroutines on your server may already take that into account, in which case you won't need to change anything.
I'd say test it, and if you run into trouble, the folks in our Server-side Scripting forum will be here to help.
POST data is not sent as part of the URL. Instead it is sent in the body of the HTTP request for the new page.
GET and POST are used in different contexts. For example, if your end user bookmarks a GET page, they will also have the querystring info in their bookmark. POST pages won't bookmark the sent data. A querystring also shows up in the history list.
This has advantages and disadvantages. For example, if you use GET to send the info, and it contains a username and password, someone could look at the history list and retrieve them. If you use POST, they can't. There are also limits on how much info you can put into a querystring. There is effectively no limit on the size of a POST body.
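The payload format is identical either way; only the transport differs. A quick sketch (the example.com URL is made up for illustration):

```python
# Both methods carry the same urlencoded payload; only where it travels differs.
from urllib.parse import urlencode

fields = {"q": "get vs post", "page": "2"}
payload = urlencode(fields)  # 'q=get+vs+post&page=2'

# GET: payload rides in the URL, so it lands in logs, history, and bookmarks.
get_url = "http://example.com/search?" + payload

# POST: the same payload goes in the request body instead,
# accompanied by a Content-Length header of len(payload).
content_length = len(payload)
```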
If your user hits Back to a page reached via POST, they'll get an ugly warning saying that the data has expired, and they'll need to refresh before seeing anything. This also means another round trip to the server.
In the case of GET they'll pop back through the history without a nasty warning or (in most cases) a trip to the server. Ideal for passing search strings etc (which is how nearly all the SEs do it) so that people can back n' forth through the pages.
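That's why paginated results are usually driven by GET links. A tiny sketch, assuming a made-up `/search` script path:

```python
# Hypothetical search-results pager: GET keeps the query in the URL,
# so Back/Forward and bookmarks just work.
from urllib.parse import urlencode

def page_link(query, page):
    # '/search' is an invented script path for this sketch.
    return "/search?" + urlencode({"q": query, "page": page})

links = [page_link("cgi", n) for n in range(1, 4)]
```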
As always it's a case of picking the right tool for the right job.
Just a side note -- whilst POST data doesn't always make it into the logs, by no means does this make it secure. If you need solid security, look into the HTTPS protocol.
On a different issue:
One of the advantages of using the GET method, and an important one at that, is the ability of the page to be cached. Useful if people are likely to be using the back and forward buttons through 'action' pages that receive data.
That all depends upon the browser, Joshie. According to the [url=ftp.isi.edu/in-notes/rfc2616.txt]HTTP 1.1 specification[/url] section 13.9, responses to GETs with query strings are NOT to be considered fresh unless the server provides an explicit expiration time (e.g. an Expires header). In other words, IE and Netscape are wrong.
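So a CGI script that wants its GET responses cached has to opt in explicitly. A sketch of the headers it would emit (the `cacheable_headers` helper and the one-hour lifetime are my own invention):

```python
# Sketch: emit an explicit expiration time so a cache may treat
# a query-string GET response as fresh (per RFC 2616 section 13.9).
from datetime import datetime, timedelta, timezone
from email.utils import format_datetime

def cacheable_headers(max_age_seconds=3600):
    """Response headers opting a GET response in to caching."""
    expires = datetime.now(timezone.utc) + timedelta(seconds=max_age_seconds)
    return [
        ("Content-Type", "text/html"),
        # Explicit expiration time, as section 13.9 requires for freshness.
        ("Expires", format_datetime(expires, usegmt=True)),
        # HTTP/1.1 equivalent, which takes precedence where supported.
        ("Cache-Control", "max-age=%d" % max_age_seconds),
    ]
```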
Additionally, the forward and backward behavior of a browser is also browser dependent. Some browsers will "memory cache" forward and backward GET/POST data - others will not.
Both issues are extremely heated. Some want to string up Berners-Lee in the public square for section 13.9 paragraph 2. It's absolutely backwards from what is needed. There is a quiet war going on over it in browser circles. Moz refuses to support the spec on GETs, but Opera is obeying it. What sucks is that neither is considering what is best for users or websites. The only thing they should be doing is caching the stuff unless told specifically not to.