What, exactly, do you mean by track? (Sorry. You know exactly what you mean, but we don't.) The IP is in the raw logs. What you do with the logs depends on you and your log-wrangling software. The busier your site is, the more complicated it gets. If you're dealing with something like public WiFi, you'd have to look at the User Agent to get meaningful information.* And if it's a public terminal, as in a library, you're not likely to get much of anything useful.
* Unless you are the Chinese government and are only concerned with whether the IP exists at all. (Different thread.)
A website cannot see the individual IP addresses of each PC behind the router. It sees only the public IP address of your router: with NAT, every machine on the local network shares that one outward-facing address.
Is that what you need to find out? How to identify an individual user when there are lots of them going through the same router? You're not spying on your employees, are you? In this case you need to look at the UA. They will rarely be identical, unless you buy your computers in batch lots and the only people who can install software are technicians who come round and hit all the computers in a single sweep. (This is rare. If I check something on my library computers I can tell which one I used, because they were all inherited from different people at different times.) There's a pretty recent thread somewhere hereabouts linking to a site that illustrates just how distinctive your browser is. Probably in the foo forum.
Thanks for your clarification g1smd. @lucy24, what is a UA? As I said, I'm a newbie, so expect me not to have any idea about this at all. And where can I find the foo forum thread? I just wanted to know how websites manage to find out my IP address in an internet cafe with multiple PC units. Any further clarification from you or anyone on this forum will be highly appreciated. Tons of thanks :)
UA stands for user-agent, i.e. the browser. Your browser also gives out a few dozen other nuggets (screen size, installed Flash version, plugins, browsing history, and so on) that together can help to uniquely identify your machine.
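To make the idea concrete, here is a minimal sketch of how those nuggets can be boiled down to a single fingerprint. The attribute names and values below are purely illustrative (a real fingerprinting script collects them client-side via JavaScript); the point is just that hashing a handful of browser-reported properties yields a fairly distinctive ID without ever needing the IP address.

```python
import hashlib

def fingerprint(attrs):
    """Hash a set of browser-reported attributes into one short ID.

    Sorting the keys first makes the result independent of the
    order the attributes were collected in.
    """
    canonical = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Illustrative visitor; real scripts gather these values in the browser.
visitor = {
    "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) ...",
    "screen": "1920x1080",
    "timezone": "-0700",
    "plugins": "pdf-viewer,flash-11.2",
}
print(fingerprint(visitor))
```

Two machines only collide if every collected attribute matches, which is exactly why identically imaged batch-lot computers are the hard case.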
If you want to take it one step further, you could add persistent flash cookies to the mix.
The information is automatically sent out by the browser (or other "User Agent") before it ever picks up your page. Most browsers also send information about how the user got to your site. If you have a site of your own, you should look at your raw logs. A typical entry will look like this (I'll pick a well-known robot so I don't have to obfuscate anything):
126.96.36.199 - - [09/Aug/2011:02:41:09 -0700] "GET /hovercraft/index.html HTTP/1.1" 403 1272 "-" "Mozilla/5.0 (compatible; DotBot/1.1; http://www.dotnetdotcom.org/, firstname.lastname@example.org)"
This breaks down to:
126.96.36.199 the IP address of the visitor. The number could be anything from an individual human with a fixed IP address (typically a high-speed connection such as a cable modem) to an entire workplace all going through one great big router
- - the identd (remote logname) and authenticated-user fields of the Common Log Format. They're almost always just "-", since identd is rarely answered and most pages don't require HTTP authentication.
[09/Aug/2011:02:41:09 -0700] the time to the nearest second, in brackets, expressed in the server's local time; the -0700 is the offset from UTC (the standard that replaced Greenwich Mean Time). I happen to live in the same time zone as my server.
"GET /hovercraft/index.html HTTP/1.1" The request sent by the browser (or robot, or other) to your site. GET asks for the resource itself; HEAD asks for only the response headers, i.e. basic information about the page. In general, human browsers use HTTP/1.1, while some robots still use HTTP/1.0.
403 1272 The first number is the HTTP status code of the response, here 403 ("Forbidden", i.e. "get lost"). This particular robot never got past the htaccess file. The second number is the size in bytes of the response body they got instead, here the custom 403 page. (The one humans see if they blunder into an index-less directory.)
"-" in quotation marks is the "Referer" (sic), meaning what the visitor clicked to reach your site, or which page asked for the file (images will typically give the page they're on as referer). Robots generally send no referer, but neither do some human browsers, and neither do bookmarks or type-ins.
"Mozilla/5.0 (compatible; DotBot/1.1; http://www.dotnetdotcom.org/, firstname.lastname@example.org)" again in quotes. This is the UA or "User Agent", which, if you're thinking strictly of humans, is the browser. If a robot has an unpredictable IP address, it may be locked out by UA instead.
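The breakdown above follows the standard Apache combined log format, so each field can be pulled out mechanically. Here's a small sketch in Python that parses the example entry with a regular expression; the field names are mine, but they map one-to-one onto the pieces described above.

```python
import re

# Regex for the Apache "combined" log format: IP, identd, user,
# bracketed timestamp, quoted request, status, size, referer, UA.
LOG_PATTERN = re.compile(
    r'(?P<ip>\S+) (?P<identd>\S+) (?P<user>\S+) '
    r'\[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]*)" '
    r'(?P<status>\d{3}) (?P<size>\d+|-) '
    r'"(?P<referer>[^"]*)" "(?P<agent>[^"]*)"'
)

line = ('126.96.36.199 - - [09/Aug/2011:02:41:09 -0700] '
        '"GET /hovercraft/index.html HTTP/1.1" 403 1272 "-" '
        '"Mozilla/5.0 (compatible; DotBot/1.1; '
        'http://www.dotnetdotcom.org/, firstname.lastname@example.org)"')

m = LOG_PATTERN.match(line)
print(m.group("ip"))      # 126.96.36.199
print(m.group("status"))  # 403
print(m.group("agent"))   # the full UA string
```

Run over a whole raw log, a loop like this is all most log-wrangling software is doing under the hood before it aggregates by IP or UA.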
I think spyjunior wants to ask: "How do the gaming sites or .tv sites that restrict visitors from regions like China, India, etc. check or track those visitors?" Am I right?
And I'll add one more question: can users behind VPNs also be tracked by their original IP addresses?
I am using a remote proxy regularly because of the lousy network I am normally on and the proxy I use is configured to forward no IP address information from the client side to the outside world. The same can hold for VPN configurations.
The IP address of the client cannot be tracked in such situations, but I have encountered sites that blocked me anyway because the IP address of the proxy was located in a data center rather than at a residential address. I guess this had to do with scraper blocking, though, rather than with blocking individual visitors from a specific set of countries.
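That kind of block doesn't need to identify you at all; the site just checks whether the connecting IP falls inside a known data-center range. Here is a minimal sketch of that check using Python's standard ipaddress module. The ranges below are RFC 5737 documentation addresses standing in for real data-center blocks, which sites typically buy or compile from IP-intelligence providers.

```python
import ipaddress

# Stand-in ranges (TEST-NET-2 and TEST-NET-3); real sites use lists
# of actual hosting-provider CIDR blocks from IP-intelligence feeds.
DATACENTER_RANGES = [
    ipaddress.ip_network("198.51.100.0/24"),
    ipaddress.ip_network("203.0.113.0/24"),
]

def looks_like_datacenter(ip_str):
    """Return True if the IP falls inside any listed data-center range."""
    ip = ipaddress.ip_address(ip_str)
    return any(ip in net for net in DATACENTER_RANGES)

print(looks_like_datacenter("203.0.113.45"))  # True
print(looks_like_datacenter("192.0.2.7"))     # False
```

This is why a proxy or VPN hides your own address but can still get you blocked: the exit IP itself gives the game away.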
Here I have one question. Since, as you said, the IP address of a client can't be tracked through a proxy or VPN, does that mean Google and other CPA affiliate networks also can't track people who click on their own ads through such setups? If so, it would be very easy to inflate AdSense earnings.