Sorry, I don't know for sure, but I don't think any domain registration company will provide a list of the domains registered in the last month or year. If one did, it would be easy for anybody to find that page if it were static; if not, Google's spider most probably would not find it.
THERE IS SOMETHING ELSE ;-)
Very possible for .com / .net data, as Google is definitely using registration data for expired domains, and probably new domains too.
I'll drop a line to my local tld registrar and find out if the Men from the Plex have been knocking on their doors for registration data.
>>there is a site somewhere linking to new registrations and Google picks them up from that.
That's exactly what's happening. There's a bot publishing those registrations, and Googlebot seems to be hot on its trail within no time. At least that's how it appears to be.
[edited by: Marcia at 11:12 am (utc) on Aug. 12, 2004]
I'll now be had for duplicate (multiplicate?) content, I imagine.
I don't have the Google toolbar.
I didn't try to search for the domain on g-ogle.
I told nobody about the domains, and there were no pre-existing links etc. to the domains.
The domains didn't have a dedicated host; they only had a domain forwarding service set up while I built them to my "liking"... thus I could see traffic and robot 404 errors.
Within a week of registering the names, G-ogle came by, triggering 404s. It sort of pissed me off, because the domains were still being built and the information was not "pure" for a spider.
The only thing I could guess was that the registrar was leaking the domains to... 'somebody'.
OR, when attempting to register a domain, the availability search would trigger something related to g-ogle... but that doesn't make much sense either.
The company I used to register the domains is a subsidiary of V-sign.
It's a great big conspiracy and Matt Sludge is on the case as we speak ;-))
edited and added this tidbit for ciml's post below:
I didn't set up hosting before registering.
Domain forwarding was done at the registrar. You know the type of registrar... "purchase your domain and you get forwarding and 1 email account" etc.
[edited by: kahuna at 1:30 pm (utc) on Aug. 12, 2004]
Does anyone remember this?
<<< GoogleGuy roots around on a messy desk looking for the piece of paper that lists the exhaustive history for all domains in the world. >>>
If you install the toolbar for Google or other search engines, you will find that they eventually show up at new domains that you have not submitted anywhere yet.
Note that you also leave a trail of where you have been on the internet if you do not block referers in whatever browser you use. So if you are testing your new website and you visit another site that publishes its "most recent referers" list - voilà, you are linked on the web for Google to find.
Last but not least, they don't have to buy domain name lists from the registries; any respectable company can contract with the domain registrars directly to download the core database nightly. All they have to do is watch for changes and then check the new sites - it's fairly simple. Several whois databases do this already, such as "whois source", which will visit your new domain a few days after it's registered!
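The "watch for changes" step above can be sketched as a diff between two daily snapshots of the zone data. This is a hypothetical minimal sketch; the file names, and the assumption of one record per line in master-file style, are illustrative, not how any registrar actually delivers the data.

```python
def load_domains(path):
    """Collect the set of domain names that appear in a zone-file snapshot."""
    domains = set()
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith(";"):  # skip comments and blanks
                continue
            # first field is the owner name, e.g. "example.com. IN NS ns1.host."
            domains.add(line.split()[0].lower().rstrip("."))
    return domains

def diff_snapshots(yesterday_path, today_path):
    """Return (newly registered, deleted) domains between two snapshots."""
    old = load_domains(yesterday_path)
    new = load_domains(today_path)
    return sorted(new - old), sorted(old - new)
```

With two snapshots a day apart, the set difference alone yields the "new registrations" list the thread is speculating about.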
I've got about a dozen domains that I registered 5-6 years ago and may never develop. Why would Google want those "domain parked here" pages in their index?
Guess I'll go check if G knows about those domains....
This is the way some se's
>So what is the benefit to Google to index
> (and spider?) URLs that have no web site content now,
To get the fresh stuff and to track updates to the domain.
That is especially true of Google, which knows when an expired domain has changed hands.
Anybody can get hold of a list of the new registrations for .com / .net / .org. Google doesn't get all the zone files - the ccTLD registrars do not give such free access to theirs. However, doing daily updates on com/net/org/info/biz is trivial. It would take approximately an hour on a desktop PC to generate a list of new and deleted cnoib domains. From there, it is a simple question of feeding the list to a small pre-indexing program. It is all just a set of very simple SQL statements, but the database size is about 30G. Tracking the transits (domains moving between nameservers) takes a few hours, but it should all be easy enough even for the turnip fields of PhDs in Google. :)
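The "very simple SQL statements" for transit tracking might look like the following minimal sketch, using Python with an in-memory SQLite database. The table and column names are assumptions for illustration; a real system would keep the snapshots on disk, given the ~30G size mentioned above.

```python
import sqlite3

def find_transits(rows_yesterday, rows_today):
    """Given two daily snapshots of (domain, nameserver) pairs,
    return domains whose nameserver changed between the snapshots."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE y (domain TEXT PRIMARY KEY, ns TEXT)")
    con.execute("CREATE TABLE t (domain TEXT PRIMARY KEY, ns TEXT)")
    con.executemany("INSERT INTO y VALUES (?, ?)", rows_yesterday)
    con.executemany("INSERT INTO t VALUES (?, ?)", rows_today)
    cur = con.execute(
        "SELECT y.domain, y.ns, t.ns FROM y JOIN t ON y.domain = t.domain "
        "WHERE y.ns != t.ns ORDER BY y.domain"
    )
    return cur.fetchall()  # list of (domain, old_ns, new_ns)
```

A join on the domain name with an inequality on the nameserver column is all the SQL this takes, which supports the poster's point that the hard part is data volume, not query complexity.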