Forum Moderators: open
i have always had the impression that finding links to specific pages and also the corresponding toolbar function don't work properly.
now i just tried to search for link:domain and to my surprise i got exactly the same 16 results as when i search for "link domain" (incl. quotation marks).
is this not working properly, or am i just too stupid?
muesli
You need to include the entire URL. For example, if you wanted to find links to Yahoo, link:yahoo.com wouldn't work. Instead, you'd need to use link:www.yahoo.com . If you find this hard to remember, copy and paste the entire URL:
link:http://www.yahoo.com . Google doesn't mind the http:// part. You can only find links to a plain domain (yahoo.com) if the site itself uses that domain without any prefix.
Hope this helps. Let me know if you have any questions.
sorry, i might not have explained myself well enough. when i do what you describe i get fewer than 600 pages, but there are 100,000+ pages in the index that contain a link to this very page (spelling absolutely identical).
so i read somewhere in another thread (don't remember where) that link:domain was the solution.
what an annoying experience to see the link: command being treated as if it were a keyword...
so i might add a question:
muesli
Yes. Google will typically only show links from PR4 sites and above. If those other 99,400 links are PR3 or below sites, you won't find them using the link command.
I just enter the domain in the search field like this www.domain.com. From the next available choices, I'll select the "contain the term" option and that will usually show where the links are no matter what the PR is.
The corresponding query looks like this...
"+www.domain.+com"
thank you for this insight, i wasn't aware of that.
if google only awards PR credit for links on PR4+ pages, this would be a major departure from the original PR formula. (wouldn't it also mean that the formula couldn't calculate a virgin system, where all pages start with PR1?)
this theory (or has it been verified?) would also make sense from a feasibility point of view: the PR iterations would have to be calculated for far fewer pages (only those which have at least one PR4 link, if i understand it correctly), taking into account far fewer links (in my case something like 1:40). makes sense. (just as you have to be 18 to vote, a page has to have PR4 to "vote".)
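for reference, the original PR formula can be sketched as a simple power iteration. this is a toy illustration of the published formula, not what google actually runs, and the graph is made up; it shows the "virgin system" point: every page starts with the same rank and the iteration converges from there, with every linking page contributing a share.

```python
# Minimal PageRank power iteration -- a sketch of the published formula,
# NOT Google's actual implementation. Every page starts with equal rank
# (the "virgin system"), and repeated iteration converges regardless of
# the starting values; a PR4-only cutoff would change this fundamentally.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # all pages start equal
    for _ in range(iterations):
        fresh = {p: (1 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if outlinks:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    fresh[target] += share
        rank = fresh
    return rank

# Tiny hypothetical graph: a links to b and c, b links to c, c links to a.
ranks = pagerank({"a": ["b", "c"], "b": ["c"], "c": ["a"]})
```

note that in this formula every page "votes" with whatever rank it has, however small - which is why a hard PR4 threshold for passing credit would be a real change, not just a display choice.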
this would also be a possible answer to a theory grumpus brought up in a different thread [webmasterworld.com]:
I suspect that large sites don't get internal link benefits simply because it would greatly unbalance the whole scale.(..) Every site with lots of pages would eventually make it to PR9 or PR10.
even huge sites usually don't have that many PR4+ pages, so all the internal credit they get would be well deserved - and wouldn't get them to PR10.
btw, i just tried to find out how many PR4+ pages my site has, but the search "link:www.domain.com site:domain.com" yielded no results although i know of at least two dozen (toolbar guesses definitely excluded;-).
muesli
Add a space after link:
link: www.yourdomain.com site:www.yourdomain.com
I just picked this one up here within the past week. Great find by another member.
Logical? Yes. If you look at the query, you are requesting to view all links with that domain string. Then you are asking that the results only come from a particular site, in this instance, your own.
> link:www.domain.com
In this instance, you are requesting to view all links to your domain that Google calculates as PR4+. That is typical, though there may be certain instances where this doesn't apply.
Many are unaware that Google only shows links from PR4 and above sites. There are other SEs, like Fast/ATW, which will return a more detailed listing of links since PR is not a factor. In Google's case, PR is one of the top determining factors.
> If you look at the query, you are requesting to view all links with that domain string. Then you are asking that the results only come from a particular site, in this instance, your own.
yes, except that google would then have to give me 90,000 pages as a result, not 3,200. there are 88,900 pages of my site in the index and all of them carry a link to my frontpage.
muesli
> In order to show you the most relevant results, we have omitted some entries very similar to the 3200 already displayed. If you like, you can repeat the search with the omitted results included.
Click that link and see what happens. I've never had to look at anything with 90,000 pages so I'm not sure what Google does in this instance.
try getting to the end of those 3200 results and look for this last entry...
> In order to show you the most relevant results, we have omitted some entries very similar to the 3200 already displayed. If you like, you can repeat the search with the omitted results included.
was my first thought, too, but that's not the reason. the text shows "we have omitted some entries very similar to the 906 already displayed" (while 3200 was already shown as the result count). when i go to the end of the "expanded" search results i end up at page 100 with no "next" link. naturally.
so it's clear: google knows the 90,000 but somehow considers only 3200 when searching for the link. strange.
muesli
> link:www.domain.com site:www.domain.com
> link: www.domain.com site:www.domain.com
Notice the difference? One has no space after link: (link:www), the other does (link: www).
If you do this...
> link: www.domain.com -site:www.domain.com
You will see links from all other resources except the one you've excluded. Great for finding out how many external links Google is showing for a site while excluding the site itself.
i stopped believing that the trick with the space works. as far as i can see from the results "link:" is just treated as a keyword ("link") not a command.
that also explains the 3200 results: those are the pages where "domain", "com" and "link" appear.
sorry for that.
muesli
I don't believe you're doing an actual link: search.
Do a search for link:www.google.com . That's a link count. You'll get about 329,000 results. Now do a search for link: www.google.com -site:google.com . You'll get about 99K results and the notice that www wasn't searched for because it's too common. That should tip you off right there that this is not a link search, but let's continue. Add a + in front of the www and you have link: +www.google.com -site:google.com . This'll give you 61K results, and the first one is AltaVista. No way AltaVista is linking to Google. In fact, you'll see that if you check the cache.
If you insert a space and run the link: search by itself, it'll run. But the instant you introduce another special syntax, like site, you're changing the search from "search for pages linking to google" to "search for the words link, and www.google.com, and specify that they must not be at Google.com."
Let me give you another example. Search for link:the.yahoo.com . You'll get zero results. Search for link: the.yahoo.com . You'll get zero results. Now search for link: the.yahoo.com -site:yahoo.com . You'll get over a million results, because the changed nature of the search has marked "the" as a stopword and removed it.
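the effect described above - a space after link: demoting the operator to a plain keyword - can be mimicked with a toy tokenizer. this is purely illustrative, not google's actual parser, and it is simplified: here the spaced form is always demoted, whereas the posts above suggest google only does so once another syntax like site: is involved.

```python
# Toy query tokenizer -- an illustration only, NOT Google's real parser.
# "link:www.google.com" yields an operator, while "link: www.google.com"
# leaves a bare "link:" token with no argument attached, which can only
# fall through to an ordinary keyword.

def parse_query(query):
    operators, keywords = [], []
    for token in query.split():
        # Strip a leading "-" so "-site:google.com" is still recognized.
        name, sep, arg = token.lstrip("-").partition(":")
        if sep and arg and name in {"link", "site"}:
            operators.append((name, arg))
        else:
            keywords.append(token)
    return operators, keywords

print(parse_query("link:www.google.com"))
print(parse_query("link: www.google.com -site:google.com"))
```

in the second query, "link:" and "www.google.com" come back as keywords while only site: survives as an operator - exactly the "search for the words link, and www.google.com" behavior described above.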
My apologies for providing misinformation. > Goes to corner to pout and kicks himself in the buttocks on the way! Happy Birthday to me.
P.S. It sure would be nice if someone would just come along and say; "hey pageoneresults, the backward links syntax you are so excited about does not work. What do we have to do to make it clear to you? You dingbat." Thanks for being so kind!
muesli's not a dingbat either, but I'll disagree with this...
> ...as you have to be 18 to vote, a page has to have PR4 to "vote".)
You need approx PR4 to show in a "link:" search, but you can still pass on PR ("vote") with much less.
In my estimation, the voting stops at about PR-3 or PR-4 (which is possible because of the log scale).
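as a rough numeric illustration of why the log scale leaves room for such a cutoff - the base of the Toolbar scale is not published, so base 8 here is purely an assumed value:

```python
# Assumed log scale for the Toolbar: each visible step multiplies "raw"
# rank by the base. Base 8 is a guess for illustration, not a published
# figure.
BASE = 8
raw_rank = [BASE ** n for n in range(11)]  # toolbar PR0 .. PR10

# Under this assumption a PR4 page carries BASE**3 = 512 times the raw
# rank of a PR1 page, so cutting off votes a few toolbar points down
# discards only a sliver of the total rank in the system.
ratio = raw_rank[4] // raw_rank[1]
print(ratio)  # 512
```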
> You need approx PR4 to show in a "link:" search, but you can still pass on PR ("vote") with much less. In my estimation, the voting stops at about PR-3 or PR-4 (which is possible because of the log scale).
as i understand it (probably wrongly), you contradict yourself in those two sentences. first you say the PR4 estimate is wrong, then you estimate PR3 or PR4 yourself.
probably a misunderstanding, could you clarify?
muesli
My own estimation, which I mentioned, is that pages stop counting when they reach approximately minus four on the Toolbar scale (which shows as 0).
I chose my words badly, does it make more sense now?
It's just a working theory, I don't want to mention it too much in case everyone thinks I'm crazy (maybe they do, maybe I am...)
I may be drawing the wrong conclusion based on seeing the results of a 'max number of pages crawled' process, which would most likely produce very similar results to a 'min PR for crawling' process.