I use full addresses (http://...) in my <a href> page links rather than root-relative paths. I'm assuming most of the other sites are using DHTML, Flash, etc. links that the engines can't read. Is there a reason why some sites' internal pages are listed as backlinks? Is it very beneficial?
Each of your content pages needs a link back to your homepage that is DESCRIPTIVE, not just "Home". Google evaluates the word "Home" and sees no relevance to your website. The best way I have found to do this is to make your logo a link back to your index page and give it some descriptive ALT text. I had the logo link for years but never put in alt text. When I added some descriptive alt text, all of my internal pages showed up as backlinks! Magic!
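A minimal sketch of that markup, assuming a logo image in the site root; the domain, file name, and alt text here are made up for illustration:

<a href="http://www.example.com/"><img src="/logo.gif" alt="Example Widgets - handmade widget supplies and tutorials"></a>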
Also, these internal backlinks that showed up in Google were all PR0.
link:www.webmasterworld.com -site:www.webmasterworld.com
gives the same SERP as
"link www webmasterworld com" -site:www.webmasterworld.com
That is because the query commands cannot be combined: once -site: is added, Google stops treating link: as an operator and searches for it as plain text.
The form I suggested filters out all the internal backlinks, but it doesn't add any backlinks that Google wasn't showing anyway.
Google never shows all backlinks.
I'm convinced the link: command cannot be combined with other commands/options/etc.
I tend to agree with takagi on this. I am starting to feel that there is no way to see only the external backlinks for a site. Is that true?
My main problem is that Google shows only 1000 results. One of my competitor sites has around 2000 backlinks in this update, and I wish to follow them. If I could somehow filter out the internal backlinks, I would be able to see as many of the external ones as possible. Any ideas on how I can achieve this?
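One offline workaround, assuming you can copy the listed backlink URLs into a text file first (the file name and domain below are hypothetical), is a short Python sketch that drops the competitor's own pages:

# filter_backlinks.py - drop a site's own pages from a saved backlink list.
from urllib.parse import urlparse

TARGET = "example-competitor.com"  # hypothetical competitor domain

def host_of(url):
    # SERP listings often omit the scheme; add one so urlparse finds the host.
    if "://" not in url:
        url = "http://" + url
    return urlparse(url).hostname or ""

def is_internal(url, domain):
    host = host_of(url)
    return host == domain or host.endswith("." + domain)

with open("backlinks.txt") as f:  # one URL per line
    urls = [line.strip() for line in f if line.strip()]

for url in urls:
    if not is_internal(url, TARGET):
        print(url)  # external backlinks only

Note this only filters what Google already returned; it does nothing to get past the 1000-result cap itself.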
In the SERP you can see all pages known to AllTheWeb that point to the specified site (home page and subpages). To exclude internal links, add the site's own domain as an exclusion filter to the query.
AllTheWeb shows up to 4000 pages in the SERPs! Do realize, though, that Google might know more (or other) links than AllTheWeb does.
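A quick way to see how the two engines' data differ, assuming you have saved each engine's reported backlinks to a text file (the file names are hypothetical), is a Python sketch like this:

# compare_links.py - compare backlink lists saved from two engines.
def load(path):
    # One URL per line; ignore blank lines.
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

google = load("google_links.txt")        # saved from Google's link: results
alltheweb = load("alltheweb_links.txt")  # saved from AllTheWeb's results

print(len(google & alltheweb), "links known to both engines")
print(len(google - alltheweb), "links only Google shows")
print(len(alltheweb - google), "links only AllTheWeb shows")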
I tried the link: -site: combination with Webmasterworld and I got a few results (which puzzled me).
A closer inspection reveals that this command is actually the -site: operator at work, searching for the text link:url. Sorry.
There are two variations of the link: command. One looks like link:jn18yRfxWN0J:www.webmasterworld.com/. This is the one you get after searching for the URL and then clicking on 'link to'.
The other variation is the simple link:url, which gives the same results. This is the one I used to get those results in combination with -site:.
Imaster, you could use the 'contain the term' option to reveal more links, but that's a lot of work, I'm afraid :(
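If I understand that option right, it amounts to searching for the site's URL as plain text while excluding the site itself, e.g. (using WebmasterWorld as the example):

"www.webmasterworld.com" -site:www.webmasterworld.com

That finds pages that mention the URL, whether or not Google counts them as backlinks.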
For months the link: command showed no results and the site was PR0. The site is still PR0 now.
The inurl: command listed every page of the site, as well as several pages from other sites and directories (including an ODP clone, JoeAnt, and a JoeAnt clone) that linked to the site.
A few weeks back (or more?), the link: command started showing links from two other sites that linked to this one. Those links had been in place very soon after the site originally went live. The link: command still didn't show any of the more recent links that I know exist out there. It didn't even show any of those it already knew about and had previously shown via the inurl: command.
At least a week ago, the inurl: results changed so that only pages from the site itself were listed, and it still remains that way.
In the last few days, the link: command on the cw and gv datacenters has suddenly started listing all of the internal pages of the site, plus a link from the ODP dating to late May. It does not list the other links that it had previously been showing with either the inurl: or the link: command.
There has been an update of some sort, but the data is still incomplete or broken.