|Microsoft's Bing Could Improve Significantly If...|
| 5:11 pm on Mar 13, 2012 (gmt 0)|
My top suggestion would be for Bing to improve how its spidering translates into indexing. Bingbot is busy, but it often doesn't pick up the updates and changes.
My second suggestion concerns new sites. Bing just doesn't pick them up fast enough, if at all. My suggestion would be to index the new material first, and then, if it fails to meet the minimum levels required, let it drop down the rankings. At least the new site would have half a chance.
Note, I'm not talking about spam, I'm talking about genuine sites. The spam filter should deal with anything that isn't legitimate.
Perhaps those two things are tougher than we think to implement.
Any other suggestions for the Microsoft Bing team?
| 11:59 pm on Mar 14, 2012 (gmt 0)|
If it matters, I've seen Bing really increasing its indexing speed a lot lately. For example, their news index is rather quick, and fairly busy sites seem to have new content indexed in minutes.
It was only last year that they announced their new back-end system, so it sounds like it's scaling as they expected, and I'm sure their index will only improve.
| 12:17 am on Mar 15, 2012 (gmt 0)|
Maybe if it spent more time reading pages and less time feeding its appetite for robots.txt...
(This can't just be me. I pulled some random pieces of raw logs to re-check. Out of 15 bingbot requests in one batch, 11 were for robots.txt. Elsewhere: 7 of 9, 3 of 8, 5 of 12. Less than 50% would be headline news.)
:: sitting on hands ::
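For anyone wanting to re-check their own logs, here is a minimal sketch (not from the thread) of that tally: count what share of bingbot requests in a raw access log are for robots.txt. The combined log format and the sample lines are assumptions for illustration.

```python
import re

# Match the request path out of a combined-format access log line.
REQUEST_RE = re.compile(r'"(?:GET|HEAD) (\S+)')

def bingbot_robots_share(lines):
    """Return (robots.txt hits, total bingbot hits, share)."""
    total = robots = 0
    for line in lines:
        if "bingbot" not in line.lower():
            continue                      # only count Bingbot hits
        m = REQUEST_RE.search(line)
        if not m:
            continue
        total += 1
        if m.group(1).split("?")[0] == "/robots.txt":
            robots += 1
    return robots, total, (robots / total if total else 0.0)

# Invented sample lines, just to show the shape of the input.
sample = [
    '1.2.3.4 - - [15/Mar/2012:00:01:02 +0000] "GET /robots.txt HTTP/1.1" 200 120 "-" "Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)"',
    '1.2.3.4 - - [15/Mar/2012:00:01:05 +0000] "GET /page.html HTTP/1.1" 200 5120 "-" "Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)"',
    '5.6.7.8 - - [15/Mar/2012:00:01:09 +0000] "GET /other.html HTTP/1.1" 200 4096 "-" "SomeOtherBot/1.0"',
]
print(bingbot_robots_share(sample))  # (1, 2, 0.5)
```

Point it at a real log file and the ratios reported above are easy to reproduce or refute.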
| 12:26 am on Mar 15, 2012 (gmt 0)|
They have such a BIG CHANCE to get closer to Google right now. Google has had mostly negative press for the last 1-2 years, so it's time for Bing to tell users: here you have privacy; we don't just focus on ads and collecting data about each user; and we want to work with webmasters, not against them. That should be their motto, along with: we still focus on good search results. But seriously, I don't think they will grab the chance. They will make the same mistakes as Google: they are already saving search history (which you can turn off and on) and stuffing their search results with things other than results, like images.....
| 3:18 am on Mar 15, 2012 (gmt 0)|
I don't understand why we talk about privacy again and again. Objectively: what is the actual problem with privacy?
|B to notify users that here you have Privacy, |
G is really very competitive, and it's tough to get into (and stay in) their index; that is the only reason webmasters feel hopeless.
|, also we want to work with webmasters not against, |
One thing I feel about G is that they need to be sleeker; they keep adding more ads, graphics, and menus, and it looks a bit annoying!
| 2:59 pm on Mar 15, 2012 (gmt 0)|
This thread is about how Bing could improve. ;)
Let's be constructive.
I still say freshness should be a big aim. It spiders and doesn't always index.
| 3:59 pm on Mar 15, 2012 (gmt 0)|
I completely agree. Other search engines often show five to ten times the number of results for the same site:example.com query compared to Bing. Sometimes I think they intentionally limit the number of documents they search over just to improve perceived relevance. It's technically a lot easier to get ten halfway decent results when you're searching over a smaller subset of the web.
| 12:56 am on Mar 16, 2012 (gmt 0)|
They need to clean up some obvious legacy stuff that'll make everyone happy before bolting yet more garbage on top. The problem with Bing and all the other SEs is that they keep trying to one-up each other with new features, yet the bozos never go back and finally tackle some real common core issues that have been a problem with their services since the beginning. Adding fancy webmaster tools and such is nice, but if you can't get more people using the search, it's a big fat "why bother".
I think if Bing started by focusing on just the following:
- Original authorship detection. Several simple solutions are possible, one as simple as creating an independent non-profit verified-authorship registry: let authors pay a nominal fee to register sites creating original content, and hire a few people to do the validation, kind of how the Yahoo Directory originally worked.
- Spam and dupe content detection
- Stop crawling and indexing via proxies (idiots)
- Include more authoritative curated content and/or allow searchers to help curate content, using it as a method of training the algo on the intent of the search vs. the desired result. Basically the reciprocal of suggesting search terms to the user: allowing the user to suggest the best results for those terms to the SE itself.
- Stop running all the paranoid, abusive spider crap that webmasters complain about endlessly in the spider forum. It's bad PR at a minimum, and millions of annoyed webmasters tend not to promote your service.
If Bing can crack a few of those nuts (making their spidering more intelligent, faster, more secure, less annoying, etc.), they'd certainly make webmasters happy and in the process probably make surfers happy as well.
I think the task is doable, but it may not be 100% algorithmic; Bing may need to take a more integrated, curated approach to the problem and let their audience participate in the end results.
Aggregator sites are the real problem. Many are useful, integrating, sorting, and filtering multiple feeds into something digestible, but just how many do you need on a specific topic?
Let the end users decide, either on a user-by-user basis or simply consolidate the preferences of all the users and dump the rest of the junk at the bottom of the SERPs, way down.
| 1:47 am on Mar 16, 2012 (gmt 0)|
In the last 7 months or so that I've had my default search set to Bing, I've had to resort to Google less and less.
Methinks the Bing folk have quite openly identified Google's advantages, such as the power of the Caffeine infrastructure, and now appear to be catching up.
Methinks they do need to advance their ability to handle a bigger data set.
Win over Google's army of fans, especially those on www :)
Develop methodologies for protecting original content generators (like in the incredibill post above).
| 2:20 am on Mar 16, 2012 (gmt 0)|
I haven't heard anything about Bing in the last 6 months.
I had forgotten all about them.
Unfortunately when something is free (like Google is) it is hard to compete.
Perhaps they should do a deal with Firefox and make Bing the default.
| 5:03 am on Mar 16, 2012 (gmt 0)|
Do they have webmasters tools, analytics and domain email?
If not, they may get it done, but they'd better do it better than G, else we won't migrate ;-)
| 10:19 am on Mar 16, 2012 (gmt 0)|
anshul - they do have Bing webmaster tools, you can "bing" it
| 8:35 pm on Mar 21, 2012 (gmt 0)|
One other thing i'd like to add is that Microsoft/Bing needs to reach out better to Webmasters.
There must be developments going on, but we just don't hear of them.
Bing - reach out, please.
| 8:43 pm on Mar 21, 2012 (gmt 0)|
A few weeks ago, I submitted a site with 1000+ pages to both Bing and Google. So far Google has 200+ pages in its index, while Bing has only indexed the home page. The domain is about 3 months old.
It would be a plus for Bing if they could buy more hardware or whatever to speed up their indexing.
Another site that I run is about a year old after a domain change. Google sends 50x the traffic that Bing sends to the site. I don't know if Bing searchers are all that different from Google searchers. What I feel is that Bing's search algo favors older sites much more than Google's does.
It would be another plus if Bing leveled the playing field a bit for newer domains.
| 10:11 am on Mar 22, 2012 (gmt 0)|
To all the suggestions that Bing should "do more". I agree.
For a thread on observations of search engine and crawler behaviors, I tracked a while back what they were doing.
Google basically does back-end prioritization: scrape everything, index almost everything into levels of DBs with various access prioritization, and mainly prioritize at search time what to actually show. On the back end of the data stream.
Bing, with less infrastructure investment, does front-end prioritization. They "pick" which sites/pages to even load, based on ideas about the site overall and maybe guesses from neighboring pages they have already seen. They index only a tiny fraction of the Internet compared to Google. It could literally take them many years to find everything, if they even tried.
For example, I have an old site with a lot of distinct, unique product-related pages. At one point, when I was tracking the engines' behavior, Google had 30,000+ of the site's pages indexed and still ran like crazy to pick up more. Bing, on the other hand, then had a grand total of 38 (yes, thirty-eight) pages in its index. They had obviously decided not to be too interested in looking, although they still visit (re-visit), just not in a systematic way that picks up new content.
I just checked both a minute ago, and Google estimate it now has 44,300 pages, while Bing has managed to get all the way up to an exact 41. :)
Across all sites, I have the same experience. Bing takes forever to get started (you add a site to their Webmaster Tools, and it can take weeks before the first visit). It then takes forever to crawl, and it lifts only a fraction of a site, unless maybe there is a potential for revenue for them.
You cannot use knowledge you do not have access to.
Which leads me to the one key thing about prioritization methods (back or front). One thing will always be true:
If a search engine has never loaded, let alone indexed a page, it by definition cannot show it to its users. It cannot find what it does not know about.
That tells you what it is worth to users that happen to search on topics Bing does not care about. They have no chance of finding what they are looking for.
So, if you search a front-end-prioritizing SE for mainstream topics, you are sure to find something (but you don't know what you are missing, because the SE never checked it out). If you search for more obscure topics, there is little chance it will be able to help you. Again, the SE cannot search what it never lifted.
A back-end-prioritizing SE, such as Google, having scraped every nook and cranny of the web, has the ability to search even all the strange, obscure topics: the ones that show up without a single ad around them when you search. It even has the ability to change its topical prioritization on the fly, should something that was of no "interest" yesterday suddenly become important tomorrow for some reason, maybe because of a major global event, or someone old/famous dying and suddenly making every obscure fan page relevant.
But operating that way takes a LOT of infrastructure.
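The trade-off described above can be sketched in a few lines. This is a toy model, not either engine's actual algorithm; the page contents and "predicted value" scores are invented purely for illustration.

```python
# A tiny invented "web": two pages, one popular, one obscure.
web = {
    "/popular": "widgets widgets sale",
    "/obscure": "rare antique blue-widget manual",
}

def backend_engine(query):
    # Back-end prioritization: crawl and index everything up front,
    # then decide what to show only at query time.
    index = dict(web)  # the whole web is fetched
    return [url for url, text in index.items() if query in text]

def frontend_engine(query):
    # Front-end prioritization: only fetch pages predicted to be "worth it",
    # so low-scoring pages never even enter the index.
    predicted_value = {"/popular": 0.9, "/obscure": 0.1}  # invented scores
    index = {url: web[url] for url in web if predicted_value[url] > 0.5}
    return [url for url, text in index.items() if query in text]

print(backend_engine("blue-widget"))   # ['/obscure']  -- crawled, so findable
print(frontend_engine("blue-widget"))  # []            -- never fetched, so invisible
```

The point of the sketch: the front-end engine cannot rank what it never loaded, which is exactly why obscure queries come up empty there.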
| 1:14 pm on Mar 22, 2012 (gmt 0)|
DeeCee, I couldn't agree more with you.
I'm seeing the same indexing pattern between Bing & Google on one of my sites. The latter indexes everything and sends visitors looking for the most obscure (though relevant) searches, while Bing hasn't passed the 10% mark on indexing, so visitors from it: none.
Just this morning I was considering disallowing Bing access to the site, they are a total waste of space.
I'm off to do it now ;o)
| 6:02 pm on Mar 22, 2012 (gmt 0)|
Well.. The thing about that is that there is really no reason to do so.
Blocking some other SEs that send no (relevant) visitors, such as Baidu, fake Baidu, Sogou, and others, can help you quite a bit, both with site performance and otherwise. If you have a large site, in some cases it can free up real capacity for better use; I have seen one case where that shaved 20-25% off overall site load.
But blocking an SE just for the "sending no visitors" reason has little to no meaning if they are not otherwise bothering your site. In that case you might as well just take what you can get.
[edited by: DeeCee at 6:39 pm (utc) on Mar 22, 2012]
| 6:29 pm on Mar 22, 2012 (gmt 0)|
DeeCee I agree with you. I have been a bot hunter for some 10 years now, it's like a sport, and no bot enters my sites unless I let it in.
The case with Bing on this particular site is that their bot aggravates me. Asking for robots.txt more times than for pages, and sending too many non-Bingbot UAs (which I treat like any other rogue bot), will not get this site indexed, and consequently will not get it visitors from Bing.
Therefore blocking their bot all together makes me feel better and that's worth something too ;o)
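For anyone doing the same kind of bot hunting, the usual way to separate genuine Bingbot hits from impostor UAs is the two-step DNS check Microsoft documents: reverse-DNS the IP (the host must end in .search.msn.com), then forward-DNS that host and confirm it resolves back to the same IP. A minimal sketch, with the resolver functions injectable so it can be exercised without live DNS:

```python
import socket

def is_genuine_bingbot(ip, rdns=None, fdns=None):
    """True if `ip` passes the reverse-then-forward DNS check for Bingbot.

    rdns/fdns default to real socket lookups; pass stand-ins for testing.
    """
    rdns = rdns or (lambda i: socket.gethostbyaddr(i)[0])
    fdns = fdns or (lambda h: socket.gethostbyname_ex(h)[2])
    try:
        host = rdns(ip)                       # step 1: reverse DNS
        return host.endswith(".search.msn.com") and ip in fdns(host)
    except OSError:                           # lookup failed: not verified
        return False

# With fake resolvers (the hostname below is an invented example):
print(is_genuine_bingbot(
    "1.2.3.4",
    rdns=lambda i: "msnbot-1-2-3-4.search.msn.com",
    fdns=lambda h: ["1.2.3.4"],
))  # True
```

A UA claiming to be Bingbot from an IP that fails this check can safely go in the rogue-bot bucket.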
| 7:01 pm on Mar 22, 2012 (gmt 0)|
If it makes you feel better, then it is definitely worth it. In the frustrating fight against bots and scrapers, any minute thing that makes it less frustrating has inherent value. :)
I too treat all "machine visitors" the same way. If they cannot convince me that they add value to me as a site-owner, then they can get a kick in the pants. Whether they are otherwise "valid" or not.
That is one of the frustrations I have with many of the kiddy Twitter-swarmer site/bot owners out there, who think that their particular "contribution" to the social networking mess is the best gift to the world since the Sun.
You go to the "crawler page" listed in their UA. Rather than documenting the value they give to site owners (in exposure, visitors, or otherwise; in short, why I should let them in), they instead show basically a copy of their investor presentation, repeating why their content theft and tracking of the world's content is a good investment. They do not realize that the crawler page has a different audience, and that I am not looking to invest. Still, they want me to allow my paid-for network and server bandwidth to be taken away to support their project and their business.