
SEM Research Topics Forum

    
Alta Research
seth_wilde




msg:817835
 5:14 pm on Jun 1, 2000 (gmt 0)

Check these out and tell me what you think they mean (sounds like a major insight into AV's future plans). (Warning! Make sure you have your thinking cap on! You can definitely tell they were written by engineers.)

The Term Vector database
http://www9.org/w9cdrom/159/159.html

WTMS: A System for Collecting and Analyzing Topic-Specific Web Information
http://www9.org/w9cdrom/293/293.html

Graph Structure in the web
http://www9.org/w9cdrom/160/160.html

 

James_R




msg:817836
 9:08 pm on Jun 1, 2000 (gmt 0)

wow, this is money (if anyone can decipher)

"Term selection. We select terms for inclusion in a page's vector by the usual Salton TF-IDF methodology (see, for example, [Baeza-Yates et al 99]). That is, we weight a term in the page by dividing the number of times it appears in the page by the number of times it appears in the collection. For a given page, we select the 50 terms with the greatest weight according to this formula."

Is this how they establish "themes" for the site? So pages are identified by these vectors rather than URLs, it seems. Maybe they compare vectors against each other, weighing in off-page statistics (external vectors pointing to the page), and then come up with the top-ranking pages. Sounds interesting, and I'm merely rambling off the top of my head, but I just ran across this comment:

"Despite its usefulness, term vector information for Web pages is not readily available."

although it seems to be in progress ....

"To support such applications, we have built the Term Vector Database. We have populated it with term vectors for all the pages in the AltaVista index."

Great find Seth, useful for the future or even understanding the current engineering approach.

James_R




msg:817837
 9:25 pm on Jun 1, 2000 (gmt 0)

Even more interesting, I did a little search on the TF-IDF weighting that they spoke of in this article:

"We select terms for inclusion in a page's vector by the usual Salton TF-IDF methodology"

Here is a document on TF-IDF weighting; maybe not the one they are referring to, but a similar approach:

http://www.fundp.ac.be/~lgoffine/Hypertext/semantic_links.html

Check out section 2.2, Weight computation. It is a simple inverse function: it maps a set of documents (or a site) to a vector score by comparing how often a keyword term appears in a page against how often it appears across the whole collection. (Sounds like the mathematical explanation of Brett's themes.) Now, if only we had all the other ranking factors, like the boost if the keyword appears in location "x", or the boost from site "a" that links to site "b" and that vector score, and so forth, you'd have yourselves a rudimentary algorithm.
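For reference, the classic Salton TF-IDF weight works out to roughly term frequency times log(N / document frequency). A toy version over a whole site might look like this (my own simplification; the exact formula in that section 2.2 may differ):

# A toy Salton-style TF-IDF weighting over a set of documents (a "site").
# My own simplification; the exact formula on that page may differ.
import math
from collections import Counter

def tfidf_vectors(docs):
    """docs: list of token lists, one per page. Returns one weight dict per page."""
    n_docs = len(docs)
    # document frequency: in how many pages does each term appear?
    df = Counter(term for doc in docs for term in set(doc))
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({
            term: count * math.log(n_docs / df[term])
            for term, count in tf.items()
        })
    return vectors

A term that shows up heavily on one page but rarely across the rest of the collection gets a big weight on that page, which is about as close to a "theme score" as this toy version gets.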

seth_wilde




msg:817838
 9:40 pm on Jun 1, 2000 (gmt 0)

I think this research is definitely about themes and link popularity. I guess this would be proof for the naysayers, but I would prefer that this info be kept to this forum.

This looks like it could be how they score a site's theme relevance:

"the big jump at "50" occurs because the page filter clips all larger vectors to 50 terms"

"45% of terms in a vector have a count of one; 99% of terms in a vector have a count of at most 31"

This looks like how themes and link popularity work together:

"Topic distillation is a technique of using hyperlink connectivity to improve
ranking of web search results (see [Kleinberg 98], [Chakrabarti et al 98], [Bharat et al 98] for specific instances of topic distillation algorithms). These algorithms work on the assumption that
some of the best pages on a query topic are highly connected pages in a subgraph of the web that is relevant to the topic. A simple mechanism for constructing this query-specific subgraph is
to seed the subgraph with top-ranked pages from a standard search engine and then expand this set with other pages in vicinity of the seed set ([Kleinberg 98]). However, this expanded set
sometimes includes highly connected pages that are not relevant to the query topic, reducing the precision of the final result. [Bharat et al 98b] identifies this problem as topic drift.

[Bharat et al 98b] shows that topic drift can be avoided by using topic vectors to filter the expanded subgraph. A topic vector is a term vector computed from the text of pages in the seed set.
A page is allowed to remain in the expanded graph only if its term vector is a good match to this topic vector. Specifically, the inner product of the two vectors is compared to a suitable relevance
threshold, and pages below the threshold are expunged from the expanded graph."
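That filtering step sounds simple enough to sketch: build a topic vector from the seed pages, then throw out any expanded page whose inner product with it falls below some threshold. Something like this (the threshold value and names are made up, just to show the mechanics):

# Sketch of the topic-drift filter described above: keep an expanded page only
# if the inner product of its term vector with the seed set's topic vector
# clears a relevance threshold. The threshold value here is arbitrary.
def inner_product(v1, v2):
    return sum(w * v2.get(term, 0.0) for term, w in v1.items())

def topic_vector(seed_vectors):
    """Average the term vectors of the seed pages into one topic vector."""
    topic = {}
    for vec in seed_vectors:
        for term, w in vec.items():
            topic[term] = topic.get(term, 0.0) + w / len(seed_vectors)
    return topic

def filter_expanded_set(seed_vectors, expanded_pages, threshold=0.1):
    """expanded_pages: dict of url -> term vector. Returns urls that survive."""
    topic = topic_vector(seed_vectors)
    return [url for url, vec in expanded_pages.items()
            if inner_product(vec, topic) >= threshold]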

rcjordan




msg:817839
 9:49 pm on Jun 1, 2000 (gmt 0)

re: if only we had all the other ranking factors like boosts

I think the vector-based ranking carries so much weight that it almost relegates the other factors to sidebars. I have a #1 ranking in AV (and #2 in Raging ??) with a travel page that is full of flaws, some of them very serious. About all it DOES do is fit the "themes" mold. I don't keep posting about this particular page's rank looking for cheap applause. I came to SEW on the themes thread because, prior to finding it, I was puzzled as to just why this page was climbing the charts again after being severely punished for an overload of javascript. Everything I read said it was doomed, doomed, doomed, yet it kept climbing.

BTW, it now outranks the most popular site as determined by link count from other sites. This competitive site has 5 times the number of incoming links.

James_R




msg:817840
 12:05 am on Jun 2, 2000 (gmt 0)

rcjordan

Themes is what made me come here too. Brett has long been the voice over at the forums that has made sense. Everything else had been degrading for us until I picked up on this themes thing, and now we are right back in business.

Seth

I don't really post anywhere else, as I am not thrilled about giving out secrets. I am willing to somewhat here, as Brett has been so much help to me that I figure I owe it to give a little back (for what it is worth). If any of you want to communicate off the record, feel free to send me an email at M_Neo111@go.com. I think this vector thing will pan out huge if we can start thinking like the engineers. All that I read today has really been enlightening: the engines are just building off of old information retrieval concepts. I really should have seen it sooner, but usually there is not much time to research.

Let's keep plugging away and see what we can find.

Pete




msg:817841
 1:00 pm on Jun 5, 2000 (gmt 0)

Brother, more bedtime reading!

This looks one hell of an interesting read. Seth, I'm interested to know where you found this. I, like you guys, moved over here because of Brett. He helped me enormously with figuring out Ink's algo in the VP forum when AOL dropped Excite.

One cause for concern: a number of people got their pages nuked from the AV database for disclosing too much about AV in the VP forums. Is there any danger of that happening here?

seth_wilde




msg:817842
 4:30 pm on Jun 5, 2000 (gmt 0)

Pete,

I was doing some research on the "bowtie theory" in hopes of getting the inside track on AV's use of link popularity (graph structure of the web), when I happened to come across a link with "term vector" in its name. As soon as I saw it, I remembered it from a newsletter that I read in April that had some rumors of AV's future plans.
[submitexpress.com...]

It's a good thing that you reminded me of this, because now that I actually went back and re-read the newsletter, I realized there are some things I had forgotten about. The newsletter talks about AV's future use of:

- Link Popularity
- Click Popularity
- Ownership of Domain
- Content
- Term Vector

If they were right about the term vector (even though they didn't know what it meant), could they also be right about the click popularity and domain ownership?

Brett_Tabke




msg:817843
 5:27 pm on Jun 5, 2000 (gmt 0)

Well, it came from the same source you got it from, Seth. That WWW9 paper by Alta had been floating around since mid-March. I'm on Alta's press release and advertiser mail lists - they'd printed half of it in an info pack in March.

seth_wilde




msg:817844
 7:01 pm on Jun 5, 2000 (gmt 0)

The newsletter claimed to have gotten its info from an AV rep at the Internet World convention (Los Angeles). I would assume that if they had the WWW9 papers, they would have been able to give a better definition of term vector (unless they were just being tricky).

Brett_Tabke




msg:817845
 8:32 pm on Jun 5, 2000 (gmt 0)

It was a promo flyer for the w9 they were handing out. It had two pages of excerpts from it. I've got it at the office I think. I'll scan it.

Brett_Tabke




msg:817846
 7:45 am on Jun 8, 2000 (gmt 0)

I did look, and I don't know where I filed it - not where I thought it was.

You know, if you read that article on topic extraction... wow, that is quite a road map. If you understand even half of what they are talking about, read that article, folks... it is completely awesome. If you back up to the contents and start reading, it is (to a search engine junkie) fascinating. There must be 30 documents there that are required reading.

engine




msg:817847
 8:02 am on Jun 8, 2000 (gmt 0)

I'd agree with Brett - well worth reading.
There are some great pointers in there, and if you don't read in too deep, I reckon there are a few nuts cracked open.

I've already modified one of my pages, slightly, and will wait for natural spider revisits and indexing to see the results.

Brett_Tabke




msg:817848
 8:23 am on Jun 8, 2000 (gmt 0)

Another great paper (notice the mention of Mercator!).

And this one on Content Relationships (sure reads like themes to me).

And lastly, my favorite, Computing Web Page Reputations (if that ain't THEMES, I don't know what is).

From what I've studied so far, I think there are some bottom lines:

1) My new mantra: focus, focus, focus. Focus on single-keyword pages related to your theme. I don't think you can overdo focusing on a single keyword to build your theme.
2) Cross-link your site to hell and back. Don't let the indexer decide one page is "less" important than another just because it has fewer links to it within your site.
3) Confuse the indexer into not being able to decide whether to go left or right. I think you want the indexer to end up with a 'gray' area about your site. If you don't, they can pigeonhole you into a theme you don't belong in (like sew is still tops for Exotic car engines - why? I let them see "exotic", "car", and "engine" on several different pages. They decided the site was about 'exotic car engines' because of all the related words). If I'd gone for the 'gray area' approach, they wouldn't have been able to deduce that.
4) File this one under 'harebrained scheme of the day'. Remember back in '95-'96 when everyone was putting "keywords: xyz" right on the page? Notice how all those WWW9 papers have the same thing? I just went and looked at several high-ranking pages under weird terms here on sew, and you know what? They were all pages about "keywords xyz" (with xyz being the weird term). Hmmm, sure makes me wonder if that trick is going to start working again.
5) Sad to say, but links-in-context really are important. Alta, Excite, Infoseek, Fast, and Google haven't figured it out yet, but I'm dead certain Inktomi has link relevancy nailed stone cold.

>JamesR: wow, this is money

hehe. I like that. A belated welcome to the forums James.

James_R




msg:817849
 11:38 pm on Jun 8, 2000 (gmt 0)

Thanks for the welcome Brett. Appreciate greatly all your contributions.

That third paper really hits on some of the objections to the link popularity model and takes it a step further. These guys must have been doing calculus in kindergarten to get an equation that represents the actions of a surfer. (Remember the days when all it took was a simple keyword density analyzer on a page to get a top ranking? Dark ages....) The criteria for deciding what is a "hub" and what is an "authority" would be interesting. How many links relative to content constitute a hub? It is obvious where a directory would fall. If this gets implemented, I wonder how a commercial business site made largely of product pages could expect to rank well unless it had some partnership with an "authority" site or became the authority itself through content and clever linking.
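For anyone curious what Kleinberg's hubs-and-authorities computation actually looks like, the textbook version is just a few lines of iteration over the link graph (this is the generic algorithm, not whatever any particular engine runs):

# Textbook Kleinberg HITS iteration: a good hub points to good authorities,
# a good authority is pointed to by good hubs. Nothing engine-specific here.
import math

def hits(links, iterations=50):
    """links: dict of page -> list of pages it links to."""
    pages = set(links) | {q for targets in links.values() for q in targets}
    hub = {p: 1.0 for p in pages}
    auth = {p: 1.0 for p in pages}
    for _ in range(iterations):
        # authority score: sum of hub scores of pages linking in
        auth = {p: sum(hub[q] for q in pages if p in links.get(q, [])) for p in pages}
        # hub score: sum of authority scores of pages linked out to
        hub = {p: sum(auth[q] for q in links.get(p, [])) for p in pages}
        # normalise so the scores don't blow up
        a_norm = math.sqrt(sum(v * v for v in auth.values())) or 1.0
        h_norm = math.sqrt(sum(v * v for v in hub.values())) or 1.0
        auth = {p: v / a_norm for p, v in auth.items()}
        hub = {p: v / h_norm for p, v in hub.items()}
    return hub, auth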

Brett, here is a very strong case for your argument about the importance of internal linking. Still in the last paper, section 3.2, "Identifying Topics":

"However, in practice we may not have access to a large crawl of the Web, or we may not be able to afford the full computation. In this section, we show that it is still possible to approximately find the topics a page has a high reputation on, although the ranks will not reflect the real probability distributions. . . .If the page acquires a high rank on an arbitrarily chosen term t within the full computation of Algorithm 1, then at least one of the following must hold: (1) term t appears in page p, (2) many pages on topic t point to p, or (3) there are pages with high reputations on t that point to p. This observation provides us with a practical way of identifying the candidate terms. We simply start from page p and collect all terms that appear in it. We then look at the incoming links of the page and collect all possible terms from those pages. We continue this process until we get to a point where either there is no incoming link or the incoming links have very small effects on the reputations of page p."

In other words, if you have to rank a page on the fly and don't have access to external links, take the same model and apply it internally. This is going to make us really think through and carefully position the links in a site, or you could easily take a wrong turn right off the bat and accidentally mislead a search engine spider.
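To make that concrete, the term-collection walk they describe could be sketched like this (get_terms and get_inlinks are placeholders for whatever data source you have, a link: query or your own crawl):

# Rough sketch of the candidate-term collection described in the quote:
# start with the page's own terms, then keep pulling in terms from pages that
# link to it, level by level, until the links run out or stop mattering.
def candidate_terms(page, get_terms, get_inlinks, max_levels=3):
    terms = set(get_terms(page))
    frontier = [page]
    seen = {page}
    for _ in range(max_levels):
        next_frontier = []
        for p in frontier:
            for linker in get_inlinks(p):
                if linker not in seen:
                    seen.add(linker)
                    terms |= set(get_terms(linker))
                    next_frontier.append(linker)
        if not next_frontier:  # no more incoming links to follow
            break
        frontier = next_frontier
    return terms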

Here is another important piece further down:

"Again, every page which is not identified in Step 1 is assumed to have a rank of zero. Note that the hubs-and-authorities computation of Kleinberg is a special case of this method; it is based on only identifying pages that either contain term t or are reachable within one link from one such page."

Interesting way to filter. You know what happens when you start multiplying by zero. The "one link away" approach is interesting and could really help to narrow in on who to link to. Can't get a link from Microsoft? That's OK, find out who they link out to and target them. In theory the theme score would filter down.

Also Section 5: Experimental Evaluation

"Only a limited number of incoming links are examined; we obtain at most 500 incoming links of a page, but the number of links returned by the search engine, currently Alta Vista , can be less than that."

AV doesn't seem to have it all together yet, but it appears to be operating this approach on a basic level. All this is pretty incredible. If someone could get some dialogue going with a mathematician, it would be nice to start gathering a clear idea of the desired term weight for the entire site. My question is: should the term frequency on the site be high or low? It is obvious that the phrase needs to be mentioned on every page, but how often? Every instance of the term is going to affect the algorithm in some way. With all the inverting they are doing, I can't tell offhand what would be most favorable.

seth_wilde




msg:817850
 4:19 pm on Jun 9, 2000 (gmt 0)

I also found the "Computing Web Page Reputations" document very interesting. It's not about themes exactly, but more about themes within links. Notice, though, that this paper wasn't written by any AV engineers, and from what I've seen I don't believe AV is using this. Could it be possible that this is the method Inktomi is using? It would certainly explain the troubles buddy links has been having.

"If someone could get some dialogue going with a mathematician, it would be nice to start gathering clear idea of the desired term weight of the entire site"

The term vector weight computation is a pretty basic formula (compared to others listed in these articles). I could figure out the term vector weight of a small site pretty easily; the problem is that we would have nothing to compare it to. We would need a sampling of top-ranking pages, which could get rather difficult when large sites are involved (mainly because you would need to know how many times each word is used on every page of the site). If you guys are interested in trying to figure this out, we would need a spider to collect all this information for us, and some place a little more private to figure everything out.
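For what it's worth, a bare-bones spider that collects per-page term counts for a small site doesn't need much. Something like this toy (standard-library Python, no robots.txt or rate limiting, so only point it at a site you own):

# Toy same-site spider that fetches pages and counts terms per page.
# Standard library only; no robots.txt or rate limiting, so only point it
# at your own small site.
import re
import urllib.request
from collections import Counter
from urllib.parse import urljoin, urlparse

def crawl_terms(start_url, max_pages=50):
    site = urlparse(start_url).netloc
    to_visit, seen, counts = [start_url], set(), {}
    while to_visit and len(seen) < max_pages:
        url = to_visit.pop()
        if url in seen:
            continue
        seen.add(url)
        try:
            html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "ignore")
        except Exception:
            continue
        text = re.sub(r"<[^>]+>", " ", html)                # strip tags crudely
        counts[url] = Counter(re.findall(r"[a-z]+", text.lower()))
        for href in re.findall(r'href="([^"#]+)"', html):   # queue same-site links
            link = urljoin(url, href)
            if urlparse(link).netloc == site:
                to_visit.append(link)
    return counts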

James_R




msg:817851
 9:21 pm on Jun 9, 2000 (gmt 0)

You guys ever notice the relationships between some of the authors? There is some connection between the AV and Google engineers (Broder - AV, Henzinger, Bharat - Google) as their names are on several papers together including some from previous WWW conferences. Just an interesting connection.

Brett_Tabke




msg:817852
 11:43 am on Jun 15, 2000 (gmt 0)

Ya, most of those guys go back to Stanford (Broder, Brin, Page, Bharat, and the main analyst for Yahoo). I can't think of the main Stanford prof's name, but he sure inspired a lot of the current SE programmers.

JamesR




msg:817853
 11:47 pm on Jun 26, 2000 (gmt 0)

>I can't think of the main Stanford prof's name, but he sure inspired a lot of the current SE programmers.

Wouldn't happen to be Terry Winograd, would it?

grnidone




msg:817854
 2:03 pm on Jun 28, 2000 (gmt 0)

/me raises her hand.

Could anyone please point me to the thread regarding "Brett's 'themes'"?

The one part of the first article I didn't understand is the section 2.2 weight computation. I am hoping 'Brett's themes', which James_R refers to in not so many words as the "non-math" version of the weighting computation, will help me understand what it is.

Thanks for helping a novice.
-G

Brett_Tabke




msg:817855
 12:18 am on Aug 8, 2000 (gmt 0)

A good research article [sims.berkeley.edu]. This one is a slide show from the School of Information Management and Systems course 202 at Berkeley. For those taking home courses, it is lecture #17 ;)

metaman




msg:817856
 2:41 am on Aug 8, 2000 (gmt 0)

An excellent slide show Brett. In fact, I emailed Ray Larson, the professor who created it. I asked him where I could find more info on algorithms and told him I was really interested in doing more research.

Notice the probabilistic ranking methods along with the vector space models in that slide show. Ray says these are more popular.

My question to him was, since that slide show is two years old, have there been any findings as to which method (vector or probabilistic) is more effective for information organization?

Here was his email back to me:

---
These methods are undergoing continuing testing in the NIST TREC program.
Generally the "preferred algorithm" these days is the probabilistically-
based OKAPI BM-25 algorithm.

see [trec.nist.gov ]

Ray Larson
---

NIST - National Institute of Standards and Technology
TREC - Text Retrieval Conference

If any of you are brave enough to venture to that site and wade through the publications section (you thought those term vector database pages were long and confusing), look for the publications by S.E. Robertson. He worked with the Okapi algorithms. BTW, have your adobe readers and distillers (for the older conferences) ready.

There is a math formula that can be found. It is more complex than the TF-IDF equation. It also appears to me to be similar to the vector-based method, in that it looks at the collection of documents as well, which indicates themes. This is what I got out of it after 45 minutes of staring bug-eyed at a calculus equation and its variables. :)
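For anyone who would rather stare at code than calculus, the published Okapi BM25 scoring function comes out to roughly this (k1 and b are the usual tuning constants; I'm not claiming any engine uses these exact values):

# The standard published Okapi BM25 score of a document for a query.
# k1 and b are the usual tuning constants; nothing here is engine-specific.
import math

def bm25_score(query_terms, doc_tf, doc_len, avg_doc_len, n_docs, doc_freq,
               k1=1.2, b=0.75):
    """doc_tf: term -> count in this document; doc_freq: term -> number of
    documents in the collection containing the term."""
    score = 0.0
    for term in query_terms:
        df = doc_freq.get(term, 0)
        if df == 0:
            continue
        idf = math.log((n_docs - df + 0.5) / (df + 0.5) + 1)
        tf = doc_tf.get(term, 0)
        score += idf * (tf * (k1 + 1)) / (tf + k1 * (1 - b + b * doc_len / avg_doc_len))
    return score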

Seriously, there are a lot of publications from these conferences going back to like 1991. I can't spend all the time they require to do the research, but it would be interesting to talk to someone who did. Maybe a student.

Next, I mentioned to Ray that it seemed AltaVista was using a vector-based algorithm and that this would especially make sense since they are at the cutting edge of this stuff.

His reply:
---
I assume that AltaVista is using a vector approach. That still gives good performance, in general, and many of the research vector systems are using a form of the OKAPI weights.

Ray Larson
---
So, it seems everyone was on track with term vectors anyway, at least here with Alta. Maybe it was a waste of time, but I find this research very interesting.

Just thought I'd share my misadventures in SE research with everyone...

Edited by: metaman

seth_wilde




msg:817857
 3:33 pm on Aug 8, 2000 (gmt 0)

metaman - if you could give me a link to the math formula that you found I would really appreciate it.

Also, I've been doing some additional research into the Okapi retrieval methods. From everything I found, it seems these methods are still in the experimental stages. I couldn't find any examples of commercial use; however, below is a link to the Brits who developed the Okapi system, and on their site is a link to an Okapi system that is available through a VT100 interface.

[soi.city.ac.uk...]

I'm still having some doubts about AV using what seems to be a somewhat experimental weighting method, but it's definitely worth further research.

Michael Martinez




msg:817858
 1:21 am on Aug 10, 2000 (gmt 0)

[Angry podium mode on]

After getting two college degrees and coming within 3 classes of getting two more, I reached the conclusion that sometimes the academic community takes itself far too seriously.

I will admit I am still wading through all the hokum that you guys have found on these other Web sites, but basically what it boils down to (so far) is this:

If you create a Web site, send out press releases and get other Web sites to link to you, and CALL YOURSELF AN AUTHORITY on your own Web site, you will get a high ranking in search engines that rely upon page and link relevance.

As I have maintained in other forums, these sorcerous devices for obtaining relevant search results are causing me no end of grief in my own efforts to be ranked highly AND to use the search engines for my own searches.

There is no intelligent way to determine if the content of a Web page is authoritative from the inbound links. Someday someone is going to explain it to these people and they will understand why surfers get frustrated with their irrelevant results.

Brett, I'm here. I won't promise to hang around any longer than I'm hanging around the other forum, but I do appreciate the efforts people are making to share information.

I just wish it weren't so disappointing. You'd think that people who are supposed to earn their livings from designing search engines would apply a little common sense to what they are doing.

And I CAN show how these engines (Alta Vista particularly, right now) are not doing their job because of too much reliance upon this pseudo-science that pretends it can determine the authoritativeness of a Web site.

I am ashamed to admit I even have computer-related degrees, based on the absolute nonsense I have waded through in these papers. How did these guys ever get doctorates in the first place? Out of a cereal box?

[Angry podium mode off]

Michael Martinez




msg:817859
 2:11 am on Aug 10, 2000 (gmt 0)

Let's look at the nonsense posted here:
[www9.org...]

"Calculating Web page reputations".

These guys obviously haven't looked at any Web surfing behavioral studies. Their base assumptions are all incorrect. There is no such thing as a "random surfer", except for spiders. Surfers looking for information are directed in their link choices by the Web designer's presentation of the links.

Furthermore, no link to any Web page is an indication of the accuracy of the information of that page. All it indicates is that one human being was willing to link to another human being's Web site. The fact of the linkage doesn't imply that the linking Web page (P) is knowledgeable about the topic, let alone authoritative, nor that the linked Web page (Q) is knowledgeable about the topic, let alone authoritative.

1000 links will not make a badly written paper posted on the Web any more knowledgeable or authoritative on a given topic than 1 link.

Finally, and this is where I stop analyzing this nonsense (because, as anyone who has submitted a proof for examination knows, all arguments fail at the first flaw), they write:

"In the setting of the Web, our assumption that a page has at least either one incoming link or one outgoing link may not hold. However, since we are dealing with collections of pages collected by crawling, we feel justified in assuming they all have at least one incoming link."

WRONG. There is absolutely no justification for this secondary assumption. New Web pages are regularly submitted to the search engines by Webmasters to be crawled. The engines do not get to those pages through other pages. The entire concept breaks down because of this one stupid assumption that anyone who has created a Web page and submitted it to the search engines knows is not true.

The database is being flooded with pages which break the model. Since the model is broken, the algorithm based on the model won't work as expected. It will, in fact, deliberately and consistently filter out pages that are both relevant and authoritative in favor of POPULAR Web pages which may or may not have merit.

The bottom line is that the best marketer will win in the long run because this model favors a well-marketed Web site over a poorly marketed Web site.

So much for "reputations". Web page reputations mean nothing to the individual surfer, who is more likely to be influenced by wording and tagging, as well as placement of links (and THAT has been established in Web surfing behavioral studies, which I have read, unlike these guys).


Now, on to "identifying topics".

"In this section, we show that it is still possible to approximately find the topics a page has a high reputation on, although the ranks will not reflect the real probability distributions."

Well, considering that their probability functions don't work the way they should, that shouldn't be a real problem for them.

"Given a page p and a parameter d > 0, suppose we want to find the reputations of the page within the one-level influence propagation model. If the page acquires a high rank on an arbitrarily chosen term t within the full computation of Algorithm 1, then at least one of the following must hold: (1) term t appears in page p, (2) many pages on topic t point to p, or (3) there are pages with high reputations on t that point to p...."

First flaw in the argument: they are assuming that they have correctly established the reputations of Web pages. They have not. Hence, everything fails from this point on (and this is the beginning of the algorithm).

But I'll be gentle. I'll continue.

"...This observation provides us with a practical way of identifying the candidate terms. We simply start from page p and collect all terms that appear in it. We then look at the incoming links of the page and collect all possible terms from those pages. We continue this process until we get to a point where either there is no incoming link or the incoming links have very small effects on the reputations of page p. Let us denote the maximum number of iterations by k. The algorithm can be expressed as follows:"

The above included just to document their methodology. Since the methodology relies upon an obviously flawed model, we know it won't produce correct results.


"Duality of Terms and Pages"

Granting that their functions DO return result sets, what follows is essentially correct. But the old Garbage In/Garbage Out principle still applies:

"Indeed, if we fix p in equations 1, 3, 4 to a specific page, we will find the reputation ranks of the page for every possible topic t. We may then report the topics with the highest reputation ranks. If we fix instead t in the same equations to be a specific topic, we will find the reputation ranks of every page on topic t. Again, we may report those pages with high reputation ranks first in the answer to a query."

Hence, they have diluted their database with improperly ranked pages, so all they can do at this point is optimize the search over the bad rankings.


"Experimental Evaluation"

"In this section, we describe a preliminary evaluation of our approach. Since we did not have access to a large crawl of the Web, it was not feasible to do the full rank computations of Section 3.1. We also did not fully implement the approximate algorithms suggested in Section 3.2 due to the limitations imposed by the search engines we used, either on the maximum number of entries returned for a query or on the response time."

Of course, even if they did have access to a "large crawl of the Web", they still would be arbitrarily excluding NEW Web pages, which make up a significant percentage of content in any search engine which accepts submissions (either directly, as Alta Vista and Google do, or indirectly, as Inktomi does).

So, once again, they proceed from a highly flawed model.

Then they don't even bother to implement the full methodology they have just described in the previous sections. May the Food and Drug Administration never approve medications in this fashion. The sloppy approach to validating their incorrect assumptions is simply mind-staggering. That anyone would give credence to this nonsense is even more mind-blowing.


"Known Authoritative Pages"


"In our first experiment, we picked a set of known authoritative pages on queries (java) and (+censorship +net), as reported by Kleinberg's HITS algorithm [11], and computed the topics that each page was an authority on. As shown in Figure 1, the term ``java'' is the most frequent term among pages that point to an authority on Java."

Okay, this test need not be dissected in detail. Basically, they picked the test most likely to validate their theory to see if the theory is easily validated. If it had failed this test, they would have stopped and revised their functions (and, who knows, maybe they had to do a fair amount of revision just to pass this test).

"In another experiment, we used Inquirus [14], the NECI meta-search engine, which computes authorities using an unspecified algorithm. We provided Inquirus with the query (``data warehousing'') and set the number of hits to its maximum, which was 1,000, to get the best authorities, as suggested by the system. We picked the top four authorities returned by Inquirus and used our system to compute the topics those pages have high reputations on. The result, as shown in Figure 3, is again consistent with the judgments of Inquirus."

This is a bolder test. They were applying their functions to a randomly selected data set (they didn't know how it would be selected). But notice that they used a term, "data warehousing", that is frequently brought up within the fields of computer science, data processing, and related subjects. These are not fields which are normally subjected to intensive marketing techniques. So this random test set is actually a more predictable test set. The meta search engine was most likely designed by people who make the same kinds of assumptions as these guys.

Since I cannot prove that, all I can say is that this test has questionable value. They have not validated the meta search engine they selected (let alone their selection methodology). The standard of completeness applied to the experimentation is thus very minimal.


"Personal Home Pages"

"In another experiment, we selected a set of personal home pages and used our system to find the high reputation topics for each page. We expected this to describe in some way the reputation of the owner of the page. The results, as shown in Figure 4, can be revealing, but need to be interpreted with some care. Tim Berners-Lee's reputation on the ``History of the Internet,'' Don Knuth's fame on ``TeX'' and ``Latex'' and Jeff Ullman's reputation on ``database systems'' and ``programming languages'' are to be expected. The humour site Dilbert Zone [6] seems to be frequently cited by Don Knuth's fans. Alberto Mendelzon's high reputation on ``data warehousing,'' on the other hand, is mainly due to an online research bibliography he maintains on data warehousing and OLAP in his home page, and not to any merits of his own."

I've looked at the table. Basically, the data set is flawed. It's too small a sample to be of any value. The largest number of links available was 1733 and all four pages evaluated were taken from academic servers (if I may be allowed to include the www.w3.org server in the academic world). Most personal pages are not devoted to computer science related themes, and are not found on academic servers.

Their last test went after Computer Science departments on the Web. So they made absolutely no effort to validate their assumptions against real-world conditions or anything approximating them. In fact, they chose a data set which was most likely to produce the kind of validation they were seeking: Web sites devoted to or created by computer scientists.

It doesn't take a genius to see that these results are stacked and obviously stacked in a very amateurish way. Even the people who claimed to have achieved cold fusion weren't this sloppy and unprofessional in their research.

But, to continue the gentle massaging of egos:

"Limitations"

"There are a number of factors that affect our page reputation computations....

"...The first factor is how well a topic is represented on the Web. A company, for instance, may have a high reputation on a specific topic, or a person may be well known for his or her contribution in a specific field, but their home pages may not receive the same recognition mainly because the topic or the field is not well represented on the Web; or even if it is, it may not be visible among other dominant topics. This can be easily seen in some of our experiments."

The first admission of guilt. Of course, they conveniently neglected to point out the serious flaws in their methodology documented above.

"The second factor is how well pages on a topic are connected to each other. There are two extreme cases that can affect the convergence of a topic in our computations. At one extreme, there are a few pages such as the Microsoft home page (www.microsoft.com) with incoming links from a large fraction of all pages on the Web. These pages end up having high reputation on almost every topic represented in the Web; it is not reasonable to identify a small set of highly-weighted topics for them."

A minor nod to the power of marketing, but they obviously don't understand the implications of marketing on the Internet.

"At the other extreme, there are pages with no more than a few incoming links; according to some estimates (e.g. [13]), a large number of pages fall in this category. Depending on where the incoming links of a page are coming from and the reputations of those links, they can have various effects on the reputation of a page according to our models. Our current implementation, however, may not report any strong reputations on any topic for these pages because all incoming links are simply weighted equally."

Unfortunately, they simply didn't take into consideration the fact that no new Web page has inbound links (unless it's anticipated, but such pages are rare). The database should have been diluted with pages that had no inbound links. Had they taken this simple step, they would have learned quickly why their approach won't work.

"Conclusions"

"We have introduced general notions of page reputation on a topic, combining the textual content and the link structure of the Web. Our notions of reputation are based on random walk models that generalize the pure link-based ranking methods developed earlier. For instance, our ranking based on the one-level weight propagation model becomes PageRank if the rank is computed with respect to all possible topics. We have presented algorithms for identifying the topics that a page has highest reputation on and for computing the reputation rank of a page on a topic. Our current work concentrates on refining the implementation of TOPIC to achieve more accurate rankings and better performance."

They are pursuing the wrong methodology. Their work will therefore never be finished. I have to ask: how many years will it be before they realize the serious flaw in their approach?

When Brett invited me to visit this forum, I pointed out that I would be brutal. I am unforgiving to the people who bat out these Fermatian Proofs and expect the world to just sit around and applaud their brilliance. To overlook the most fundamental aspects of the Web community in an analysis of how to properly catalogue its member sites is inexcusable.

I hope other research on which search engine technology is based is better done than this travesty, but I've already shown how the PageRank research was also seriously flawed (and how Google doesn't work as advertised) in the search engine forums at JimWorld.

Brett, I'm still looking for answers, and I'll keep my eyes on your forums here. But if this is the way search engine technology is moving, I may just continue to rely upon directories and finding my own links the hard way. Obviously these guys are not equipped to do the job for us.


Brett_Tabke




msg:817860
 4:15 am on Aug 10, 2000 (gmt 0)

Hey MM, thanks for dropping by.

You're right, research publication and studying the makeup of the bow-tie web is all the rage with the cap & gown set. The whole thing we were interested in with this thread was the fact that so many of those docs mentioned were by current admins at the SEs: Broder, Page, et al. That is good background on their line of thinking.

Are some of them right or wrong? It really doesn't matter, since they are going to do as they choose - we are just trying to figure out where it is all going.

Like that one on calculating term vectors, or figuring out how to determine a site's topic solely off a handful of links. It can have an effect on how we optimize and run our sites. No, we are not going to crack any algos reading term papers for a conference, but it does give us a slight road map. That one on topic distillation is very informative stuff, and probably 90% of the reason I shut down bl - it is good, real-world applicable information.

Brett_Tabke




msg:817861
 4:19 am on Aug 10, 2000 (gmt 0)

Continued Here [webmasterworld.com]
