
Google SEO News and Discussion Forum

Google, time to define "RELEVANT" or "RELATED" linking and reciprocal linking
What do MC and GG consider relevant & related in linking?
giuliorapetti

5+ Year Member



 
Msg#: 34403 posted 10:37 am on May 19, 2006 (gmt 0)

Hi guys,

I'm a bit worried about the simplistic concept of "relevant" or "related" content used by MC when he talks about linking and reciprocal linking.

I'll explain what I mean with an example: we are a hotel reservation website and we deal with hotels in various destinations around the world.

Our "related resources" are the ones that would be _USEFUL_ for a traveller.

As the traveller will book the hotel with us, the rest of the resources are "complementary" resources and not competitive resources.

Example of what we link to and what our travellers want us to link to (as these are useful things to know if you have already booked, or are about to book, a hotel):

- Car rentals
- Airport transfer services
- Bicycle Rentals
- Art Galleries
- Cinemas
- Museums
- Theaters
- Bars
- Food Festivals
- Restaurants
- Casinos (Yes, if you book a hotel in Las Vegas, you want to know the best casinos if you don't have one inside your hotel)
- Clubs and Discos
- Festivals & events
- Nightclubs

I also have another 195 categories of resources that we regularly link to in order to build a good service for our hotel-bookers.

As you see, these are all hotel- and travel-related resources, which makes our websites heavily visited and one-way-linked, simply because this is useful info for a traveller who wants to book a hotel and learn more about the area.

NOW: I'm worried about what MC says in his blog, and about the way the whole SEO world has been using and defining "relevant/related" content.

It should be natural that a website will link to COMPLEMENTARY resources, not COMPETITORS. Therefore, the keywords found in our outgoing links are 100% different from what we sell.

Therefore, I'm deeply worried about the concept of "related" that Google is applying, or will apply, when evaluating what type of links you have on your pages.

MC says:

"another real estate site......I checked out the site. Aha, Poor quality links...mortgages sites...."

Now: is MC aware that mortgage sites are natural, relevant and pertinent things to link to if you are a real estate agent, since you might want to offer related services to your visitors, telling them how to find the money to buy what you sell?

Or does MC search for related content in terms of a simplistic "real estate words are good, anything else is bad"? I mean: is Google even thinking about the fact that a real estate site cannot link to a competitor, but will be more likely to link to complementary services?

In short: do Google and MC want us (a hotel reservation service) to link to Hotels.com because it would be "relevant" (which is complete nonsense, as they are our competitors), or is Google "mapping" the related (complementary) services for every industry?

I doubt that Google will have a map of every complementary service for any given industry: therefore, I'm afraid that "related" for MC means "same topic, same industry... competitors, essentially".

Will MC want Expedia to link to Orbitz in order to evaluate Expedia's links as relevant?

Or will MC and Google evaluate Hotels.com better (or at least not worse) for linking to Avis or Budget?

Thanks

Giulio


 

pageoneresults

WebmasterWorld Senior Member, Top Contributor of All Time, 10+ Year Member



 
Msg#: 34403 posted 1:45 am on May 22, 2006 (gmt 0)

Hmmm, maybe we should define the various types of links within a website before determining that all links fall under these criteria?

mattg3

WebmasterWorld Senior Member 5+ Year Member



 
Msg#: 34403 posted 2:51 am on May 22, 2006 (gmt 0)

Let's try to find logical relevance in a mega-popular site like MySpace .. ;)

Artificial puberty is gonna be the next big milestone after artificial intelligence ..

Spock, Data and now the search engine algorithms trying to understand what is human .. :)

logic != relevance

glengara

WebmasterWorld Senior Member 10+ Year Member



 
Msg#: 34403 posted 8:57 am on May 22, 2006 (gmt 0)

I've always thought that links which are topical/relevant to the page content are a pretty failsafe approach....

dataguy

WebmasterWorld Senior Member 10+ Year Member



 
Msg#: 34403 posted 2:25 pm on May 22, 2006 (gmt 0)

If you want to see what Google considers "RELATED", do a search and click on the link that says "Similar pages". If this doesn't scare you, I don't know what will...

mattg3

WebmasterWorld Senior Member 5+ Year Member



 
Msg#: 34403 posted 2:39 pm on May 22, 2006 (gmt 0)

I've always thought that links which are topical/relevant to the page content are a pretty failsafe approach....

In a 100% logical world ... pitifully, people, and therefore the web, aren't 100% logical .. :)

websoccermom

5+ Year Member



 
Msg#: 34403 posted 2:40 pm on May 22, 2006 (gmt 0)

I've always thought that links which are topical/relevant to the page content are a pretty failsafe approach....

I feel the same way, but the problem is not what we as webmasters and/or searchers feel is relevant; it is what Google feels is relevant.

Let's say I design a page for a soccer club. On the front page we would like to thank a few of our corporate sponsors with links. These sponsors, which could be varied, have nothing to do with soccer. Thereby, it sounds as if having these non-related links may prevent my pages from being crawled as deeply or as often. I don't think this makes sense. I don't expect the links to count or to rank my site for any of the non-related keywords, or even the related ones, but according to MC's post, having these links under the new algo may prevent my page from being crawled as much, or worse, have you de-indexed.

pageoneresults

WebmasterWorld Senior Member, Top Contributor of All Time, 10+ Year Member



 
Msg#: 34403 posted 2:59 pm on May 22, 2006 (gmt 0)

Let's say I design a page for a soccer club. On the front page we would like to thank a few of our corporate sponsors with links.

That would be natural, don't you think?

These sponsors, which could be varied, have nothing to do with soccer.

That too would be natural.

Thereby, it sounds as if having these non-related links may prevent my pages from being crawled as deeply or as often.

I think we may be reading too much into this. Remember, links are just one part of the equation. If you are involved in link exchanges or the types of links that Matt Cutts is referring to, there are most likely going to be other signals. Providing links to sponsors of a soccer club is only natural. I would assume that those will be links to quality resources? If they aren't, then there may or may not be issues. It's all going to be relative to all of the other factors that would be used in determining the quality of the page. Worst-case scenario, they just don't have any value from an indexing standpoint. If you get too many of them, then there may be some issues.

glengara

WebmasterWorld Senior Member 10+ Year Member



 
Msg#: 34403 posted 3:23 pm on May 22, 2006 (gmt 0)

*It's all going to be relative to all of the other factors that would be used in determining the quality of the page.*

And it's the message that the overall linkage pattern conveys that counts, IMO.

Kufu

5+ Year Member



 
Msg#: 34403 posted 3:24 pm on May 22, 2006 (gmt 0)

If you want to see what Google considers "RELATED", do a search and click on the link that says "Similar pages". If this doesn't scare you, I don't know what will...

The "Similar pages" link is useless, as the only similarity between the sites is that they are linked to from the same page somewhere. For example, on my web design site I have a portfolio page listing the sites that I have worked on (and link to). Those sites will be considered similar, but in reality they have almost nothing to do with one another.

EdmondDantes

10+ Year Member



 
Msg#: 34403 posted 4:38 pm on May 22, 2006 (gmt 0)

I think that the relevance of links could be described in the following way.

To use your example: if you have a website that lets users book hotel rooms, some users may want to see what casinos are available in the resort. Now there are two approaches:

1. Provide a link to a site that lists and reviews casinos; let's call this the relevant way.

2. Provide 100 links to 100 random casinos that happen to be willing to exchange links; let's call this the web spam way.

If I were Google I know which approach I would feel was providing relevant and related links.

Just my 2 cents

idolw

WebmasterWorld Senior Member 5+ Year Member



 
Msg#: 34403 posted 4:58 pm on May 22, 2006 (gmt 0)

G's biggest problem is that they try to force us to build and manage our websites according to rules they set.
Sorry guys, but I will still be linking to whatever I find interesting and useful. How can a robot assess that?

Why won't Google just state that every site in their index must use Google Sitemaps and Analytics and follow users' habits?
They could finally stop playing silly games with webmasters.

julinho

10+ Year Member



 
Msg#: 34403 posted 4:59 pm on May 22, 2006 (gmt 0)

I think Google has a truckload of info we don't have, to assess relevance (and the other ranking factors).

They serve maybe one billion searches a day, and have been accumulating data for years; add to that the fact that they can track the behavior of a significant number of users.

How many searches have the keywords [widget] and [gadget] in the same string? Or in successive search strings? How many people search for [widget] but end up spending time on a site in the [gadget] neighbourhood?
Their algorithm may draw some conclusions, which will become more reliable as the number of pages/clicks/topics/etc. grows.
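
As a purely speculative sketch of the kind of co-occurrence signal being described (nothing here reflects Google's actual pipeline; the session data and counting scheme are assumptions for illustration only):

from collections import Counter
from itertools import combinations

# Toy "search sessions": each inner list is the queries one user ran.
sessions = [
    ["widget", "gadget"],
    ["widget", "widget reviews"],
    ["gadget", "widget"],
    ["doodad"],
]

pair_counts = Counter()
for queries in sessions:
    for a, b in combinations(sorted(set(queries)), 2):
        pair_counts[(a, b)] += 1

# The more often [widget] and [gadget] appear in the same session,
# the stronger the hint that the two topics are related.
print(pair_counts[("gadget", "widget")])  # 2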

Only Google has access to this kind of information, and only they know how they use it.
Make your site useful and informative, and you won't have to care about relevance (only worry about the other 99 factors).

simonmc

5+ Year Member



 
Msg#: 34403 posted 5:03 pm on May 22, 2006 (gmt 0)

Reciprocal linking pre-dates the birth of Google. This is how the web used to work. Until Google came along and spoiled it, that is.

If Google continues to make you jump through hoops to rank at Google, then it would not surprise me to see your site get broken and not work in other search engines.

It is not normal for the search engines to dictate to web masters how they should build their sites.

Web masters build sites, search engines index them. Simple. Google seems to have lost sight of that.

pageoneresults

WebmasterWorld Senior Member, Top Contributor of All Time, 10+ Year Member



 
Msg#: 34403 posted 5:09 pm on May 22, 2006 (gmt 0)

For those of you who don't mind watching paint dry, this is an excellent read and will give you some ideas as to what may determine link relevancy.

Information retrieval based on historical data [appft1.uspto.gov]

[0066]Link-Based Criteria

[0067] According to an implementation consistent with the principles of the invention, one or more link-based factors may be used to generate (or alter) a score associated with a document. In one implementation, the link-based factors may relate to the dates that new links appear to a document and that existing links disappear. The appearance date of a link may be the first date that search engine 125 finds the link or the date of the document that contains the link (e.g., the date that the document was found with the link or the date that it was last updated). The disappearance date of a link may be the first date that the document containing the link either dropped the link or disappeared itself.

[0068] These dates may be determined by search engine 125 during a crawl or index update operation. Using this date as a reference, search engine 125 may then monitor the time-varying behavior of links to the document, such as when links appear or disappear, the rate at which links appear or disappear over time, how many links appear or disappear during a given time period, whether there is trend toward appearance of new links versus disappearance of existing links to the document, etc.

[0069] Using the time-varying behavior of links to (and/or from) a document, search engine 125 may score the document accordingly. For example, a downward trend in the number or rate of new links (e.g., based on a comparison of the number or rate of new links in a recent time period versus an older time period) over time could signal to search engine 125 that a document is stale, in which case search engine 125 may decrease the document's score. Conversely, an upward trend may signal a "fresh" document (e.g., a document whose content is fresh--recently created or updated) that might be considered more relevant, depending on the particular situation and implementation.

[0070] By analyzing the change in the number or rate of increase/decrease of back links to a document (or page) over time, search engine 125 may derive a valuable signal of how fresh the document is. For example, if such analysis is reflected by a curve that is dropping off, this may signal that the document may be stale (e.g., no longer updated, diminished in importance, superceded by another document, etc.).

[0071] According to one implementation, the analysis may depend on the number of new links to a document. For example, search engine 125 may monitor the number of new links to a document in the last n days compared to the number of new links since the document was first found. Alternatively, search engine 125 may determine the oldest age of the most recent y % of links compared to the age of the first link found.

[0072] For the purpose of illustration, consider y=10 and two documents (web sites in this example) that were both first found 100 days ago. For the first site, 10% of the links were found less than 10 days ago, while for the second site 0% of the links were found less than 10 days ago (in other words, they were all found earlier). In this case, the metric results in 0.1 for site A and 0 for site B. The metric may be scaled appropriately. In another exemplary implementation, the metric may be modified by performing a relatively more detailed analysis of the distribution of link dates. For example, models may be built that predict if a particular distribution signifies a particular type of site (e.g., a site that is no longer updated, increasing or decreasing in popularity, superceded, etc.).
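
To make the [0072] metric concrete, here is a rough, hypothetical Python sketch (this is not the patent's or Google's actual code; the 10-day window, data and function names are my own assumptions):

def recent_link_fraction(link_ages_in_days, window_days=10):
    # Fraction of a document's inbound links first seen within the window.
    if not link_ages_in_days:
        return 0.0
    recent = sum(1 for age in link_ages_in_days if age < window_days)
    return recent / len(link_ages_in_days)

# Site A: 10 of its 100 links were found less than 10 days ago -> 0.1
site_a = [5] * 10 + [50] * 90
# Site B: all 100 links were found earlier -> 0.0
site_b = [50] * 100

print(recent_link_fraction(site_a))  # 0.1
print(recent_link_fraction(site_b))  # 0.0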

[0073] According to another implementation, the analysis may depend on weights assigned to the links. In this case, each link may be weighted by a function that increases with the freshness of the link. The freshness of a link may be determined by the date of appearance/change of the link, the date of appearance/change of anchor text associated with the link, date of appearance/change of the document containing the link. The date of appearance/change of the document containing a link may be a better indicator of the freshness of the link based on the theory that a good link may go unchanged when a document gets updated if it is still relevant and good. In order to not update every link's freshness from a minor edit of a tiny unrelated part of a document, each updated document may be tested for significant changes (e.g., changes to a large portion of the document or changes to many different portions of the document) and a link's freshness may be updated (or not updated) accordingly.

[0074] Links may be weighted in other ways. For example, links may be weighted based on how much the documents containing the links are trusted (e.g., government documents can be given high trust). Links may also, or alternatively, be weighted based on how authoritative the documents containing the links are (e.g., authoritative documents may be determined in a manner similar to that described in U.S. Pat. No. 6,285,999). Links may also, or alternatively, be weighted based on the freshness of the documents containing the links using some other features to establish freshness (e.g., a document that is updated frequently (e.g., the Yahoo home page) suddenly drops a link to a document).

[0075] Search engine 125 may raise or lower the score of a document to which there are links as a function of the sum of the weights of the links pointing to it. This technique may be employed recursively. For example, assume that a document S is 2 years olds. Document S may be considered fresh if n % of the links to S are fresh or if the documents containing forward links to S are considered fresh. The latter can be checked by using the creation date of the document and applying this technique recursively.
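
Likewise, a minimal illustrative sketch of the freshness-weighted link sum described in [0073]-[0075] (the exponential decay, half-life and trust values are assumptions for illustration only, not anything the patent specifies):

import math

def link_weight(link_age_days, trust=1.0, half_life_days=180):
    # Weight that increases with link freshness, scaled by source trust.
    decay = math.exp(-math.log(2) * link_age_days / half_life_days)
    return trust * decay

def document_link_score(links):
    # Score a document as the sum of the weights of links pointing to it.
    return sum(link_weight(age, trust) for age, trust in links)

# (age of link in days, trust of the linking document)
example_links = [(10, 1.0), (400, 1.0), (30, 2.0)]  # 2.0 = e.g. a .gov page
print(round(document_link_score(example_links), 3))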

[0076] According to yet another technique, the analysis may depend on an age distribution associated with the links pointing to a document. In other words, the dates that the links to a document were created may be determined and input to a function that determines the age distribution. It may be assumed that the age distribution of a stale document will be very different from the age distribution of a fresh document. Search engine 125 may then score documents based, at least in part, on the age distributions associated with the documents.

[0077] The dates that links appear can also be used to detect "spam," where owners of documents or their colleagues create links to their own document for the purpose of boosting the score assigned by a search engine. A typical, "legitimate" document attracts back links slowly. A large spike in the quantity of back links may signal a topical phenomenon (e.g., the CDC web site may develop many links quickly after an outbreak, such as SARS), or signal attempts to spam a search engine (to obtain a higher ranking and, thus, better placement in search results) by exchanging links, purchasing links, or gaining links from documents without editorial discretion on making links. Examples of documents that give links without editorial discretion include guest books, referrer logs, and "free for all" pages that let anyone add a link to a document.
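
And a toy sketch of the back-link spike idea in [0077] (thresholds and window sizes are invented; a real system would still have to separate genuine news events like the CDC/SARS case from link buying):

def looks_like_link_spike(daily_new_links, window=7, spike_factor=5.0):
    # True if the last `window` days acquired links far faster than the
    # document's historical baseline rate.
    if len(daily_new_links) <= window:
        return False
    history, recent = daily_new_links[:-window], daily_new_links[-window:]
    baseline = (sum(history) / len(history)) or 1e-9
    recent_rate = sum(recent) / len(recent)
    return recent_rate > spike_factor * baseline

steady = [3, 2, 4, 3, 2, 3, 4, 3, 2, 3, 4, 2, 3, 3]
spiky = [3, 2, 4, 3, 2, 3, 4, 60, 80, 75, 90, 70, 85, 95]
print(looks_like_link_spike(steady))  # False
print(looks_like_link_spike(spiky))   # True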

[0078] According to a further implementation, the analysis may depend on the date that links disappear. The disappearance of many links can mean that the document to which these links point is stale (e.g., no longer being updated or has been superseded by another document). For example, search engine 125 may monitor the date at which one or more links to a document disappear, the number of links that disappear in a given window of time, or some other time-varying decrease in the number of links (or links/updates to the documents containing such links) to a document to identify documents that may be considered stale. Once a document has been determined to be stale, the links contained in that document may be discounted or ignored by search engine 125 when determining scores for documents pointed to by the links.

[0079] According to another implementation, the analysis may depend, not only on the age of the links to a document, but also on the dynamic-ness of the links. As such, search engine 125 may weight documents that have a different featured link each day, despite having a very fresh link, differently (e.g., lower) than documents that are consistently updated and consistently link to a given target document. In one exemplary implementation, search engine 125 may generate a score for a document based on the scores of the documents with links to the document for all versions of the documents within a window of time. Another version of this may factor a discount/decay into the integration based on the major update times of the document.

[0080] In summary, search engine 125 may generate (or alter) a score associated with a document based, at least in part, on one or more link-based factors.

The above information relates to links. There is so much more in that document that, after reading it, I'm sure there will be bright lights beaming from around the world. ;)

Not only that, but it gives you some insight as to how Google and other search engines may be determining the true quality of a link and the document containing it.


decaff

WebmasterWorld Senior Member 10+ Year Member



 
Msg#: 34403 posted 5:09 pm on May 22, 2006 (gmt 0)

I would say that Google has no choice (with their current revenue model)...to set "relevancy" thresholds at levels that continue to grow their PPC revenue...plain and simple!

The days of the natural SERPs being "pure as the driven snow" are way behind us...now expect the SERPs to resemble the muddied waters of the Colorado River...and when things get rough...you've just fallen out of the relative safety of a large excursion raft into a class 5 rapid....

beren

10+ Year Member



 
Msg#: 34403 posted 5:15 pm on May 22, 2006 (gmt 0)

I don't get the anti-Google sentiment around here with regard to related and relevant linking. It seems to be a good move for them to attempt to improve their search results in the face of spam. Anything to lessen the influence of paid links is a good thing.

There was an astonishing entry at Matt Cutts' blog recently (dated May 16). He showed sites from people at webmasterworld.com who were complaining. At the bottom of the site pages were a bunch of completely irrelevant links. These were obviously given in exchange for money or other links. It was horrible, and no reasonable search engine user would criticize Google for trying to counteract this spam. I was amazed that someone would run a site like that and then complain on webmasterworld.com about Google, and it really made me lose respect for this place.

pageoneresults

WebmasterWorld Senior Member, Top Contributor of All Time, 10+ Year Member



 
Msg#: 34403 posted 5:20 pm on May 22, 2006 (gmt 0)

I'd like to point out item #0077 from above...

[0077] The dates that links appear can also be used to detect "spam," where owners of documents or their colleagues create links to their own document for the purpose of boosting the score assigned by a search engine. A typical, "legitimate" document attracts back links slowly. A large spike in the quantity of back links may signal a topical phenomenon (e.g., the CDC web site may develop many links quickly after an outbreak, such as SARS), or signal attempts to spam a search engine (to obtain a higher ranking and, thus, better placement in search results) by exchanging links, purchasing links, or gaining links from documents without editorial discretion on making links. Examples of documents that give links without editorial discretion include guest books, referrer logs, and "free for all" pages that let anyone add a link to a document.

mattg3

WebmasterWorld Senior Member 5+ Year Member



 
Msg#: 34403 posted 5:39 pm on May 22, 2006 (gmt 0)

Not only that, but it gives you some insight as to how Google and other search engines may be determining the true quality of a link and the document containing it.

There are many things one can define or propose as relevant, and there are many statistical/mathematical techniques out there. But modelling life is both interesting and error-prone.

Modelling the global behaviour of an entire species, logically and emotionally? I don't think so. You can of course do some things. But again, on 1 billion pages an error rate of 0.01 is 10 million wrong detections.

Strong signals are always easy to detect, the middle ground is where it gets complicated.

You can always make a model. Whether it is the right model is the next question.

My main concern is that the logical and emotional information contained in billions of pages is too much information to model both correctly and fairly, even given a huge amount of training data. This is simply because the small team doing the modelling, as clever as they may be, is unlikely, even with their combined skills, to be able to assess human behaviour with an error small enough to be useful in the long run. The more pages, webmasters, posters and wiki contributors there are, the smaller your error has to become to reach the same efficiency as before.

So if you have 1 million pages the error might be negligible, but the bigger the web gets, the better and more precise your model has to be to maintain your previous standard.

To actually improve with more and more pages you will have to improve your model at a rate faster than the web is growing.
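
A quick back-of-the-envelope check of that point (my own illustrative numbers only):

for pages in (1_000_000, 1_000_000_000, 10_000_000_000):
    error_rate = 0.01
    wrong = int(pages * error_rate)
    print(f"{pages:>14,} pages at 1% error -> {wrong:,} wrong detections")

# To keep the absolute number of mistakes at the 1-million-page level
# (10,000 pages) on a 10-billion-page web, the error rate would have to
# shrink to 0.0001% -- the model must improve faster than the web grows.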

ogletree

WebmasterWorld Senior Member, Top Contributor of All Time, 10+ Year Member



 
Msg#: 34403 posted 5:48 pm on May 22, 2006 (gmt 0)

I see too many sites that rank well with only recips and paid links. The ones I see ranking do this only with on-topic links. One guy I am trying to beat right now has a directory on his site set up for recip linking. He ranks for a 2-word local phrase, "city profession". He also has links from a bunch of directories that cost money to be in.

annej

WebmasterWorld Senior Member, Top Contributor of All Time, 10+ Year Member



 
Msg#: 34403 posted 5:58 pm on May 22, 2006 (gmt 0)

No doubt there are a lot of stale links to my history site. I have a lot of stale articles too. It's a stale, stale topic.

DoingItWell

10+ Year Member



 
Msg#: 34403 posted 6:16 pm on May 22, 2006 (gmt 0)

The way I read this, you should put your focus on new pages rather than maintaining existing pages, because the new links are the important ones to determine a site's position in a lifecycle?

I have several travel related websites and can only agree that it is very hard to figure out or find a limit to what is relevant, and I'm seriously worried about how the SE's determine what's what.

pageoneresults

WebmasterWorld Senior Member, Top Contributor of All Time, 10+ Year Member



 
Msg#: 34403 posted 6:20 pm on May 22, 2006 (gmt 0)

No doubt there are a lot of stale links to my history site. I have a lot of stale articles too. It's a stale, stale topic.

Stale is just one part of the equation. Just because the document doesn't change doesn't mean that it is being devalued. It could be the exact opposite: the document may have attained authority status, and staleness isn't part of the equation.

[0068] These dates may be determined by search engine 125 during a crawl or index update operation. Using this date as a reference, search engine 125 may then monitor the time-varying behavior of links to the document, such as when links appear or disappear, the rate at which links appear or disappear over time, how many links appear or disappear during a given time period, whether there is trend toward appearance of new links versus disappearance of existing links to the document, etc.

Links to documents that are stale would tend to change. Links to documents that are informational, such as tutorials, historical material, etc., would fall under a different category. Those types of links are more permanent.

babsie1

10+ Year Member



 
Msg#: 34403 posted 6:51 pm on May 22, 2006 (gmt 0)

"put your focus on new pages rather than maintaining existing pages"

One must do both since existing pages could be "expert" or "professional" pages, as pageoneresults points out.

Go after links -- good links -- every month for these new pages and some for the old pages.

It's a slow burn, folks. A slow consistent burn that those of us who have been in writing and marketing and sales for yeeeears know all about.

Be original. Be consistent. Be there. Show up.

You will succeed.

[Two years ago, #12 for key words in Google, then #7 for key words, then #5 for a year, now #4. A slow but sure climb BUT traffic has tripled 'cause I do more than worry about googlebot...]

pageoneresults

WebmasterWorld Senior Member, Top Contributor of All Time, 10+ Year Member



 
Msg#: 34403 posted 7:02 pm on May 22, 2006 (gmt 0)

The way I read this, you should put your focus on new pages rather than maintaining existing pages.

I read it a bit differently. While you should always focus on new content, improving and expanding existing content is also of great benefit. And then you have content that is informational and tutorial in nature and doesn't change much over time. How that document is linked to from within your own site and via IBLs (Inbound Links) will determine the quality (relevancy) of the page.

Because the new links are the important ones to determine a site's position in a lifecycle?

I think it is the older links that really determine a site's position in a lifecycle. If a site has a short lifecycle, which many do, then it's an all-out linkfest, which works. But how long it works is becoming an issue for many.

Sites that are here for the long term need not worry about the whole link-craze thing. Just research a few quality links and let it go from there. Time is the determining factor in all of this.

dudester

5+ Year Member



 
Msg#: 34403 posted 7:44 pm on May 22, 2006 (gmt 0)

Echoing the comment of a previous user: linking behaviour can be simulated and misinterpreted by SEs and users alike. Therefore, it will not produce accuracy in SERPs. Tracking the behavioural patterns of users (bookmarking patterns, pageviews et al.) would increase the relevancy of SERPs, in combination with reduced emphasis on linking. Linking is an outdated concept. Granted, the SEO industry will be facing pressure to adapt to behavioural measurements, since currently most of its money comes from linking. No?

crobb305

WebmasterWorld Senior Member, Top Contributor of All Time, 10+ Year Member



 
Msg#: 34403 posted 8:05 pm on May 22, 2006 (gmt 0)

By telling people how to link they are distorting "natural linking". Ideally (IMHO), pages rank well because the person who created them knows nothing about SEO, and simply created pages that have good content, not because they strategically determined which outbound links are "good" in Google's eyes.

Now we have people inside Google preaching about how to link. It seems now that we have to validate the "quality" of the pages we link to from an SEO standpoint, rather than linking to sites we deem to be "quality" from an informational perspective.

goubarev

5+ Year Member



 
Msg#: 34403 posted 8:44 pm on May 22, 2006 (gmt 0)

I agree with simonmc.
Who is Google to tell me how to build my site?!
I was linking to sites way before Google was even around. Now they come around and tell me that the sites I'm linking to are bad? Well, how come thousands of my returning customers don't think so? How come MSN and Yahoo don't think so?

Google has managed to screw up the whole internet by "ranking sites by incoming links" - this was the wrong concept from the start - the right concept would be to rank sites by user behavior. No matter how much they "tweak" their algo, no matter how much electricity they waste trying to compute "relevant" - it isn't going to work...

steveb

WebmasterWorld Senior Member, Top Contributor of All Time, 10+ Year Member



 
Msg#: 34403 posted 8:49 pm on May 22, 2006 (gmt 0)

Some folks want more feedback from Google.

Some folks don't want Google to dictate how they build their sites.

These folks don't go to the same restaurant. I'm perfectly happy with Google saying exactly why they will or will not rank things. I'll then do whatever I want.

trinorthlighting

WebmasterWorld Senior Member 5+ Year Member



 
Msg#: 34403 posted 8:54 pm on May 22, 2006 (gmt 0)

I do not think Google should even judge on links...... Content! KISS (keep it simple, stupid) should apply.....

zuses

5+ Year Member



 
Msg#: 34403 posted 9:00 pm on May 22, 2006 (gmt 0)

As I'm not an SEO professional, just the editor of a non-profit site, I can't understand why linking to your site from blogs & conferences is treated as not human, just search-engine-oriented. For example: I got the information about an architectural exhibition & published it on my site. Later some visitors republished a part of this information in their blogs etc., linking to my site. I'm sure they know nothing about search engines, ranking & so on. They only want other professionals to know about that exhibition. So now I have to prevent this linking because of Google? I can't do this & to tell the truth I don't want to. I want to share the information - wasn't the internet created to do this job?
P.S. Sorry for my awful English, but as Tolstoy wrote, I can't keep silent :)

pageoneresults

WebmasterWorld Senior Member, Top Contributor of All Time, 10+ Year Member



 
Msg#: 34403 posted 9:03 pm on May 22, 2006 (gmt 0)

Who is Google to tell me how to build my site?!

Google is not telling you how to build or promote your site. They are offering suggestions. If you don't want the estimated 45%+ market share that Google has to find your site, that is your choice. ;)

We're not in the pre-Google days anymore. Nor are we dealing with an index of 500 million pages. So far, Google appears to have done the best with what they have to work with (8+ billion pages). Until someone comes out with something better, what can we do? Get natural traffic? Sure, but that takes time, which many don't appear to have. We're no longer in those InfoSeek/AltaVista days when you could launch a new site and be raking in the bucks within a week. It's a long-term proposition. For most of us anyway. ;)

And if you need results right now, that's what PPC is for. I know many don't have the budget, but the bottom line is that it is the only alternative as a short-term strategy while the site is getting seated in the index. During that seating-in period, the natural and relevant links will begin to develop.
