I'm new to search engines, and have been looking through these forums and learning a lot...
But my question is:
Do search engines all use the same technologies to gather website keywords (e.g. spiders), and how often do new search methods/techniques arrive, so that a web designer has to change tactics to gain a better position?
Interestingly enough, the basics have not changed in the last few years. Gimmicks have come and gone, and off-page criteria have been added.
Technological changes happen for a variety of reasons: financial factors, competitors, the market in general, and new theories of search that are researched and implemented. No one can give you a hard answer here; the best bet is to study the history of the SEs and get a feel from there.
Over the years, we've seen four basic versions of the internet-based search engine:
- 95-96, the early database engines: simple text-matching database search engines. They stripped the HTML out of pages and looked for matches. (WWW Worm, Excite, WebCrawler)
- 96-98, the pure net play engines: more advanced matching and adjustable algos based on HTML structure and word density. (AltaVista, Infoseek, Excite)
- 98 onward, the link counters: these combined the best of the first two versions and added inbound link counting to the algos. (Google, Inktomi)
- 2001 onward, the context/theme engines: newer engines use "off the page" criteria and "context" such as directory listings, the text of inbound links, and page context, in addition to the first three formats. (Teoma, WiseNut)
So it has changed quite a bit over the years from the search side of things.
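If it helps to make that progression concrete, here's a toy scorer in Python that layers the signals the way those generations did: on-page matching and word density (versions 1-2), inbound link counting (version 3), and anchor-text context (version 4). The weights and function names are invented purely for illustration; no engine's actual algorithm looks like this.

```python
import re

# Toy illustration of the four generations above.
# The weights are made up; this is nobody's real ranking algorithm.

def keyword_density(text: str, term: str) -> float:
    """Versions 1-2: how often the term appears relative to page length."""
    words = re.findall(r"[a-z0-9]+", text.lower())
    return words.count(term.lower()) / len(words) if words else 0.0

def score(page_text: str, inbound_links: int, anchor_texts: list, term: str) -> float:
    density = keyword_density(page_text, term)      # on-page signal (versions 1-2)
    links = inbound_links                           # raw link counting (version 3)
    anchors = sum(term.lower() in a.lower()         # off-page context (version 4)
                  for a in anchor_texts)
    return 100.0 * density + 1.0 * links + 5.0 * anchors

# e.g. score("widgets and more widgets", inbound_links=12,
#            anchor_texts=["best widgets", "home"], term="widgets")
```

Each generation basically kept the previous signals and bolted a new one on top, which is why the later engines were so much harder to game with on-page tricks alone.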
Spider side - pretty much the same today as it was years ago. The only change has been in crawler intelligence. They've gotten far better at knowing how to walk through a site and identifying the important pages.
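If you want to picture what a spider actually does, here's a bare-bones breadth-first crawl in Python using only the standard library. It's a sketch, not what any real crawler runs: it skips robots.txt, politeness delays, and the page-importance scoring mentioned above.

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen

class LinkParser(HTMLParser):
    """Collects href values from <a> tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start_url: str, max_pages: int = 50):
    """Breadth-first walk of one site, yielding each URL as it is fetched."""
    host = urlparse(start_url).netloc
    seen, queue = {start_url}, deque([start_url])
    while queue and len(seen) <= max_pages:
        url = queue.popleft()
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "replace")
        except (OSError, ValueError):
            continue  # unreachable or malformed URL; real crawlers retry/log
        parser = LinkParser()
        parser.feed(html)
        for href in parser.links:
            absolute = urljoin(url, href)
            # Stay on the same host so the walk doesn't wander off-site
            if urlparse(absolute).netloc == host and absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)
        yield url

# e.g. for page in crawl("http://example.com/"): print(page)
```

The "intelligence" gains have been in what sits around this loop - deciding which discovered URLs are worth fetching first - not in the basic fetch-parse-follow cycle itself.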