Search engines consist of five discrete software components:

1. Spider: a robot-like browser program that downloads web pages.
2. Crawler: a wandering spider that automatically follows links found on pages.
3. Indexer: a blender-like program that dissects the web pages downloaded by spiders.
4. Database: a warehouse of the pages that have been downloaded and processed.
5. Results engine: digs search results out of the database.
Am I right in thinking it is the Indexer that handles the semantic side of things?
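To make the indexer/database/results-engine split above concrete, here is a minimal, hypothetical sketch (not any real search engine's code): the indexer dissects page text into tokens and stores them in an inverted index (the "warehouse"), and the results engine digs matching pages back out. The function names, URLs, and data structures are all illustrative assumptions.

```python
from collections import defaultdict

def index_page(database, url, text):
    """Indexer: dissect a page into lowercase tokens and record,
    for each token, which URLs it appeared on (an inverted index)."""
    for token in text.lower().split():
        database[token].add(url)

def search(database, query):
    """Results engine: return the URLs containing every query token."""
    tokens = query.lower().split()
    if not tokens:
        return set()
    # Start from the pages matching the first token, then intersect.
    results = database[tokens[0]].copy()
    for token in tokens[1:]:
        results &= database[token]
    return results

# The "warehouse" of processed pages (hypothetical example data).
database = defaultdict(set)
index_page(database, "http://example.com/a", "Cats chase mice")
index_page(database, "http://example.com/b", "Mice eat cheese")

print(search(database, "mice"))       # both pages mention mice
print(search(database, "cats mice"))  # only the first page has both
```

Real indexers of course do far more than split on whitespace (stemming, stop-word removal, ranking signals), which is where the semantic work you mention would live.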