A spider is a robotic program that downloads webpages. It works just as your browser does when you connect to a website and download a page; the spider simply has no visual component. You can see what a spider sees by opening any webpage and selecting "view source" in your browser.
As a spider downloads pages, it can strip each page apart and look for "links". It is the crawler's job to then decide where the spider should go next, based either on those links or on a preprogrammed list of URLs.
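As a sketch, the download-and-extract step described above might look like the following in Python, using only the standard library. This is a minimal illustration, not a production spider (a real one would also respect robots.txt, handle errors, and track visited URLs); the `spider` function name is our own.

```python
from html.parser import HTMLParser
from urllib.request import urlopen

class LinkExtractor(HTMLParser):
    """Collects the href of every <a> tag found in a downloaded page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def spider(url):
    # Download the page just as a browser would, but without rendering it.
    html = urlopen(url).read().decode("utf-8", errors="replace")
    parser = LinkExtractor()
    parser.feed(html)
    # The crawler would decide which of these links to visit next.
    return parser.links
```

The crawler logic sits on top of this: it takes the returned links, filters and queues them, and feeds each one back into `spider`.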
An indexer rips a page apart into its various components and analyzes them. Elements such as titles, headings, links, body text, and the bold, italic, and other styled portions of a page are separated out and analyzed individually.
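To make the separation concrete, here is a toy sketch of that component split in Python. It sorts a page's text into buckets by the tag it appeared under (title, headings, bold, and so on, with untagged text as "text"); the class name and the set of tracked tags are our own simplifications.

```python
from html.parser import HTMLParser

class PageIndexer(HTMLParser):
    """Splits a page into the components an indexer analyzes separately."""
    TRACKED = {"title", "h1", "h2", "h3", "a", "b", "strong", "i", "em"}

    def __init__(self):
        super().__init__()
        self.parts = {}   # component name -> list of text fragments
        self.stack = []   # currently open tags we care about

    def handle_starttag(self, tag, attrs):
        if tag in self.TRACKED:
            self.stack.append(tag)

    def handle_endtag(self, tag):
        if self.stack and self.stack[-1] == tag:
            self.stack.pop()

    def handle_data(self, data):
        text = data.strip()
        if text:
            # File the text under the innermost tracked tag, or plain "text".
            bucket = self.stack[-1] if self.stack else "text"
            self.parts.setdefault(bucket, []).append(text)
```

Feeding a page into `PageIndexer` yields a dictionary like `{"title": [...], "h1": [...], "b": [...], "text": [...]}`, which the engine can then weigh differently per component.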
The database is the storage medium for all the data a search engine downloads and analyzes. This can require huge amounts of storage space.
Search Engine Results Engine:
Ah, the heart of the beast. It is the results engine's job to decide which pages match a user's search. This is the portion of a search engine you interact with when you perform a search, and it is the one part we are concerned with here.
When a user types in a keyword and runs a search, the search engine decides what to return as results according to varying criteria. The method by which it decides is called an algorithm. You may hear search engine optimization (SEO) professionals discuss "algos" from time to time, and this is what they are referring to.
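A deliberately simplified sketch of such an algorithm, in Python: score each page by where the query terms appear, weighting a title match more heavily than a body match. The weights and page data here are invented for illustration; real algorithms use vastly more signals.

```python
def score(page, query_terms):
    """Toy ranking: weight keyword matches by which component they appear in."""
    # Hypothetical weights -- real engines combine far more signals.
    weights = {"title": 3.0, "heading": 2.0, "body": 1.0}
    total = 0.0
    for term in query_terms:
        for part, text in page.items():
            if term.lower() in text.lower():
                total += weights.get(part, 1.0)
    return total

# Two made-up indexed pages, split into components.
pages = {
    "a.html": {"title": "Python spiders", "heading": "Crawling",
               "body": "How spiders crawl the web."},
    "b.html": {"title": "Cooking", "heading": "Recipes",
               "body": "Spiders are not food."},
}

# Rank the pages for the query "spiders", highest score first.
results = sorted(pages, key=lambda u: score(pages[u], ["spiders"]), reverse=True)
```

Here `a.html` outranks `b.html` because it matches in both the title and the body, while `b.html` matches only in the body. Tuning such weights is exactly what "algos" disputes are about.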
Although search engines have changed a great deal, most still match results to searches in a manner similar to the following: