Ever wonder how search engines like Google, Yahoo!, Ask.com, and Microsoft Live Search gather and organize their data so quickly? Behold the wonder of technology!

First, search engines need to gather the data. An automated process (known as spidering) constantly crawls the Web, pulling page data back to the engines’ servers. Google calls its spider the Googlebot; you could refer to it as a spider, robot, bot, or crawler, but it’s all the same thing. Whatever you call the process, it pulls in masses of raw data and does so continuously. That’s why a change to your Web site might show up in search results within a day, or might take a few weeks to appear.
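To picture what a spider does, here’s a minimal sketch in Python. The page data and link structure (`TOY_WEB`) are invented stand-ins for real HTTP fetches and HTML parsing; production crawlers are vastly more sophisticated, but the core idea is the same: follow links, visit each page once.

```python
from collections import deque

# Toy "web": each URL maps to the links found on that page.
# A real crawler would fetch these pages over HTTP and parse the HTML.
TOY_WEB = {
    "a.html": ["b.html", "c.html"],
    "b.html": ["c.html"],
    "c.html": ["a.html"],
}

def crawl(start):
    """Breadth-first crawl from a start URL, visiting each page once."""
    seen = {start}       # URLs already discovered, so we never revisit
    queue = deque([start])
    visited_order = []
    while queue:
        url = queue.popleft()
        visited_order.append(url)   # here a real spider would store the page
        for link in TOY_WEB.get(url, []):
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return visited_order
```

Starting from `a.html`, this visits every reachable page exactly once, even though the toy pages link back to each other in a cycle.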

In the second step, search engines index the data to make it usable. Then, for each query a user performs, the engine applies an algorithm to decide which listings to display and in what order. These algorithms might be fairly simple, or multi-layered and complex.
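The classic data structure behind this step is an inverted index: instead of storing pages and scanning them at query time, the engine maps each word to the set of pages containing it. The sketch below is a deliberately simplified illustration (the page texts are made up, and real engines handle stemming, phrases, and much more), but it shows why lookups are fast.

```python
def build_index(pages):
    """Map each word to the set of page URLs containing it."""
    index = {}
    for url, text in pages.items():
        for word in set(text.lower().split()):
            index.setdefault(word, set()).add(url)
    return index

def search(index, query):
    """Return the pages that contain every word in the query."""
    words = query.lower().split()
    if not words:
        return set()
    # Start from the first word's pages, then intersect with the rest.
    results = set(index.get(words[0], set()))
    for word in words[1:]:
        results &= index.get(word, set())
    return results
```

Answering a query is then a handful of set intersections rather than a scan of every stored page, which is part of how results come back in a fraction of a second.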

At industry conferences, Google representatives have said that their algorithm analyzes more than 200 variables to rank the results for a given query. You’re probably thinking, “What are those variables?” Google won’t say exactly, and that’s what makes SEO a challenge. But you can make educated guesses. (The same goes for Yahoo! and the others.)
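One common guess about the overall shape of such an algorithm is a weighted combination of signals. The variable names and weights below are entirely hypothetical (Google discloses neither), but the sketch shows how many signals can collapse into a single ranking score:

```python
# Hypothetical ranking signals and weights -- NOT Google's actual
# variables, just an illustration of combining many signals into one score.
WEIGHTS = {
    "keyword_frequency": 0.5,
    "inbound_links": 0.3,
    "page_age": 0.2,
}

def score(page_signals, weights=WEIGHTS):
    """Weighted sum of a page's signal values; missing signals count as 0."""
    return sum(w * page_signals.get(name, 0.0) for name, w in weights.items())
```

Pages would then be sorted by this score for display. With 200+ signals whose weights keep changing, you can see why reverse-engineering the exact formula is impractical, and why educated guessing is the working reality of SEO.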

So can you design a Web site that gets the attention of all the search engines, no matter which algorithm they use? The answer is yes, to an extent, but it’s a bit of an art. This is the nuts and bolts of SEO.