A ‘spider’ is another term for a search engine robot or ‘bot’: a software program that search engines use to travel the World Wide Web, finding web content to access and scan. The scanned content is then indexed by the search engine according to its proprietary algorithm so that it can be retrieved easily in the future.
A big part of search engine optimization is preparing web pages to be search-engine-friendly, meaning the search ‘spiders’ can easily find, access and scan the web content. Websites that are poorly architected, with a bad linking structure both in inbound links and in internal links between pages, can impede the search engine ‘spider’ and even deny it access to the web content altogether.
Because search ‘spiders’ access web content by following links, a poorly designed linking structure can trap a search engine ‘spider’ in a sort of “web” or “eddy,” where it travels round and round from link to link without ever reaching the web content it needs to scan. If your web content is not scanned, it will not be indexed by the search engine and will not appear in the search engine results pages (SERPs).
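The link-following behavior described above can be sketched as a simple breadth-first crawl. In this illustration (the site map and page names are hypothetical, not from the original text), a page that no other page links to is never discovered by the spider, so it can never be indexed:

```python
from collections import deque

# Hypothetical in-memory "site": each page maps to the pages it links to.
site = {
    "/": ["/about", "/products"],
    "/about": ["/"],
    "/products": ["/products/widget"],
    "/products/widget": ["/products"],
    "/orphan": [],  # no page links here, so a spider never finds it
}

def crawl(start: str) -> set:
    """Breadth-first crawl: visit every page reachable by following links."""
    seen = {start}
    queue = deque([start])
    while queue:
        page = queue.popleft()
        for link in site.get(page, []):
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return seen

indexed = crawl("/")
# "/orphan" is unreachable from the homepage, so it is never crawled.
```

A real spider follows HTTP links rather than dictionary entries, but the principle is the same: content that cannot be reached by following links is invisible to the search engine.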
If you have secure web content that is not meant for public consumption, robot exclusion options allow you to instruct the search engine ‘spider’ not to attempt to access certain web pages. Search engine optimization prepares your web pages, including those with secure content, so the search engine ‘spider’ knows which pages to access and is able to do so.
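The most common robot exclusion option is a robots.txt file at the root of the site. As a minimal sketch (the `/private/` path and example.com URLs are hypothetical), Python's standard-library `urllib.robotparser` can show how a well-behaved spider interprets such rules:

```python
from urllib import robotparser

# Hypothetical robots.txt rules: keep spiders out of a /private/ area
# while leaving the rest of the site open to crawling.
rules = """
User-agent: *
Disallow: /private/
""".splitlines()

rp = robotparser.RobotFileParser()
rp.parse(rules)

# A compliant spider checks these rules before fetching a page.
print(rp.can_fetch("*", "https://example.com/private/report.html"))  # False
print(rp.can_fetch("*", "https://example.com/products.html"))        # True
```

Note that robots.txt is advisory: well-behaved spiders honor it, but it is not an access control mechanism, so genuinely sensitive content still needs real authentication.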
Search engine optimization also involves preparing your content and your site architecture so that your web pages are easily indexed and accurately matched to relevant searches. All of this serves the ultimate goal: being found by the target audience groups who are ready, willing and able to convert (take the desired action at your website).