robot help: SpiderMonkey robots.txt Fetcher

Check Your robots.txt File

It is important that every web site have a robots.txt file in its root directory, both to avoid the numerous 404 errors caused by robots requesting a missing file and to make the site more "robot-friendly". To manage a robot's crawl of your site, place this simple file (robots.txt) at the top level of your site (i.e., at the root URL) as an adjunct to properly written META tags.

# EXAMPLE robots.txt
User-agent: *  # Enter a specific user-agent, but "*" (all robots) is best.
Disallow: /cgi-bin/
Disallow: /images/
# EXAMPLE robots.txt to exclude a single robot
User-agent: Bad-Bot-From-Hades
Disallow: /
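Rules like the example above can be parsed and applied with Python's standard-library `urllib.robotparser`; a minimal sketch (the rules and URLs below just mirror the hypothetical example):

```python
from urllib import robotparser

# Rules mirroring the example robots.txt above.
rules = """\
User-agent: *
Disallow: /cgi-bin/
Disallow: /images/

User-agent: Bad-Bot-From-Hades
Disallow: /
""".splitlines()

rp = robotparser.RobotFileParser()
rp.parse(rules)

# A well-behaved robot asks before fetching each URL:
print(rp.can_fetch("*", "https://example.com/index.html"))        # True (allowed)
print(rp.can_fetch("*", "https://example.com/cgi-bin/search"))    # False (blocked)
print(rp.can_fetch("Bad-Bot-From-Hades", "https://example.com/")) # False (banned entirely)
```

This is exactly the check a compliant crawler performs: the `*` group applies to everyone without a more specific match, while the named group shuts out the one bad robot.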

So what does yours look like?

This SpiderMonkey resource will fetch and examine your web site's robots.txt, or let you know it is missing.
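A check along the same lines can be sketched in a few lines of Python; this is not SpiderMonkey's implementation, just an illustration (the domain and `https` scheme are assumptions):

```python
from urllib import error, request


def robots_url(domain: str) -> str:
    """Build the conventional robots.txt URL for a domain (assumes https)."""
    return f"https://{domain}/robots.txt"


def classify(status: int) -> str:
    """Map the HTTP status of the robots.txt request to a verdict."""
    if status == 200:
        return "present"
    if status == 404:
        return "missing"   # the 404 the file is meant to prevent
    return "unknown"


def check(domain: str) -> str:
    """Fetch the site's robots.txt and report whether it exists."""
    try:
        with request.urlopen(robots_url(domain)) as resp:
            return classify(resp.status)
    except error.HTTPError as exc:
        return classify(exc.code)


if __name__ == "__main__":
    # Hypothetical domain for illustration.
    print(check("example.com"))
```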
Please enter your domain name:

Try SpiderMonkey's robots.txt Generator.