Robots.txt Generator

Create a robots.txt file with User-agent rules, Allow/Disallow paths, and Sitemap references. Validate the output and simulate how crawlers will treat specific paths.

Preset templates

Advanced directives (optional)

Host — preferred domain for crawlers
Google ignores this directive and Yandex has deprecated it; only some bots still support it.
Clean-param — Yandex uses this directive to ignore tracking parameters (e.g. utm_source) in URLs

Crawler test simulator

See whether a path would be allowed or blocked by the generated robots.txt.

Blocked

Matched rule: Disallow: /admin

Valid
User-agent: *
Allow: /
Disallow: /admin
Disallow: /private

Sitemap: https://example.com/sitemap.xml
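The simulator's verdict can be reproduced locally with Python's standard-library `urllib.robotparser`. One caveat: Python's parser applies rules in file order (first match wins), whereas Google uses longest-match precedence, so a broad `Allow: /` listed first would override later Disallow lines in Python even though Google would still block them. The sketch below therefore omits the redundant `Allow: /` line.

```python
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: *
Disallow: /admin
Disallow: /private

Sitemap: https://example.com/sitemap.xml
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# Paths not matched by any Disallow rule are allowed by default.
print(rp.can_fetch("*", "https://example.com/"))                   # True
print(rp.can_fetch("*", "https://example.com/admin"))              # False
print(rp.can_fetch("*", "https://example.com/private/notes.txt"))  # False
```

Because matching is prefix-based, `Disallow: /admin` blocks `/admin` itself and everything beneath it.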

Frequently Asked Questions

What is robots.txt?

robots.txt is a plain-text file placed in your site root that tells search engine crawlers which URLs they can or cannot request. It uses directives like User-agent, Allow, and Disallow. It does not block access; it only states rules that well-behaved crawlers follow voluntarily. Sensitive content should be protected by authentication, not robots.txt.

Where do I put robots.txt?

Place robots.txt at the root of your domain so it is available at https://yoursite.com/robots.txt. Search engines look for this exact URL, and the file applies only to that host, so each subdomain needs its own copy. Our generator outputs the content; you save it as robots.txt and upload it to your server's document root.
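The "exact URL" rule means the file's location is derived only from the scheme and host, never from the page's path. A quick way to compute where a crawler will look, using only the standard library (the yoursite.com URLs are placeholders):

```python
from urllib.parse import urljoin

def robots_url(page_url: str) -> str:
    """Return the robots.txt URL a crawler would check for this page."""
    # Joining the absolute path "/robots.txt" keeps the scheme and host
    # and discards the page's own path and query string.
    return urljoin(page_url, "/robots.txt")

print(robots_url("https://yoursite.com/blog/post?id=1"))
# https://yoursite.com/robots.txt
print(robots_url("https://shop.yoursite.com/cart"))
# https://shop.yoursite.com/robots.txt  (each subdomain has its own file)
```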

What is the Sitemap directive?

The Sitemap line in robots.txt points crawlers to your XML sitemap, and it should be a full, absolute URL. This helps search engines discover pages that internal links might miss. You can list multiple Sitemap lines if you have several sitemaps. Use our XML Sitemap Generator to create the sitemap file itself.
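Crawlers collect Sitemap lines independently of any User-agent block. Python's `urllib.robotparser` exposes them via `site_maps()` (Python 3.8+), which is a quick way to sanity-check a generated file; the two sitemap URLs below are illustrative examples:

```python
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: *
Disallow: /admin

Sitemap: https://example.com/sitemap.xml
Sitemap: https://example.com/news-sitemap.xml
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# site_maps() returns every Sitemap URL in file order, or None if absent.
print(rp.site_maps())
# ['https://example.com/sitemap.xml', 'https://example.com/news-sitemap.xml']
```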

Should I disallow admin or private folders?

Generally, yes. Use Disallow for paths you do not want crawled (e.g. /admin, /private, /api). Keep in mind that Disallow prevents crawling, not indexing: a blocked URL can still appear in search results if other sites link to it. Also remember that robots.txt is publicly readable, so listing a path reveals that it exists. Never rely on robots.txt alone for security; use proper authentication and server-side access control for sensitive areas.

Can I have different rules for Googlebot vs Bingbot?

Yes. Add multiple User-agent blocks: User-agent: Googlebot for Google, User-agent: Bingbot for Bing, and User-agent: * for all other crawlers. Note that a crawler follows only the most specific block that matches it, so a Googlebot block replaces the * rules for Google rather than supplementing them. Our tool lets you add and edit multiple user-agent blocks.
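The "most specific block wins" behavior can be verified with `urllib.robotparser`: a crawler that matches a named block ignores the `*` block entirely. The paths below are made up for illustration.

```python
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: Googlebot
Disallow: /no-google/

User-agent: Bingbot
Disallow: /no-bing/

User-agent: *
Disallow: /private/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

print(rp.can_fetch("Googlebot", "/no-google/page"))   # False (its own block)
print(rp.can_fetch("Bingbot", "/no-bing/page"))       # False (its own block)
# Googlebot matched a specific block, so the * rules do NOT apply to it:
print(rp.can_fetch("Googlebot", "/private/page"))     # True
# Any other crawler falls back to the * block:
print(rp.can_fetch("SomeOtherBot", "/private/page"))  # False
```

This is why a named block must repeat any shared Disallow rules it still needs from the `*` block.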