Robots.txt is a file that tells search engine crawlers, spiders, and robots which parts of a site they may crawl and index and which parts they may not.
Basically, you don’t need a robots.txt file unless you have content that you don’t want Google or other search engines to index. If you want search engines to crawl your entire website, there is no need to create one.
Using a robots.txt file, you can easily keep a URL out of search engine results. Below are some helpful directives you can use in your robots.txt file to Disallow files and folders on your website.
Block your entire website from being crawled:
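The two standard lines below address every crawler (User-agent: *) and disallow the whole site:

User-agent: *
Disallow: /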
Block particular files from being crawled:
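To block a single file, list its path in a Disallow rule. The file name below is just a placeholder; replace it with the path of the file you want to hide:

User-agent: *
Disallow: /private-file.html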
Block particular folders from being crawled:
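To block a folder and everything inside it, end the path with a slash. Again, the folder name here is only an example:

User-agent: *
Disallow: /private-folder/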
You can easily create a robots.txt file by following the steps below.
1. Open any text editor, preferably Notepad.
2. Add the required directives, following the examples above (a complete sample file is shown after these steps).
3. Save the file as robots.txt and upload it to the root folder of your server (usually public_html), so that it is reachable at http://yoursite.com/robots.txt.
4. If your Disallowed files or folders have already been crawled by search engines, it can take a few days for them to disappear from search results.
5. Alternatively, you can use one of the online robots.txt generator tools.
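For reference, a complete robots.txt combining the rules above might look like this; the paths are placeholders that you would replace with your own files and folders:

User-agent: *
Disallow: /private-folder/
Disallow: /private-file.html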