

The robots meta tag will help you find a common language with search robots

Even without knowing what the robots meta tag is for, its name alone suggests that it has something to do with search engine robots. And indeed it does.

Adding the robots meta tag to a page's code lets you tell search bots how you would like the page's content, and the links on it, to be indexed.

This can come in handy in many situations: for example, when content is duplicated on the site, or when you want to prevent page weight from being passed through the links on a page.


How to use the robots meta tag

The page to which the desired indexing rules should apply must contain a correctly formed robots meta tag inside the <head> tag.

Its structure is quite simple:

[Image: structure of the robots meta tag]
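As a minimal sketch of that structure, the tag looks like this (the directive names here are placeholders, to be filled with values from the list below):

```html
<head>
  <!-- name says which bots the tag addresses; content holds the directives -->
  <meta name="robots" content="directive1, directive2">
</head>
```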
 
For the tag to be interpreted correctly by search engine bots, the value of its content attribute must consist of one or more (comma-separated) standard directives:

  1. index / noindex - index / ignore the contents of the page.
     
  2. follow / nofollow - analyze / ignore the links within the page.
     
  3. all / none - index / ignore the page completely.
     
  4. noimageindex - prohibits indexing of the images present on the page.
     
  5. noarchive - prohibits showing the "Cached" link in search results, which would otherwise let users view a copy of the page stored in the search engine's cache (even if the page is temporarily unavailable or has been deleted from the site).
     
  6. nosnippet - prohibits showing a fragment of text (snippet) describing the page's content under its title in search results.
     
  7. noodp - tells Google not to use the page's description from the Open Directory Project (aka DMOZ) as its snippet.

Features of using the robots meta tag

Some combinations of directives supported by this meta tag are interchangeable (equivalent). For example, to prohibit indexing of both the page's content and all the links on it, you can use either "noindex, nofollow" or the "none" directive.

[Image: equivalent directive options in the robots meta tag]
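A sketch of the equivalence just described: both of these tags give search bots the same instruction.

```html
<!-- Prohibit indexing of both the content and the links: these are equivalent -->
<meta name="robots" content="noindex, nofollow">
<meta name="robots" content="none">
```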
 
In the opposite case, when everything should be indexed ("index, follow" or "all" in the content attribute of the robots meta tag), a third option also appears: not adding this tag to the page code at all.

[Image: robots meta tag allowing full indexing]
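To sketch all three equivalent options for allowing full indexing:

```html
<!-- Either of these tags allows full indexing of the page... -->
<meta name="robots" content="index, follow">
<meta name="robots" content="all">
<!-- ...or you can simply omit the robots meta tag altogether -->
```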
 
There are also special cases where the indexing instructions should be addressed to the robot of one particular search engine only. To do this, instead of "robots" specify the name of the bot that the directives in the meta tag are meant for. For example, if Google should add the page's content to its index but not analyze the links on it:

[Image: separate instructions for Googlebot]
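A sketch of such a tag, addressed only to Google's crawler ("googlebot" is the name Google's documentation gives for its web crawler):

```html
<!-- Only Googlebot obeys this: index the content, ignore the links -->
<meta name="googlebot" content="index, nofollow">
```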
 
It is important that the content attribute contains no repeated or conflicting directives, since in that case the meta tag may be ignored by search bots.

Another point webmasters often argue about is the letter case in which the contents of the meta tag should be written. Some believe only uppercase is correct, others insist on lowercase. In fact, both options are acceptable, since the meta tag is case-insensitive.

Why do I need the robots meta tag if there is a robots.txt file?

Indeed, at first glance it may seem that this meta tag provides the same capabilities as configuring the robots.txt file. But there are still several differences, and they may well be reasons to prefer the meta tag:

  1. The robots meta tag allows fine-grained control over indexing: you can close the content but leave the links open ("noindex, follow" in the content attribute) and vice versa. Robots.txt offers no such option.

    [Image: content not indexed, but links still followed]
     
  2. In situations where you cannot access the site's root directory, editing robots.txt is impossible. That is when the meta tag of the same name comes to the rescue.
     
  3. In robots.txt you can close an entire directory from indexing, prohibiting bots from accessing all the pages it contains, while the meta tag would have to be added to each of those pages individually. In that case, it is more convenient to configure the file. But if some pages inside the directory still need to remain open, the meta tag is the more convenient tool.
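To illustrate the difference described in points 1 and 3, here is a sketch of the two tools side by side (the /catalog/ path is hypothetical):

```html
<!-- Per-page fine-tuning: the content is closed, but the links are still followed -->
<meta name="robots" content="noindex, follow">
```

```text
# robots.txt: closes a whole (hypothetical) directory at once,
# but cannot distinguish content from links
User-agent: *
Disallow: /catalog/
```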

To manage the indexing of website pages, it is permissible to use the robots meta tag and the robots.txt file simultaneously. They can give search bots instructions about different pages, or duplicate each other's commands.

But if they contain conflicting directives about the same pages, search engines will not always make the right decision: by default, the stricter instruction is chosen. As a result, the pages (or the links on them) about which robots.txt and the robots meta tag disagree will not be indexed.

Managing website indexing is a very useful tool for SEO promotion. The main thing is to learn to determine correctly in which situation each of the methods now known to you is the more effective choice.