Contents:
- Checking the site for duplicate pages
- Why duplicate pages are dangerous
- Why duplicate content appears on a site
- How to find duplicate pages
- Conclusions
Checking the site for duplicate pages
Is your site climbing the rankings too slowly? Does it keep dropping back to lower positions, even though both its internal and external optimization have been done to the highest standard?
This can happen for several reasons. The most common one is duplicate pages: pages with different addresses that fully or partially repeat the same content.
Why duplicate pages are dangerous
Duplicate pages make the text posted on them non-unique. In addition, search engines trust such a site less.
What other risks do duplicate pages create?
- Poorer indexing. If the site is large and its content is regularly duplicated (in some cases every page has 4-6 copies), indexing by search engines suffers.
Firstly, search engine robots waste crawl time indexing redundant pages.
Secondly, search engines constantly look for duplicate pages. When they find them, they lower the site's positions and increase the intervals between crawls of its pages.
- The wrong page chosen as relevant. Modern search engine algorithms are trained to recognize duplicate content among the pages they index, but the page they pick does not always match the one the site owner had in mind.
As a result, the page you planned to promote may not appear in the search results at all: the external link mass points to one page, while its duplicate is the one that gets shown.
The link profile and behavioral factors will then fluctuate because visitors are spread across the wrong pages. In other words, confusion arises, and it has a strongly negative effect on your site's ranking.
- Loss of natural links. A visitor who liked the information on your site may want to recommend it to someone. But if they read that information on a duplicate page, the link they share will be the wrong one.
Such valuable, and sometimes expensive, natural links will point to the duplicate pages, which significantly reduces the effectiveness of promotion.
Why duplicate content appears on a site
Most often, duplicate pages appear for one of the following reasons:
- The main mirror of the site is not specified, so the same page is available at different URLs: with the www. prefix and without it (a quick way to check this is sketched right after this list).
- Automatic generation by the site engine. This often happens with modern engines: they contain rules that create duplicate pages and place them under other addresses in their directories.
- Accidental webmaster errors that lead to duplicate content. A common result of such errors is several versions of the main page with different addresses.
- A change in the site structure that assigns new addresses to old pages, while copies with the old addresses are kept.
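One quick way to see whether the main mirror is configured is to request both address variants and check whether one of them redirects to the other. Below is a minimal sketch in Python; it assumes the third-party requests library is installed and uses example.com as a placeholder, so substitute your own domain and protocol.

```python
# Minimal sketch: compare where the www and non-www variants of a domain end up.
# Assumes the third-party "requests" library; example.com is a placeholder domain.
import requests

def check_mirror(domain: str) -> None:
    """Report which host each variant of the domain finally resolves to."""
    for url in (f"https://{domain}/", f"https://www.{domain}/"):
        try:
            response = requests.get(url, allow_redirects=True, timeout=10)
        except requests.RequestException as error:
            print(f"{url} -> request failed: {error}")
            continue
        final = response.url
        verb = "redirects to" if final.rstrip("/") != url.rstrip("/") else "serves"
        print(f"{url} {verb} {final} (HTTP {response.status_code})")

if __name__ == "__main__":
    # If both variants return 200 without converging on a single host,
    # the main mirror is probably not configured.
    check_mirror("example.com")
```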
How to find duplicate pages
One of these simple methods will help you check your site for duplicate pages:
1. Analyzing the data in the search engines' webmaster services. After adding your site to Google Search Console, you get access to the HTML Improvements report, where duplicated metadata points you to pages with duplicate content.
In Yandex.Webmaster, duplicates can be checked under "Indexing" > "Pages in Search": on that page, filter by "Excluded Pages" > "Duplicates".
2. Analyzing the indexed pages. To obtain their list, use the special search engine operators: site: for Google and host: for Yandex, followed by the address of the site.
The resulting output will help you spot duplicate pages, which repeat each other's titles and snippets.
3. Searching for text fragments. The same operators are used (site: for Google and host: for Yandex), after which you specify the address of the site and, in quotation marks, a fragment of the text. The result can reveal both full duplicate pages and partial duplication of content (a sketch for building such a query is given after this list).
4. Using special programs and services. For example, the Netpeak Spider program can detect duplicate pages, text, meta tags and headings. All discovered duplicates will need to be removed.
If you do not want to buy the Netpeak Spider desktop program, the multifunctional SEO platform Serpstat will also help you find duplicate pages; it works online and has a mobile version.
The service finds duplicate titles and descriptions, duplicate H1 headings, and pages that have more than one title or more than one H1 heading.
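For the text-fragment search from point 3, the query can also be assembled programmatically. The following sketch in Python builds a Google search URL with the site: operator and an exact-match fragment; example.com and the sample fragment are placeholders to be replaced with your own domain and text.

```python
from urllib.parse import quote_plus

def duplicate_check_url(domain: str, fragment: str) -> str:
    """Build a Google search URL with the site: operator and an exact-match text fragment."""
    query = f'site:{domain} "{fragment}"'
    return "https://www.google.com/search?q=" + quote_plus(query)

# Open the printed URL in a browser; more than one result usually means duplicated content.
print(duplicate_check_url("example.com", "a sentence copied from the page you are checking"))
```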
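If you prefer to script this kind of check yourself, here is a minimal sketch in Python that collects the title, meta description and first H1 of a list of pages and reports the values that repeat. It assumes the requests and beautifulsoup4 libraries are installed; the URL list is a placeholder to be replaced with pages from your own sitemap, and the sketch is not a substitute for a full crawler.

```python
# Minimal duplicate-metadata checker: assumes "requests" and "beautifulsoup4" are installed.
from collections import defaultdict
import requests
from bs4 import BeautifulSoup

URLS = [  # placeholder list; in practice take it from your sitemap
    "https://example.com/",
    "https://example.com/about/",
    "https://example.com/contacts/",
]

def extract_fields(html: str) -> dict:
    """Pull out the title, meta description and first H1 of a page."""
    soup = BeautifulSoup(html, "html.parser")
    description = soup.find("meta", attrs={"name": "description"})
    return {
        "title": soup.title.get_text(strip=True) if soup.title else "",
        "description": description.get("content", "").strip() if description else "",
        "h1": soup.h1.get_text(strip=True) if soup.h1 else "",
    }

def report_duplicates(urls: list[str]) -> None:
    """Group pages by each field and print the values that occur more than once."""
    seen = {field: defaultdict(list) for field in ("title", "description", "h1")}
    for url in urls:
        fields = extract_fields(requests.get(url, timeout=10).text)
        for field, value in fields.items():
            if value:
                seen[field][value].append(url)
    for field, values in seen.items():
        for value, pages in values.items():
            if len(pages) > 1:
                print(f"Duplicate {field}: {value!r} on {pages}")

if __name__ == "__main__":
    report_duplicates(URLS)
```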
Conclusions
It is advisable to run the checks described above from time to time so that duplicate content does not become an unexpected reason for a drop in your site's rankings. And do not forget that full duplicate pages are not the only problem.
Duplicate H1 headings, titles and descriptions, as well as repeated blocks of content such as reviews and comments, are also highly undesirable.
We hope this article was useful to you. Remember to share a link to it with anyone else who might find it interesting!