- Ways to Resolve Duplicate Content
Posted by : Unknown
Thursday, February 26, 2015
With Google's continuing algorithm updates, duplicate content has become a major topic in the search engine optimization (SEO) world. Duplicate content is content that appears on the Internet in more than one place (that is, at more than one URL), making it difficult for search engines to decide which version is most relevant to a particular search query. Faced with this dilemma, search engines rarely show multiple duplicate pieces of content and are forced to choose which version is most likely the original.
For site owners, duplicate content is an issue that needs to be fixed in order to improve or maintain site rankings and to prevent traffic losses. Here are some ways to resolve duplicate content:
301 redirect.
First, check the page authority (PA) of the competing pages and see whether one has a higher PA than the other; tools such as Open Site Explorer can help with this. Then set up a 301 redirect from the duplicate page to the original page. That way, the two pages no longer compete with one another in the search results.
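For example, on an Apache server a 301 redirect can be set up with a one-line rule in the site's .htaccess file (the URLs below are placeholders, not from the original post):

```apache
# Hypothetical example: permanently redirect the duplicate URL
# to the original page using Apache's mod_alias.
Redirect 301 /duplicate-page/ https://www.example.com/original-page/
```

Other servers (nginx, IIS) have equivalent directives; the key point is that the redirect returns an HTTP 301 status, which tells search engines the move is permanent.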
rel=canonical.
A rel=canonical tag is a special tag inserted into the HTML head that communicates to search engine bots how a piece of content relates to others on your site. It tells the bots which piece of content is the original, or primary, one and which are duplicates, so the bots pass over the duplicates and index and assign link credit only to the primary piece. Add this tag to the HTML head of a web page to tell search engines that it should be treated as a copy of the "canon," or original, page:
noindex,
follow. Basically, this code snippet tells search engine robots not to record the information on the page (the noindex part) but to still follow the links on that page (the follow part). Add "noindex, follow" to the meta robots tag to tell search engines not to include the duplicate pages in their indexes, but still to crawl their links. Here's what it should look like:
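A minimal example of the meta robots tag described above:

```html
<!-- Place inside the <head> of the duplicate page:
     noindex keeps it out of the index, follow lets bots crawl its links. -->
<meta name="robots" content="noindex, follow">
```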


