Have you ever needed to prevent Google from indexing a particular URL on your web site and displaying it in their search engine results pages (SERPs)? If you manage web sites long enough, a day will likely come when you need to know how to do this.
The three methods most commonly used to prevent the indexing of a URL by Google are as follows:
Using the rel="nofollow" attribute on all anchor elements used to link to the page, to prevent the links from being followed by the crawler.
Using a disallow directive in the site's robots.txt file to prevent the page from being crawled and indexed.
Using the meta robots tag with the content="noindex" attribute to prevent the page from being indexed.
While the differences between the three methods appear subtle at first glance, the results can vary dramatically depending on which method you choose.
Using rel="nofollow" to prevent Google indexing
Many inexperienced webmasters attempt to prevent Google from indexing a particular URL by using the rel="nofollow" attribute on HTML anchor elements. They add the attribute to every anchor element on their site used to link to that URL.
Including a rel="nofollow" attribute on a link prevents Google's crawler from following the link, which, in turn, prevents it from discovering, crawling, and indexing the target page. While this method might work as a short-term measure, it is not a viable long-term solution.
The flaw with this approach is that it assumes all inbound links to the URL will include a rel="nofollow" attribute. The webmaster, however, has no way to prevent other web sites from linking to the URL with a followed link. So the chances that the URL will eventually get crawled and indexed using this method are quite high.
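For illustration, a nofollowed link looks like this (the URL and anchor text here are hypothetical):

```html
<!-- The rel="nofollow" attribute asks crawlers not to follow this link.
     "/private-page/" is a hypothetical URL used only as an example. -->
<a href="/private-page/" rel="nofollow">Private page</a>
```

Every anchor element pointing at the URL, on every page of the site, would need this attribute for the method to have any chance of working.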
Using robots.txt to prevent Google indexing
Another common method used to prevent the indexing of a URL by Google is to use the robots.txt file. A disallow directive can be added to the robots.txt file for the URL in question. Google's crawler will honor the directive, which will prevent the page from being crawled and indexed. In some cases, however, the URL can still appear in the SERPs.
Sometimes Google will display a URL in their SERPs even though they have never indexed the contents of that page. If enough web sites link to the URL, Google can often infer the topic of the page from the link text of those inbound links. As a result, they will display the URL in the SERPs for related searches. So while using a disallow directive in the robots.txt file will prevent Google from crawling and indexing a URL, it does not guarantee that the URL will never appear in the SERPs.
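A minimal robots.txt with such a directive might look like the following (the path is a hypothetical example):

```
# robots.txt — placed at the root of the site.
# Asks all crawlers (User-agent: *) not to fetch the hypothetical URL below.
User-agent: *
Disallow: /private-page/
```

Note that Disallow only asks compliant crawlers not to fetch the page; as explained below, it does not remove the URL from search results.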
Using the meta robots tag to prevent Google indexing
If you need to prevent Google from indexing a URL while also preventing that URL from being displayed in the SERPs, the most effective approach is to use a meta robots tag with a content="noindex" attribute within the head element of the web page. Of course, for Google to actually see this meta robots tag, they first need to be able to discover and crawl the page, so do not block the URL with robots.txt. When Google crawls the page and discovers the meta robots noindex tag, they will flag the URL so that it will never be shown in the SERPs. This is the most effective way to prevent Google from indexing a URL and displaying it in their search results.
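As a sketch, the tag sits in the head element of the page you want kept out of the results (the title here is just an example):

```html
<head>
  <!-- Tells crawlers not to include this page in their search index -->
  <meta name="robots" content="noindex">
  <title>Private page</title>
</head>
```

Because the page must remain crawlable for the tag to be seen, make sure the same URL is not also listed under a Disallow directive in robots.txt.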