Search Engine Protocols

SEOs tend to use plenty of tools. Some of the most helpful are provided by the search engines themselves. Search engines want webmasters to create sites and content in accessible ways, so they provide a variety of tools, analytics and guidance. These free resources provide data points and opportunities for exchanging information with the engines that are not provided anywhere else.

1. Sitemaps
Think of a sitemap as a list of files that gives hints to the search engines on how they can crawl your website. Sitemaps help search engines find and classify content on your site that they may not have found on their own. Sitemaps also come in a variety of formats and can highlight many different types of content, including video, images, news and mobile.
You can read the full details of the protocol at Sitemaps.org. Additionally, you can build your own sitemaps at XML-Sitemaps.com. Sitemaps come in three varieties:

XML
Extensible Markup Language (Recommended Format)

(1) This is the most widely accepted format for sitemaps. It is extremely easy for search engines to parse and can be created by a plethora of sitemap generators. Additionally, it allows for the most granular control of page parameters. A minimal example appears after this list.

(2) Relatively large file sizes. Since XML requires an open tag and a close tag around every element, file sizes can get very large.

RSS
Really Simple Syndication or Rich Site Summary

(1) Easy to maintain. RSS sitemaps can easily be coded to automatically update when new content is added.

(2) Harder to manage. Although RSS is a dialect of XML, it is actually much harder to manage because of its updating properties.

TXT (Text File)
(1) Extremely easy. The text sitemap format is one URL per line, up to 50,000 lines.

(2) Does not provide the ability to add meta data to pages.
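Here is the minimal XML sitemap mentioned above, following the sitemaps.org schema. The URL, date and priority values are placeholders you would replace with your own pages.

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2024-01-01</lastmod>
    <changefreq>weekly</changefreq>
    <priority>0.8</priority>
  </url>
</urlset>

Only the loc element is required; lastmod, changefreq and priority are the optional page parameters referred to above.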

2. Robots.txt
The robots.txt file, a product of the Robots Exclusion Protocol, is a file stored in a website's root directory (e.g., www.google.com/robots.txt). The robots.txt file gives instructions to automated web crawlers visiting your site, including search spiders.

By using robots.txt, webmasters can tell search engines which areas of a site they would like to disallow bots from crawling, as well as indicate the locations of sitemap files and crawl-delay parameters. You can read more details about this at the robots.txt Knowledge Center page.

The following commands are available (a sample file follows this list):
Disallow: Prevents compliant robots from accessing pages or folders.
Sitemap: Indicates the location of a website's sitemap or sitemaps.
Crawl Delay: Indicates the delay (in seconds, for the crawlers that support it) that a robot should wait between requests to a server.
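The sample file below shows how these directives fit together. The domain, path and delay value are placeholders, and the Crawl-delay line only matters to engines that honor it.

User-agent: *
Disallow: /private/
Crawl-delay: 10

Sitemap: https://www.example.com/sitemap.xml

The User-agent line scopes the rules; the asterisk applies them to all compliant crawlers.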

3. Meta Robots
The meta robots tag creates page-level instructions for search engine bots. It should be included in the head section of the HTML document.
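As a simple illustration, a page that should stay out of the index while its links are still followed could carry a tag like this in its head section (the directive values shown are standard; adjust them to your needs):

<head>
  <meta name="robots" content="noindex, follow">
</head>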

4. Rel=”Nofollow”
Remember how links act as votes? The rel=nofollow attribute allows you to link to a resource while removing your "vote" for search engine purposes. Literally, "nofollow" tells search engines not to follow the link, although some engines still follow them to discover new pages. These links certainly pass less value (and in most cases no link juice) than their followed counterparts, but they are useful in the various situations where you link to an untrusted source.
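In HTML, the attribute sits on the anchor tag itself. The URL below is a placeholder for whatever untrusted source you are linking to:

<a href="https://www.example.com/untrusted-page" rel="nofollow">Example link</a>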

5. Rel=”canonical”
Often, two or more copies of the exact same content appear on your website under different URLs.
The canonical tag solves this problem by telling search robots which page is the singular "authoritative" version that should count in web results.
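For example, if the same article is reachable both with and without tracking parameters, each duplicate URL can point to the preferred version from its head section (the URL below is a placeholder):

<link rel="canonical" href="https://www.example.com/seo-guide/">

Search engines then consolidate ranking signals from the duplicates onto that single canonical URL.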
