SEO Explained

This article explains three basic concepts behind how modern search engines (Google, Yahoo!, Bing) obtain and rank their search results in order to build their SERPs.

Even though this article contains some valid tips for positioning your website, it focuses on basic concepts and tries to answer the question: what is SEO? This understanding will later help you make sense of most SEO articles found online, and can serve as a criterion when hiring web development services (to ensure that the site is suitable for search engines).

Google accounts for around 90% of searches. However, most of the concepts explained in this post are valid for all modern search engines (Yahoo!, Live Search, Bing). Search engines vary in the algorithms they use to rank the relevance of the results they retrieve, but these concepts apply to all of them.

Three Main Factors of SEO

The three main factors with the greatest impact on search engine optimisation results are: Indexability, Popularity and Relevance.


Indexability

Google is constantly scouring the World Wide Web through a program called Googlebot. Google keeps a copy (cache) of the Internet: the websites that Googlebot crawls. Google analyses these copies and generates the index used to respond to user queries. The first step towards good web positioning is to be in the Google index. To achieve this, make sure that Googlebot can reach your entire website and make a copy of every page that composes it. Please note the following:

Googlebot moves by following links (hyperlinks). Make sure that every page of your site is linked from some other part of the website. In general, Google does not take your website into account until it has at least one link from an external website. If nobody links to you, you will not appear in Google's results. However, there are procedures, such as creating a sitemap of the website or registering your URL manually, that allow a page without inbound links to appear in the Google index.
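For reference, a sitemap is just an XML file that lists the URLs of your site following the sitemaps.org protocol. A minimal sketch (the domain and file names here are illustrative, not from this article):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- One <url> entry per page you want crawled -->
  <url>
    <loc>https://www.example.com/</loc>
  </url>
  <url>
    <loc>https://www.example.com/about.html</loc>
  </url>
</urlset>
```

The file is typically uploaded to the root of the site and submitted through Google's webmaster tools.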

It is important to bear in mind that Googlebot sees your site as a text browser such as Lynx would. Images therefore cannot be read unless they have an alt attribute, even though websites with images are more attractive than websites without them. Googlebot is also unable to properly analyse Flash: it does not read all the text or follow the links inside a Flash movie. This has improved slightly over the years and there are some workarounds, but for now it is not advisable to build full-Flash websites. It is much more appropriate to create pages in HTML and insert Flash movies as animations where needed. Furthermore, Googlebot does not interpret JavaScript: any HTML generated by a script is invisible to Google.
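To illustrate the point about images, only the second image below contributes any text a crawler can read (file names and alt text are made up for the example):

```html
<!-- No alt attribute: the image is invisible to a text-only crawler -->
<img src="photo01.jpg">

<!-- With an alt attribute: the crawler can read the description -->
<img src="oak-table.jpg" alt="Oak dining table with six chairs">
```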

The content of a video can also present problems. If you embed a video in your website, it is recommended that the page contains text describing what happens in it (your users, especially visually impaired ones, will thank you for it). The most appropriate technology for designing easily indexable websites is XHTML combined with CSS for styling. This way, the web pages hold the content, while the style lives in a separate file (the style sheet) that Googlebot does not need to read.

You can also use tools like SEOmoz's Crawl Test or Google's webmaster tools to find out whether Googlebot is encountering problems crawling your site. Once the website has been indexed, two basic concepts give a general idea of how search engines rank their results: Popularity (PageRank) and Relevance. The goal of search engines is always to surface the information most relevant to the search terms we enter.

Popularity (PageRank)

All modern search engines determine the importance of web pages through different applications of social network analysis theory. Its basis is that, within a particular social network, the importance of an individual can be measured by the number of citations received from other network members.

On the Internet, where the individuals are websites, popularity is determined by the number of links a site receives and its mentions on social networks.

The more links point to a website, the greater its popularity. Google's own popularity algorithm is called PageRank (PR). A quick SEO tip: links pass more popularity ("link juice") when they come from valuable pages with a high PageRank than when they come from less popular sites. It is therefore not just the number but the quality of links that counts: a single link from a very popular page can do far more for the popularity of your website than links from many pages with very low popularity. On the other hand, if the page that links to your site has many outbound links, the popularity it transmits is divided between them, so even a link from a popular site may be worth less than one from a less popular page with few outbound links.
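This "division between outbound links" is visible in the originally published PageRank formula (whatever Google runs today is a refinement that has not been made public):

```latex
PR(A) = (1 - d) + d \left( \frac{PR(T_1)}{C(T_1)} + \cdots + \frac{PR(T_n)}{C(T_n)} \right)
```

where $T_1, \ldots, T_n$ are the pages linking to page $A$, $C(T_i)$ is the number of outbound links on $T_i$, and $d$ is a damping factor usually set to 0.85. Each linking page's PageRank is divided by its number of outbound links before being passed on, which is exactly why a link from a page with many outbound links transmits less popularity.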

The exact PageRank value of a site is not disclosed to the public, and it is updated frequently (perhaps every few days or weeks). Its value can run to hundreds, thousands or even millions, depending on the number of links the website receives. What is known is the public PageRank value, a number between 0 and 10 that Google publishes for general knowledge. It can be seen by installing the Google Toolbar in your browser: a small green bar indicates the public PageRank of the page you are visiting. This public value is only updated every 3-6 months, so it is not very reliable. If a page is less than three months old, it is very likely to show PR 0. In general, unless you are talking to someone with deep SEO knowledge, when people refer to PageRank they mean this public value. Google is determined to establish the link as the unit of measure of recognition on the web. The rel="nofollow" attribute may also be useful for links to pages of your website that do not need to acquire much relevance in Google.


Relevance

The concept of popularity (PageRank) alone cannot explain the search results. If PageRank were the only factor search engines took into account, the results shown to us would always belong to the most popular (most linked) sites. We know from experience that this is not the case: the first results for a specific search often belong to smaller websites that are more closely related to the topic. In those cases, the page is simply more relevant to the search being conducted, because it contains the words that were entered.

How does Google determine whether a document is pertinent to a specific search? Google combines several indicators to determine the relevance of a website. Nobody except Google itself knows the exact weight of each factor. However, a few indicators have been demonstrated, by trial and error, to feed relevance into the algorithm. Some of them are:

Language of your browser: Web pages written in the language your browser is configured for are considered more relevant to your search than pages written in other languages. The first few pages of results that Google returns therefore tend to be in your language.

Keyword-rich text: One of the main sources of information Google uses to determine whether a page is relevant to a particular search is the website itself. Google analyses the page to find out what kind of information it presents and which words are repeated most. Therefore, if you want to position a website for a particular word or phrase, it is appropriate for that word or phrase to appear on your website frequently. However, the frequency with which you repeat a term matters less than where in the page it appears (and indeed Google has anti-spam mechanisms that prevent gaining an advantage by mindlessly repeating words).

Search engines consider words that appear in certain parts of the site more important than the rest:

The page title (the text between <title> and </title>, shown at the top of the browser window) is the most important element of the page. For a website to appear at the top of the SERPs, it is essential that the targeted phrase or keywords appear in the title of the page.

Internal headings of the website (<h1>, <h2>, <h3>…). Text that appears in <h1> is more important than text in <h2>, text in <h2> more important than text in <h3>, and so on.

Words in bold (between <strong> tags) are more important than those that are not. The text that appears in hyperlinks (inside <a> tags) is also more important than the ordinary text of the page. Do not forget that, besides the text between <a> and </a>, hyperlinks have a title attribute containing explanatory text that some browsers show when the cursor hovers over the link. Search engines consider both the text between <a> and </a> and the text of the title attribute more important than ordinary text.
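Putting these on-page elements together, a page targeting the phrase "cheap furniture" (a hypothetical example, not from this article) might be marked up like this:

```html
<html>
<head>
  <!-- The title is the most important on-page element -->
  <title>Cheap Furniture - Oak Tables and Chairs</title>
</head>
<body>
  <!-- h1 carries more weight than h2, h2 more than h3, and so on -->
  <h1>Cheap Furniture</h1>
  <h2>Oak Dining Tables</h2>
  <p>Our <strong>cheap furniture</strong> range includes
     <!-- both the link text and the title attribute carry weight -->
     <a href="/oak-tables.html" title="Oak dining tables on sale">oak tables</a>.
  </p>
</body>
</html>
```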

Alternative text: Another factor that determines the relevance of a website is the alternative text of its images (inserted in the alt attribute). The importance Google gives to an image's alternative text depends on the image's size, and it matters most when images play a predominant role over text. Overall, words that appear early in the page are also more relevant than those appearing later. The same goes for the URL: words that appear in a website's URL increase its relevance for searches containing those words. Today Google treats slashes / (marking directories), dots . (separating the extension or subdomains) and hyphens - (which may be used inside file or directory names) as word separators. Underscores _, however, are not treated as separators, although Matt Cutts himself has confirmed that Google is working on it.
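As an illustration (the domain is hypothetical), the first URL below would be read as the separate words "cheap", "furniture", "oak" and "tables", while in the second each underscored pair is treated as a single unknown word:

```
https://www.example.com/cheap-furniture/oak-tables.html
https://www.example.com/cheap_furniture/oak_tables.html
```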

Anchor text: The text of external links pointing to a website is probably one of the most influential factors in determining its relevance for a particular search. The links a website receives are an external element that, in principle, is outside the direct control of the site's creator. They are therefore a much more reliable indicator than the website itself for determining its relevance to a search. That is, if our website receives many external links with the text "cheap furniture", this will increase its relevance for searches for the phrase "cheap furniture". Even links carrying rel="nofollow", which as we have seen are not considered by Google when determining PageRank, are taken into account when determining whether a website is appropriate for a specific term; in other words, nofollow links can affect the SERPs.
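In markup terms, the difference between a normal link and a nofollow link is a single attribute (the URL and anchor text are illustrative):

```html
<!-- Normal link: passes PageRank and anchor-text relevance -->
<a href="https://www.example.com/">cheap furniture</a>

<!-- Nofollow link: not counted for PageRank, though per the
     discussion above it may still influence relevance -->
<a href="https://www.example.com/" rel="nofollow">cheap furniture</a>
```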

Relevance of the web pages that link to you: SEO specialists often say that links received from web pages with themes similar to ours are more valuable than links from unrelated pages. When discussing PageRank we saw that the popularity of the pages linking to ours was an important factor. Something similar happens with relevance: links from relevant websites make our website more relevant and more likely to appear for the intended search terms and keywords.

Moreover, internal links (received from other pages on the same website) are also taken into account (both for relevance and for popularity). Choosing the appropriate text for each internal link, following the relevance criteria we have explained, is one of the fundamental tasks of the SEO professional.

Reasons for penalty

Always keep in mind the three aspects we have identified as critical to the positioning of your website (indexability, popularity and relevance). However, your site may be penalised for some of the following reasons:

A single computer can host several domains and websites, and Google takes into account when a web page gets its links from the same IP address. If your site receives most of its links from a single IP address (usually several websites controlled by the same person), it can be penalised by Google for being considered a link farm (a group of websites that link to each other to increase their PageRank artificially).

Google also takes domain ownership into account. Domains are always registered in the name of a person or entity, and that information is public and accessible via Whois. Google is believed to downgrade the value of cross-links between websites owned by the same person or company; this is not a penalty but an adjustment of the value of those links. There is also speculation that Google assesses aspects such as the age of a domain and the age of the links pointing to it. So if your website is very new, you will not be penalised for it, but it may be harder to reach high positions in the search results.

Cloaking is a practice whereby, using web design techniques, Google's crawlers are shown a different website from the one shown to human visitors. This practice is penalised by Google; it is used to fraudulently increase a page's relevance for search terms that do not appear on the page displayed to users.

Finally, you can read this article about the types of Google penalties.


Positioning Strategies

SEO professionals often speak of positioning strategies. Strategies are needed because there is no single way of doing things when positioning a website. The three principles we have explained (indexability, popularity and relevance) are at the heart of how search engines function, and although the specific indicators of relevance or popularity may change over time (and indeed do), the general principles remain the same.

Although there is no single way of doing things, there are some general tips that any professional would agree on for positioning your website (or blog). Create quality content: guides that explain things, funny original videos, illustrations, product reviews, collections of links. People tend to link to content that seems worthy of a link, and links are the foundation of good SEO. Simple, right? Creating quality content is the first step of any diffusion effort. Some factors of content quality: usefulness to the user, visual appeal (design), humour, playfulness, a good story…

Find people with pages on similar themes. Not only will you improve your content by sharing it with others; you are also more likely to get links from your subject area.