Now that you know what SEO is and which main factors Google takes into account when ranking a website, you still need to learn what to do so that your page has a chance of ranking well in the SERPs.
In this chapter we will discuss how to optimize the main ranking factors, as well as the main SEO problems that arise when optimizing a website and their possible solutions.
We will divide the topics of this chapter into 4 large blocks:
- Accessibility
- Indexability
- Content
- Meta tags
1. Accessibility
Aspects to take into account for good website accessibility:
- Robots.txt file
- Meta robots tag
- HTTP status codes
- Sitemap
- Web structure
- JavaScript and CSS
- Web speed
Robots.txt file
The robots.txt file tells search engines which parts of the website they should not access or index. It is very useful for keeping pages that we do not want to appear in the search results out of Google's reach.
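As a sketch, a minimal robots.txt might look like this (the blocked path and sitemap URL are illustrative):

```
User-agent: *
Disallow: /private/

Sitemap: http://www.myexample.com/sitemap.xml
```

The `User-agent: *` line applies the rules to all crawlers; each `Disallow` line blocks one path.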
Meta robots tag
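The meta robots tag goes in the page's <head>. As an illustration, this standard pattern keeps a page out of the index while still letting crawlers follow its links:

```html
<meta name="robots" content="noindex, follow" />
```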
HTTP status codes
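HTTP status codes group by their first digit (2xx success, 3xx redirects, 4xx client errors, 5xx server errors). A small sketch, not tied to any particular crawler, for classifying the codes collected during a crawl (the URLs are invented for illustration):

```python
def classify_status(code: int) -> str:
    """Return the family an HTTP status code belongs to, based on its first digit."""
    families = {2: "OK", 3: "redirect", 4: "client error", 5: "server error"}
    return families.get(code // 100, "other")

# Pages returning 4xx/5xx need fixing; 3xx redirects should point at final URLs.
for url, code in [("/", 200), ("/old-page", 301), ("/missing", 404)]:
    print(url, code, classify_status(code))
```

Running this over every URL of the crawl quickly surfaces broken pages and redirect chains.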
Sitemap
Important points to check regarding the sitemap; it should:
- Follow the protocols; otherwise Google will not process it properly
- Be uploaded to Google Webmaster Tools
- Be up to date: when you update your website, make sure all the new pages are in your sitemap
- Have all of its pages indexed by Google
If the website does not have a sitemap, we must create one by following four steps:
- Generate an Excel file with all the pages that we want to be indexed; for this we will use the same Excel file that we created when checking the HTTP response codes
- Create the sitemap. For this we recommend the Sitemap Generators tool (simple and very complete)
- Compare the pages in your Excel file with those in the sitemap, and remove the ones that we do not want to be indexed
- Upload the sitemap through Google Webmaster Tools
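A minimal sitemap following the sitemaps.org XML protocol looks like this (the URL and date are illustrative):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.myexample.com/</loc>
    <lastmod>2024-01-01</lastmod>
  </url>
</urlset>
```

Each page to be indexed gets its own `<url>` entry inside the `<urlset>`.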
Web structure
Our advice is to make a diagram of the entire website in which you can easily see its levels, from the home page down to the deepest page, and calculate how many clicks it takes to reach each one.
Find out what level each page is on, and whether it has links pointing to it, using Screaming Frog again.
JavaScript and CSS
Although in recent years Google has become smarter at reading these technologies, we must be careful: JavaScript can hide part of our content, and CSS can rearrange it, showing users a different order from the one Google reads in the source code.
There are two methods of knowing how Google reads a page:
- Plugins
- Command "cache:"
Plugins
Plugins like Web Developer or Disable-HTML help us see how a search engine "crawls" the web. To use them, open one of these tools and disable JavaScript: all drop-down menus, links and text must remain readable by Google.
Then deactivate the CSS as well, since we want to see the real order of the content, and CSS can change it completely.
Command "cache:"
Another way to know how Google sees a website is through the "cache:" command.
Enter "cache:www.myexample.com" in the search engine and click on "Text-only version". Google will show you a snapshot where you can see how it reads the website and when it last accessed the page.
Of course, for the "cache:" command to work properly, our pages must be previously indexed in Google's indexes.
Once Google first indexes a page, it determines how often it will revisit it for updates. This will depend on the authority and relevance of the domain to which that page belongs and the frequency with which it is updated.
Either through a plugin or the "cache:" command, make sure you meet the following points:
- You can see all the links in the menu.
- All links on the web are clickable.
- All the text remains visible with CSS and JavaScript disabled.
- The most important links are at the top.
Loading speed
Crawlers have limited time to spend on each website: the faster each page loads, the more pages they will manage to crawl. Fast pages also improve the user experience, which Google rewards.
2. Indexability
To analyze the indexability of a website, compare the number of pages Google has indexed (searching "site:www.myexample.com") with the number of pages found by your crawl. Three situations can occur:
- The numbers are very similar: everything is in order.
- The number that appears in the Google search is lower, which means that Google is not indexing many of the pages because it cannot access all of them. To solve this, review the accessibility part of this chapter.
- The number that appears in the Google search is higher, which means your website has a duplicate content problem. The reason there are more indexed pages than actually exist on your website is most likely that you have duplicate content, or that Google is indexing pages you do not want indexed.
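The comparison behind the last two cases can be sketched as a set difference between the URLs your crawl found and those a "site:" search returns (both lists below are invented for illustration):

```python
# Hypothetical URL lists: one from a crawl, one from a "site:" check in Google.
crawled = {"/", "/about", "/products", "/products?sort=price"}
indexed = {"/", "/about", "/products", "/products?sort=price", "/products?sort=name"}

# Pages the crawl found but Google did not index -> likely accessibility issue.
not_indexed = crawled - indexed

# Pages Google indexed that the crawl did not find -> likely duplicate content.
extra_indexed = indexed - crawled

print(sorted(not_indexed))
print(sorted(extra_indexed))
```

Here the extra indexed URL is a parameterized duplicate, the typical culprit.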
Duplicate content
Duplicate content occurs when the same (or very similar) content is accessible at more than one URL. The most common causes are:
- "Canonicalization" of the page
- Parameters in the URL
- Pagination
Canonicalization
This is the most common case: the home page can be reached through several URLs (for example "myexample.com", "www.myexample.com" or "www.myexample.com/index.html"). Solution:
- Do a redirect on the server to make sure only one version of the page is shown to users.
- Define which subdomain we want to be the main one ("www" or "non-www") in Google Webmaster Tools.
- Add a rel="canonical" tag in each version pointing to the one considered correct.
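For example, each duplicate version can declare the preferred URL in its <head> (the domain is illustrative):

```html
<link rel="canonical" href="http://www.myexample.com/" />
```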
Parameters in the URL
Session IDs, tracking tags and filtering parameters can generate many different URLs for the same content.
Solution
Indicate in Google Webmaster Tools which parameters should be ignored, or add a rel="canonical" tag pointing to the parameter-free URL.
Pagination
When an article, product list, or tag and category pages have more than one page, duplicate content issues can occur even though the pages have different content, because they are all focused on the same topic. This is a huge problem on e-commerce pages where there are hundreds of articles in the same category.
Solution
Currently, the rel="next" and rel="prev" tags allow search engines to know which pages belong to the same category or publication, making it possible to focus all the ranking potential on the first page.
How to use the rel="next" and rel="prev" tags
1. Add the rel="next" tag in the <head> section of the first page:
<link rel="next" href="http://www.eexample.com/page-2.html" />
2. Add the rel="next" and rel="prev" tags to all the pages except the first and last:
<link rel="prev" href="http://www.eexample.com/page-1.html" />
<link rel="next" href="http://www.eexample.com/page-3.html" />
3. Add the rel="prev" tag to the last page:
<link rel="prev" href="http://www.eexample.com/page-4.html" />
Another solution is to look for the pagination parameter in the URL and enter it in Google Webmaster Tools so that it is not indexed.
Cannibalization
Keyword cannibalization occurs when several pages on a website compete for the same keywords. This confuses the search engine, which cannot tell which page is the most relevant for that keyword.
This problem is very common in e-commerce, because several versions of the same product all "attack" the same keywords. For example, if a book is sold in soft cover, hard cover and digital versions, there will be 3 pages with practically the same content.
Solution
Create a main product page, from which the pages for the different formats can be accessed, and include on each format page a canonical tag pointing to that main page. The best approach will always be to focus each keyword on a single page, to avoid any cannibalization problem.
3. Content
In recent years it has become quite clear that content is king for Google, so let's give it a good throne.
Content is the most important part of a website: even if it is well optimized at the SEO level, if it is not relevant to the searches users carry out, it will never appear in the top positions.
To do a good analysis of the content of our website you have a few tools at your disposal, but in the end the most useful approach is to view the page with JavaScript and CSS deactivated, as explained above. This way you will see what content Google is really reading and in what order it is arranged.
When analyzing the content of the pages you should ask yourself several questions that will guide you through the process:
- Does the page have enough content? There is no standard measure of "enough", but it should be at least 300 words long.
- Is the content relevant? It should be useful to the reader; just ask yourself whether you would read it. Be honest.
- Are the important keywords in the first paragraphs? Beyond these, use related terms, because Google is very effective at relating them.
- Is there keyword stuffing? If the content "sins" by an excess of keywords, Google will not be happy. There is no exact number that defines a perfect keyword density, but Google advises being as natural as possible.
- Are there spelling mistakes?
- Is it easy to read? If reading is not tedious, it will be fine. Paragraphs should not be too long, the font should not be too small, and it is advisable to include images or videos that reinforce the text. Always remember which audience you are writing for.
- Can Google read the text on the page? Avoid placing text inside Flash, images or JavaScript. Verify this by viewing the text-only version of the page: enter cache:www.myexample.com in Google and select that version.
- Is the content well structured? It should have its corresponding H1, H2 tags, etc., and the images should be well laid out.
- Is it linkable? If we do not give users a way to share it, they very likely will not. Include social sharing buttons in visible places on the page that do not obstruct the content, whether it is a video, a photo or text.
- Is it up to date? The more current your content is, the more frequently Google will crawl your website and the better the user experience.
4. Meta tags
Title
The title tag is the most important element among the meta tags. Keep in mind that:
- The tag must be in the <head> </head> section of the code.
- Each page must have a unique title.
- It should not exceed 70 characters, otherwise it will appear cut off.
- It must be descriptive with respect to the content of the page.
- It must contain the keyword for which we are optimizing the page.
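A sketch of a title that follows these rules (the page and keyword are invented for illustration):

```html
<head>
  <title>Buy hardcover books online | Example Store</title>
</head>
```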
Meta description
Although it is not a direct ranking factor, the meta description influences the click-through rate from the search results, so write it as a short advertisement for the page.
Meta keywords
Google ignores the meta keywords tag, so there is no need to fill it in.
H1, H2, H3... tags
Heading tags give the content its hierarchy: use a single H1 containing the main keyword, and H2/H3 for the subsections.
"alt" attribute in images
The "alt" attribute describes an image in text, which is the only way Google can "read" it.
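A sketch bringing these tags together on one hypothetical page:

```html
<head>
  <meta name="description" content="Compare the hardcover, paperback and digital editions of the book." />
</head>
<body>
  <h1>Hardcover books</h1>
  <h2>Best sellers</h2>
  <img src="cover.jpg" alt="Hardcover edition of the book" />
</body>
```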
Conclusion
You now know how to optimize a page for SEO, and that there are many factors to work on if you want to appear in the top positions of the search results. Now you will surely ask yourself: what are the keywords that best position my website?