This section of the SEO Guide is designed to help beginners optimize the on-page elements of their web pages, and get the most value possible out of their site before worrying about getting links.
Internal linking is the process of using a website’s navigation, content, and other linkable elements to develop a logical and coherent linking structure to highlight and emphasize the importance of the keywords for which the website is aiming to rank.
Internal linking contributes to the usability and search engine friendliness of a website in a variety of ways:
- Internal links, whether text links or image links, allow for creating internal navigation links in a much more precise and less conspicuous (compared to main and side navigation) way. This helps both users and search engines find relevant and complementary content.
- If utilized properly, internal links can help create a topic hierarchy which can assist search engines in better categorizing (and assigning value) to content.
- Internal links, especially text links, help push link value from your home (or top-level) page(s) deeper into your website, which can then translate into an improved ability of those secondary, tertiary (or even deeper) pages to rank.
There are three navigational structures (main, footer, side) which are used, in combination or individually, on websites. Each of these navigational units can help contribute to improved rankings; however, before they can contribute in a positive way, we first need to make sure they are not creating problems and hindering potential rankings.
Generally, the main navigation appears below the masthead as a row of links (and often drop-downs) which allow users to navigate through the website. Since the placement of the main navigation usually results in a lot of links before any of the website content is displayed, it is important to make sure that even if the navigation does not take keyword-specific optimization into account, that it at least is not a drag on rankings. For example, if you have a large number of categories of products (services, topics, etc), do not try to stuff every single category into your main menu; this will create a link landscape on your pages which is top-heavy, and will take away from the ability of every page on your website to reach its full potential when it comes to rankings.
The secondary navigation, which can usually be found on the side of certain pages, can help take the load off the main navigation by making it unnecessary to have multiple nested drop-downs, which inevitably make almost every page top-heavy. Instead of giving less important sections or sub-sections of your site their own dedicated main navigation links, it is helpful to have those secondary links appear on the side of your pages where appropriate. For example, if you are selling widgets which are categorized by type of material, colour, and size, instead of having a drop-down menu in your main navigation that lists all those options, you can have one link in your navigation which points to the widgets page. The side navigation would then give your visitors (and the search engines) the options to dig deeper into your subcategories based on material, colour, and size.
Naturally, not every possible variation can be covered here, but the above example should give you a good idea as to how you can utilize your side navigation to ease the burden on your main navigation.
Internal text links are a method of interlinking pages within a single domain. If implemented properly, internal text links not only contribute to better search engine rankings, but also make your website more user-friendly by providing in-text navigational options for site visitors.
The text link is different from a traditional navigational link in that it appears within textual content, as opposed to a formatted navigational block.
Structure of a Text Link
A text link is made up of two basic parts, the anchor text (the text which is linked – generally appearing as a different colour than the surrounding text and/or highlighted by an underline), and the destination link (the URL/page to which the link is pointing).
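In HTML, both parts map onto the anchor element; the sketch below is illustrative (the URL and anchor text are hypothetical, not from the original):

```html
<!-- Anchor text: "SEO maintenance"; destination link: the href URL -->
<p>Once your site is optimized, ongoing
<a href="https://www.example.com/seo-maintenance.html">SEO maintenance</a>
keeps those rankings in place.</p>
```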
How to Use Text Links
Thinking about text links as textual markers to additional information or resources will help you not only use them properly, but perhaps more importantly, not abuse them. Text links are words or phrases within sentences and paragraphs which link to appropriate and relevant content, helping expand on a subject without encroaching on the current line of thought or information. For example, if you operate a website selling SEO services (just like mine), on a particular page you may talk about how beneficial it is to have local SEO, and within that content you make a mention of the importance of maintaining your SEO. Instead of having an entire section about SEO maintenance on the same page where you are touting the benefits of local SEO, you can link from that page to the SEO maintenance tutorial page (assuming you have one; you should create one if you don’t already). This will help guide both users and search engines to additional information you have published on your website about SEO.
SEO is extremely useful for online businesses; however, to make sure that the SEO performs at its highest efficiency, it is important to set out an SEO maintenance & strategy plan outlining the deliverables.
Link Destination Consistency
Link destination consistency refers to a website’s internal linking structure and the idea that the target pages for specific keywords should not change from link to link. For example, if you are linking the word ‘Ireland SEO’ five times from multiple areas on your site, all those links should point to the same page on your site, instead of a few pointing to ireland-seo.html and a couple to local-ireland-seo.html. This helps promote the relevancy of ireland-seo.html to the keyword ‘Ireland SEO‘. Of course, if there are abbreviations, contractions, or other representations of the same topic on your site which are being linked, then those should be pointed to the same destination as well.
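Using the example above, a consistent internal linking structure would look something like the following sketch (paths and surrounding pages are hypothetical):

```html
<!-- All links for the keyword 'Ireland SEO' point to the SAME page -->
<a href="/ireland-seo.html">Ireland SEO</a>      <!-- from the homepage -->
<a href="/ireland-seo.html">Ireland SEO</a>      <!-- from a blog post -->
<a href="/ireland-seo.html">SEO in Ireland</a>   <!-- variant phrasing, same destination -->

<!-- Avoid: the same keyword pointing at a competing destination -->
<!-- <a href="/local-ireland-seo.html">Ireland SEO</a> -->
```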
Anchor Text Variety
Just like the importance of being consistent, it is essential to have natural variety as part of your search engine optimization efforts. This is doubly true for your link profile, both internal and external. Since you have absolute control over your internal links, it is important to take full advantage of this opportunity and make sure that there is natural variety in your anchor text.
Anchor text variety is the process of using similar but varied anchor text for linking to pages dedicated to the subject topic. For example, your website may have a page about ‘SEO services,’ and of course your goal should be to link to that page consistently with keywords that describe the content closely; at the same time, it is important to make sure that not every link pointing to the ‘SEO services’ page has the same anchor text. Let’s say that you have five links from across your site pointing to your ‘SEO services’ page. Below are examples of what a varied link profile would look like (as compared to five links with ‘SEO services‘ as the anchor text).
- SEO services
- SEO Ireland
- SEO services Ireland
- SEO service
- affordable SEO services
As you’ll note, these are very similar terms, but they introduce variety into the internal link profile which doesn’t make your internal link effort look forced and spammy.
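In markup, the varied profile above might look like this (the destination file name is a hypothetical placeholder):

```html
<!-- Five internal links, varied anchor text, one consistent destination -->
<a href="/seo-services.html">SEO services</a>
<a href="/seo-services.html">SEO Ireland</a>
<a href="/seo-services.html">SEO services Ireland</a>
<a href="/seo-services.html">SEO service</a>
<a href="/seo-services.html">affordable SEO services</a>
```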
Though they are sometimes neglected, images can be an important part of search engine optimization efforts. Images can be anything from a submission form button to a panoramic picture of a vacation destination. Each image on a website has several ways in which it can be optimized to help contribute to the search engine optimization process.
Image File Name
The file name of the image can be used to identify what the image is about. For example, if the image is of a consultancy business, then the image file could be named ‘seo-consultancy-business.jpg’. Of course, you have to keep in mind that though this practice can be useful, it can also be detrimental to SEO if it is overdone – having a file named ‘seo-consultancy-business.jpg’ is perfectly fine, but naming it ‘seo-consultancy-business-is-great-for-doing-what-seo-consultancy-business-are-made-for.jpg’ is not. As a rule of thumb, use the minimum number of words in a file name necessary to describe what the image contains.
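Put into markup, the rule of thumb above looks like this (the image path is illustrative):

```html
<!-- Good: short, descriptive file name -->
<img src="/images/seo-consultancy-business.jpg" alt="SEO consultancy business">

<!-- Bad: keyword-stuffed file name (kept here only as a comment) -->
<!-- <img src="/images/seo-consultancy-business-is-great-for-doing-what-seo-consultancy-business-are-made-for.jpg"> -->
```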
Image Alt Attribute
The image alt attribute was designed to allow web developers to include a description of the image within the code in case an image fails to load, or if the browser being used to view the page is not capable of displaying images (e.g., text-based browsers and screen readers).
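For example, a succinct alt attribute on the SEO consultancy business image might look like this (the file name and wording are illustrative, not from the original):

```html
<img src="seo-consultancy-business.jpg" alt="SEO consultancy business">
```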
Here we can see that the alt attribute added to the SEO consultancy business image describes the image in a succinct yet useful way. Some variations of the alt attribute that are better left unused:
- alt=”Best prices on SEO consultancy business” (this does not describe the image)
- alt=”This is the picture of an SEO consultancy business” (it has extraneous and redundant information)
Don’t stuff your alt attributes with keywords; instead, try to make them as useful as possible. Another way of deciding whether an alt attribute is good or bad is to remember that screen readers (tools for making web pages accessible to the visually impaired) use the alt text to provide a description of the image to the user. So make sure your alt attributes are short and descriptive, so that if you could not see the image and the alt attribute were read to you out loud, the description would be useful.
Image Title Attribute
The title attribute is added to HTML tags (e.g., images, links, block-level elements) and provides supplementary information (in the form of a small pop-up) when the user’s cursor hovers over the element. For example, if you have a ‘Contact Us’ button on your page, you can use a title attribute to briefly explain what type of contact form the user will see when they click on the button.
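The ‘Contact Us’ example might be marked up along these lines (the URL and wording are hypothetical):

```html
<a href="/contact.html"
   title="Opens a short contact form for booking a free consultation">Contact Us</a>
```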
Images can be used to create navigational links within a website, and can contribute to the optimization of the internal linking structure. Since images have no textual information that the search engines can ‘read,’ the search engine spiders use the Alt (and to a lesser degree the Title) attribute to determine what the image link is about.
The title tag is a required part of all HTML/XHTML documents, and plays an important part in both user experience and search engine optimization. The title tag’s intended use is to provide a concise and accurate description of the content of a webpage. By including relevant keywords, webmasters can make a significant impact on a website’s rankings, making descriptive and keyword-rich title tags one of the most important aspects of on-page search engine optimization.
The title tag is displayed to users in both search engine results, and almost every major browser, including Google Chrome, Internet Explorer, Safari, Firefox and Opera. In a browser, the title tag is displayed either at the top of the window or in each active tab.
As with the meta description, search engines will only display a limited number of characters in search results; page titles that exceed that limit will be shortened, and an ellipsis (…) will replace the clipped content. Webmasters and website owners can avoid this cut-off problem by limiting page titles to 70 characters or less. At the same time, it doesn’t mean that every single title should try to use the entire space available – the title can be short as long as it is descriptive and informative.
Search engines like Google will bold any words in the title that match the user’s search query, giving your listing greater visibility in search results.
The best way to create effective page titles starts with creating a list of relevant keywords for a page. Once keywords have been selected, webmasters should create a page title that reflects these keywords. Page titles should ideally be formatted in one of two ways:
- Primary keyword – Secondary Keyword | Brand Name
- Brand Name | Primary Keyword & Secondary Keyword
Note: It is not absolutely necessary to include a brand or company name in the title since they will take up space; however, doing so is good for branding purposes.
The more important a keyword is, the closer it should be placed to the beginning of the page title. Creating page titles that reflect the order of keywords on-page is also beneficial to search engine optimization. For example, if a page has a section for affordable tables, one for blue tables, then one for wooden tables – an appropriate title could be “Affordable, Blue, Wooden Tables”.
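Using the table example above, the title tag sits in the document head; the brand name below is a placeholder:

```html
<head>
  <title>Affordable, Blue, Wooden Tables | Example Brand</title>
</head>
```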
Meta tags are HTML elements which provide metadata about a web page. Two of the major meta tags are descriptions and keywords. Each meta tag has a specific function and can be used to provide search engine spiders or web browsers information about the content or structure of the page. Some meta tags can be a minor but useful part of successful on-page search engine optimization.
A meta description is an HTML element designed to help provide an explanation of the webpage’s content. Search engine result pages, or SERPs, display meta descriptions as a preview or snippet of the information contained on a webpage.
Social networks like Google+, LinkedIn and Facebook pull information from meta descriptions when users show or share content.
Clear and concise meta descriptions allow users to accurately determine what information is on a given webpage. When a user enters a search term that pulls up a related webpage, a helpful meta description will often lead to an increase in relevant click-throughs. In terms of search engine optimization it is better to think of meta descriptions as a conversion factor, something that will entice users to visit a page, rather than a method to raise rankings.
By extension, every meta description should be unique to the page that it describes. Copying meta descriptions from other pages and reusing them is not helpful to users and is certainly not going to have a positive impact on organic rankings.
Meta descriptions may be any length, but search engines, social networks, and social bookmarking sites generally shorten the length of the description to between 150 and 160 characters long (including spaces, quotes and punctuation). To ensure that a meta description will not get shortened by search engines, try to keep the page summary concise and avoid excessive punctuation that might push the description over the arbitrary character limit.
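A meta description is placed in the head of the page; the wording below is illustrative:

```html
<head>
  <meta name="description"
        content="Hand-made wooden tables in a range of colours and sizes, with free delivery across Ireland.">
</head>
```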
A meta keyword is an HTML element which was originally designed to help search engine spiders evaluate the content and relevancy of a page to a search query. However, search engines no longer use meta keywords for this purpose: the tag was so widely abused that it became unusable as a ranking signal, and the search engines eventually stopped assigning any value to it. You can safely forego developing keyword meta tags during the optimization process; but if you decide to include them anyway, follow the rules below:
- No Keyword Stuffing: don’t just write a large list of keywords or keep repeating the same keywords over and over.
- No Massive Keyword Lists: don’t include every keyword you can think of on every page – only include a handful of relevant keywords for each page.
- No Repetition: don’t use the same keyword meta tag on every page of your site.
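If you do decide to include the tag, a restrained version respecting the rules above might look like this (keywords are illustrative):

```html
<meta name="keywords" content="wooden tables, blue tables, affordable tables">
```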
Headings & Sub Headings
Headings (and subheadings) help provide structure to your content by organizing and breaking it down into smaller sections. Heading tags range from H1 through H6, with H1 being the largest in size. The size of the heading directly correlates to its importance both for readers and search engine spiders.
Headings (H1-H6) can and should be thought of as titles and subtitles (or headings and subheadings), and should be used exactly as they were intended. The most important topic of your page would have an H1 heading, followed by a subsection of that topic which should have an H2 heading, etc. This structure assumes that you have subsections to your topic on a particular page; if you do not, then of course you would simply use an H1 as the main heading and leave it at that.
Each page on your site should only have one H1 tag. Just because H1 headings are deemed the most important on a page, it does not follow that by putting all your headings (and worse, all of your content) in an H1 format your page will be deemed more relevant. This type of activity will not help improve your website’s chances of obtaining better rankings, and may even be a negative force (due to its spam quality).
Just like the title and meta tags, the headings should be descriptive; however, avoid headings which consist of a list of keywords. What you should aim for is a heading that captures the information contained on a given page in the most general form possible, while making use of keywords and key phrases where it is practicable to do so organically.
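The hierarchy described above maps directly onto the heading tags; the topics below are placeholders:

```html
<h1>Local SEO Services</h1>      <!-- one H1: the main topic of the page -->
  <h2>Why Local SEO Matters</h2> <!-- subsections of the main topic -->
  <h2>Our Local SEO Process</h2>
    <h3>Citation Building</h3>   <!-- sub-subsections, only where needed -->
    <h3>Review Management</h3>
```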
Text Emphasis – Bold (strong) & Italic (em)
Bolded or italicized text is used to emphasize or draw the eye to important concepts, ideas or information in the content of a webpage. Webmasters can achieve the look through the use of four HTML tags: <i>, <em>, <b> and <strong>. The <i> and <em> tags display text in italics, and the <b> and <strong> tags display text in bold. Each of these tags provides a little bit of value to search engines – for those keywords that are emphasized – while serving a purpose, visually, in the content of a webpage.
The key difference between the <i> and <em> tags lies in presentation versus structure. To the average user, the tags could be used interchangeably, as both display text on a screen in italics. A screen reader approaches each tag differently, however. When reading content, a screen reader will overlook the <i> tag as a visual element. When reading the <em> tag, a screen reader will say the text with emphasis.
Like with the <i> and <em> tags, both the <b> and <strong> appear similar when displayed on a screen. When read by a screen reader, the <b> tag will be overlooked, while the <strong> tag will be spoken to a user with strong emphasis. The <b> tag is presentational; the <strong> tag is structural.
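The four tags render similarly but differ in meaning, as the comparison below shows:

```html
<p>This is <i>presentational</i> italics; this is <em>structural</em> emphasis.</p>
<p>This is <b>presentational</b> bold; this is <strong>structural</strong> strong emphasis.</p>
```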
The alt attribute is used to provide alternative text (alt text) to be rendered when the element to which it is applied cannot be rendered (e.g., a missing image file or video file). The alt text is also utilized by screen readers (generally used by the visually impaired).
A visually impaired reader will hear the alt text in spoken language, so the description should impart the exact information that the image is meant to convey. For example, if the image in question is of a flag, and the flag is not necessarily being used to refer to a country, the alt text can read something along the lines of “A flag blowing in the wind.” However, if the image is meant to refer to a specific country’s flag, then the alt text could read “The Irish Flag.” So the alt text is based on the intention with which the image was used.
For search engine optimization purposes, the alt text can be used to not only describe the intention of the image, but to also incorporate keywords which are relevant into the alternative text. It is important to remember that the alt tag is not to be used to simply include a list of keywords or a sentence stuffed with keywords.
The title attribute is used to provide additional information about an HTML element such as an image or link. The title attribute usually shows up in browsers as a tooltip which pops up when the cursor hovers over the element which has a title attribute. Though the title attribute does not have any direct impact on rankings, it is important to avoid doing anything that could at some point harm rankings, so do not use the title attribute to stuff keywords into the HTML code. The title attribute should complement the alt text, so don’t just repeat the same text in both attributes. Even though the title attribute does not have a direct impact on SEO, it can be a helpful tool for users, providing incentives for additional clicks and longer time spent on a website, which can indirectly impact a site’s rankings.
The idea here is exactly as the heading describes: to clean up your HTML code so it is as efficient and lean as it can be. There are a variety of basic changes you can undertake to help give your pages a better content-to-code ratio (the less code relative to content, the better).
- Optimize image sizes for faster load time – Everyone wants to have great looking images on their web pages, but those images should not interfere with the quick loading of pages. It doesn’t help to have high resolution images which are then reduced in size to fit on your page. All this does is make your pages load slower. So instead of having large images, which are resized for use, why not use images that are the exact size that you need? Also make sure you are using the appropriate file format for the situation, so avoid using a .BMP format when a .JPEG, .GIF, or .PNG would do just fine.
- Avoid using tables – For a while now, web developers have moved away from using tables and are using CSS-based site structures instead, which cuts down on the amount of code on the page. However, there are still some web developers who have either not acquired the new skills necessary to work with DIV elements and CSS to construct their pages, or have not yet updated their clients’ websites to eliminate the use of tables for general website layout. It could also be the case that you developed your website yourself and need help to make this transition. Regardless of the reason, it is important for SEO purposes to move away from table-based layouts to help slim down your HTML code. This will not only help the page load slightly faster, but also improve the content-to-code ratio, which can positively impact organic search engine rankings.
- Eliminate unnecessary code – Remove any part of the code that is not necessary to make your pages function properly. For example, if you have excessive commenting (especially if it was used to insert keywords into your code), then trim down the comments to leave only the essentials. Another thing to look for is the excessive nesting of DIV layers, paragraphs, and other HTML elements which generally come along with the lack of mastery over CSS. So eliminate as much nesting of code as possible to reduce the amount of HTML on your page.
- W3C validation – Though this is not crucial, and your site can rank decently without a properly validated HTML, a W3C validation will help you eliminate a lot of the problems that creep up on developers and website owners. So try to make your pages standard compliant when possible.
Use the W3C Markup Validator to validate your code.
The URL has been used and abused by those trying to do SEO for a long time; however, making your URLs contribute to your SEO efforts requires adherence to a few important rules. These rules will not only help you avoid problems (i.e., creating spammy URLs), but also create optimized and user-friendly URLs that will contribute positively to search engine optimization.
- Short URLs are better – the shorter the URL, the more useful and helpful it will be. Keeping the URL structure clean will also help you avoid creating a spammy URL.
For example: instead of using www.example.com/widget-category/red-widgets-los-angeles-ca.html it would be better to have www.example.com/widget/red-widgets.html.
- Avoid using dynamic URLs – but if you must, try to keep them to one or two parameters.
- Use hyphens to separate words – Only use hyphens to separate words in URLs. Do not use underscores, spaces or any other character. This will not only help make the URL easily legible for users, but also make it easier to decipher for search engine spiders. But do keep in mind to avoid long URLs with multiple hyphens, as they will make the URL seem (or be) spammy.
- The fewer directories the better – Avoid creating a new directory in your URL if you can. This does not mean that every page should be in your root directory.
For example: instead of having www.example.com/category/widgets/color/red.html you can benefit by using www.example.com/widgets/red.html.
- URLs should be descriptive – Try to keep URLs as descriptive as possible while maintaining brevity.
For example: instead of using www.example.com/product-category-5/product-id-1234.html, use www.example.com/widgets/red-wooden-widget.html.
Off page SEO refers to factors which can impact a website’s rankings that do not involve the modification of the website. This, in most part, involves link development. So let’s see what links and link development are all about.
What is PageRank®?
PageRank® is the fundamental algorithm on which Google’s rankings are based; however, it is no longer (and hasn’t been for a few years) the overwhelming factor in a page’s ability to rank. There are two types of PageRank®: one which Google uses internally, a rational number between 0 and 1; the other, the public PageRank®, which is what used to show up on the Google Toolbar, but is now only accessible through third-party browser plugins or extensions, and is an integer between 0 and 10. In both cases, the higher the number, the more ‘powerful’ the page.
If you’d like to learn more details about Google’s PageRank®, you can find a detailed article on the American Mathematical Society’s website: How Google Finds Your Needle in the Web’s Haystack.
A web page’s PageRank® is calculated based on the incoming links from external pages (either on the same site or third-party websites). Each link from another page is counted as a vote; however, not every vote has the same value and impact on rankings. The PageRank® of a web page is dependent only on link value and absolutely nothing else. The Google algorithm may decide not to assign the value of inbound links to a particular page; however, whatever value it does assign will be based on links, and not content or any other criteria.
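This link-only dependence can be seen in the simplified formula from the original PageRank paper, where d is a damping factor (commonly set to 0.85), T_1 … T_n are the pages linking to page A, and C(T) is the number of outbound links on page T. Note that this is the published simplification; Google’s production algorithm has long since diverged from it.

```latex
PR(A) = (1 - d) + d \left( \frac{PR(T_1)}{C(T_1)} + \dots + \frac{PR(T_n)}{C(T_n)} \right)
```

The core idea survives in practice: each link passes along a share of the linking page’s value, divided among all the links on that page.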
To learn more about the inner workings of PageRank®, see The Google Pagerank Algorithm and How It Works (external)
Since links are important to a website’s ranking ability, link development is an important aspect of search engine optimization. Technically, and at the most basic level, the more incoming links a web page has, the higher its PageRank® will be. Of course, the PageRank® of the pages linking to the site in question is also important – the higher the PageRank® of the linking page, the more value it can pass on to the site to which it is linking. But don’t go running after high PageRank® links yet; there is more to it than that. Check our Link Development Guide before doing anything; it will save you a headache.
Note: Just cross-linking the pages of your own website will not increase your PageRank® (it will just pass it around from page to page).
Even though on-page optimization is very important, inbound links are crucial for pushing a website to the first page of search engine result pages. With that said, it is important to know that not every link is a good link, and being able to recognize good from neutral or bad can make or break (and sometimes obliterate) a site’s rankings. Here we will discuss the types of links, structure and formatting, relevance, and placement.
Types of Links
- Image Links: These are links which do not have any anchor text and are comprised of just an image linked to the destination URL. An image link can be valuable, but it is not as desirable as a text link with keyword-rich (this does not mean keyword-stuffed) anchor text.
- Text Links: Text links, as the name suggests, are the traditional type of link that everyone is used to seeing on every webpage. Text links are highly desirable as they allow for specific keywords to be linked to the destination URL. This is advantageous because the search engine algorithms assign more relevancy to the link if the anchor text is in line with the content of the page to which it is pointing.
Link Relevance & Origin Quality
We cannot overemphasize the importance of the relevance of the page from which the link originates to the page to which it points. The relevance of the page, and the quality of the site of which it is a part, are the factors that can make a link highly useful or potentially disastrous.
The relevance of a link is determined by the content of the site (or page) from which it originates as compared to the content of the site (or page) to which it points. The more closely the topic of the page/site giving the link is related to the page/site receiving it, the more value the search engine algorithms will assign to that link. As a result, the page being linked to will see more significant improvements in rankings than it would otherwise.
Link Origin Quality
Aside from relevance, the quality of the page from which the link originates also impacts the value that it is able to pass along to the destination page. In this case quality does not refer to how technologically advanced a page/site is or how intricate and awe inspiring the visual elements are, but rather, it refers to the quality of the content on the page which is in turn determined by the quality and relevance of the links which the linking page/site has earned.
Note: To learn more about link development techniques, we advise you to look at The Advanced Link Building Guide at the IrelandSEO.net Blog.
“Universal Search” is a feature of Google search which pulls data from multiple data sources to display on the search engine result pages (SERPs). This includes videos, images, news headlines, and local search results. Universal Search is designed to make search results more user-friendly, broadening the results from simple text only, so that users can view all the cross-media search results on one page.
Images include anything from a submission form button to an actual photo image. When adding images, it is necessary to optimize them by employing the ‘alt’ attribute. This allows useful text to be provided in the event that the image does not load properly. To further understand how this is done, refer back to the section on Image Optimization for an in-depth look at this function.
In Universal Search results, videos have an approximately 41% higher click-through rate than their plain-text counterparts, meaning that this area should be a key focal point in marketing.
Basic Video SEO
- Ensure your video content is of high enough quality to rank in the first place. (Note: Nearly 100% of videos returned in universal searches also ranked on the first page of their native platform, so if your video is not yet ranking well on its home platform, it is unlikely to rank in a universal search.)
- Host your video on the most frequented platforms, such as YouTube.
Keyword Dos and Don’ts
- DO NOT USE transactional keywords (such as ‘buy’, ‘cheap’, ‘free’ and ‘sale’) or navigational keywords (such as brand descriptions, names, site URLs, etc.) as these tend to signal a more spammy quality.
- DO USE informative words like ‘how to’, ‘learn’, ‘what is’ and ‘history of’ as these types of terms aim to address a question or solve a problem and tend to rank better.
Local Business Listings
Local business listings (on Google it is called Places for Business) allow businesses to have a presence which stands out from the rest of the organic results. This distinction is based on many factors that Google takes into consideration (e.g., if the search term indicates an intention by the searcher to find a local business).
The Local Business listing not only includes a link connecting to the homepage of the business, but it also makes information accessible to search engines, such as user ratings, business location, categories of business, etc.
There are many factors that Google takes into consideration, most of which are not publicly known, when assigning rankings to local results. One such factor is the rating that a business or business location has received from Google users – naturally, the better the reviews, the better the rankings (generally). Google (and, to a certain extent, other search engines) also utilize signals such as local citations and proximity to the city center when it comes to local rankings.
Tip: One way to further help rankings is to have positive reviews and star ratings from clients who rate the company on Google. Google also considers ratings from other sites, such as Yelp, which can likewise improve rankings.
In order for a company’s site to be included in Google News searches, the company must apply by submitting its site for consideration. Google has a list of guidelines for companies to review before submitting their website for inclusion. Websites are accepted based on the content of their news reporting, the frequency and authority with which they publish articles, the number of posts and users, etc. In addition, there are technical guidelines that reflect whether or not the site is user-friendly and whether Google’s computer algorithms can crawl the site. To learn more, please visit the Google News inclusion requirements.
Website Content, Structure & Organization
To strategically organize web content, one of the primary questions to ask is: “Is the site easy to navigate, and can users find the information they seek?”
To help improve the chances of your website ranking higher in the search engines, it is important to ensure that the site is easy to navigate. This is accomplished by arranging information in a logical pattern: thematically arranging content into silos through URL structures and internal linking, applying appropriate keywords for page theming, and so on. This gives the site structure, as opposed to scattering random articles and disorganized thought-flow across it. The purpose of good website organization is to ensure solid structure and consolidation of content in a logical pattern, so that the site is straightforward to navigate, both for users and for search engines as they crawl and index your page(s).
Topic Scattering Analysis
Topic Scattering Analysis is the process of analyzing the existing content on your website to identify areas where content for specific topics is scattered across multiple pages/sections, and purposefully organizing that content into larger cohesive units of information. The analysis begins by checking whether or not the topic is being covered in multiple silos or sections of the site, then organizing the information and content in a way that makes sense and is easy to digest. Topic Scattering Analysis is crucial for consolidating the information and content on your website and providing search engine spiders logically structured information to digest.
As you evaluate the topics on each page, check whether there is any opportunity to merge the information into one location. For example, if you have one page covering details on a wooden table and another page with details about a red table, it might be helpful to combine and consolidate the information into one space, provided the content does not benefit from having two distinct pages for fairly similar information. Combining content enables Google to consolidate ranking signals on important pages and to crawl your site more effectively.
Theming Evaluation (Siloing)
Theming, or siloing, refers to the organization of a website’s content by concentrating related topics within a well-thought-out directory structure which houses content that targets keywords with progressive specificity.
- www.example.com/widgets : A section which would target top-level and generic keywords about widgets.
- www.example.com/widgets/counterfeit.html : A section which would target secondary keywords having to do only with counterfeit widgets.
- www.example.com/widgets/counterfeit/how-to-recognize.html : A very specific page targeting keywords which have to do with learning how to identify counterfeit widgets.
This exercises the same concept of logically arranging information on your website, except that instead of consolidating data onto one page, it analyzes where separations of content can be made. In cases where too much information is being packed onto one page, you have to evaluate the content distribution and see if there might be a better placement on the site. Analyze which topics might be expanded, and which subjects could be separated onto different pages, for a more user-friendly website.
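As a rough illustration of this kind of evaluation, the Python sketch below groups a flat list of page paths by their top-level silo, which makes it easy to spot a topic scattered across multiple sections. All page paths here are hypothetical examples, not a prescribed structure.

```python
from collections import defaultdict

def group_by_silo(urls):
    """Group page paths by their top-level directory to see how topics
    are distributed across the site (a rough topic-scattering check)."""
    silos = defaultdict(list)
    for url in urls:
        # The first path segment is treated as the silo name.
        top = url.strip("/").split("/")[0] or "(root)"
        silos[top].append(url)
    return dict(silos)

# Hypothetical pages: note "red widgets" content living in two silos.
pages = [
    "/widgets/red.html",
    "/widgets/counterfeit/how-to-recognize.html",
    "/blog/red-widgets.html",
]
print(group_by_silo(pages))
```

If a topic shows up under several silos, that is a candidate for consolidation or for a clearer directory placement.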
Search engines consider every URL to be a unique object or page. Every instance of duplicated content, regardless of the purpose of the page, will negatively affect rankings if it is allowed to be crawled by a search engine. It is sometimes necessary to have two (or more) pages with the same content; however, even if the content is helpful to users and makes sense, its presence in the search engine indices will cause ranking problems. It is recommended to exclude exact (or even similar) copies of any content from the search engines, or, if possible, to avoid having duplicate content to begin with.
Duplicate content can be caused by a number of things, including URL parameters, printer-friendly versions of pages, session IDs, and sorting functions. These kinds of pages tend to be a normal, helpful part of a website, but they still need to be addressed in order to avoid serving duplicate pages to the search engines. There are several recommended methods for fixing duplicate content: 301 redirects, the rel="canonical" tag, robots.txt exclusions, and the noindex meta tag.
A 301 redirect, or permanent redirect, sends both users and spiders who arrive on a duplicate page directly to the original content page. These redirects can be used across subfolders, subdomains, and entire domains as well.
The rel="canonical" attribute acts similarly to a 301 redirect, with a few key differences. The first is that while a 301 redirect points both spiders and humans to a different page, the canonical attribute is strictly for search engines. With this method, webmasters can still track visitors to unique URLs without incurring any penalty.
The tag which can carry the canonical attribute is structured as follows.
Example: <link rel="canonical" href="http://www.example.com/original-content.html" />
The <link> tag would be placed in the <head> of the HTML document which needs to assign attribution to the page which the search engines should deem the original.
Webmasters can also exclude pages from search engines through the use of a noindex meta tag on specific pages. Using the noindex meta tag, webmasters can ensure the content of that page will not be indexed and displayed in the search result pages.
- <meta name="robots" content="noindex" />
- <meta name="robots" content="noindex,nofollow" />
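As a quick illustration of how such a directive is read, the Python sketch below uses only the standard library to check whether a page carries a noindex directive. The sample markup is hypothetical.

```python
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collects the directives of any <meta name="robots"> tags in a page."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name", "").lower() == "robots":
            self.directives.extend(
                d.strip().lower() for d in attrs.get("content", "").split(","))

def is_noindex(html):
    """Return True if the page asks search engines not to index it."""
    parser = RobotsMetaParser()
    parser.feed(html)
    return "noindex" in parser.directives

page = '<html><head><meta name="robots" content="noindex,nofollow" /></head></html>'
print(is_noindex(page))  # → True
```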
The final recommended method involves using a robots.txt file. Using robots.txt, webmasters can provide directives to search engine spiders to keep them from crawling certain parts of a website. The URLs of these pages may still show up in some search engine indexes, but only if the URL of the page is searched for specifically.
Tip: While official search engine bots (spiders) will follow robots.txt protocol, malicious bots often ignore them entirely.
If placed within the robots.txt file, the directive below would prohibit the Bing spiders from crawling and indexing the ‘widgets’ directory.
User-agent: bingbot
Disallow: /widgets/
Thin content describes both pages which have very little content and pages which may have a lot of content of little value. The latter is the more accurate description, as there can be pages with very little content which are useful (i.e., if a topic only takes a few sentences to cover, then it is not necessary to generate encyclopedic volumes of content for it).
According to Matt Cutts, the head of Google’s web spam team, thin content contributes either very little or no new information to a given search. This problem is particularly common for e-commerce sites that may have hundreds or thousands of pages for different products with only minimal product details and information.
The best long-term solution to this problem is simply to create unique content for every web page which might contain duplicate or lackluster information. By supplementing repeated information with sections of unique text, like a thorough description, review, opinion, video, or brief editorial, webmasters can increase their website’s relevance to search engines.
The canonical tag can be used to help avoid creating duplicate content by specifying the original publication page of a piece of content. One specific use for the canonical tag would be on a page which lists products and has sorting functions which produce different URLs depending on how the products are being sorted. In this case, any variation in sorting from the default presentation can utilize a canonical tag to indicate that the original URL is the only one that should be indexed.
For example, if a webpage listing a variety of widgets has the URL www.example.com/widgets.html and offers a sorting link (www.example.com/widgets.html?sort=price) to allow the widgets to be sorted by price, then it becomes necessary to utilize the canonical tag on www.example.com/widgets.html?sort=price to indicate that the original content is housed at www.example.com/widgets.html, and that www.example.com/widgets.html?sort=price should not be indexed.
The canonical tag on www.example.com/widgets.html?sort=price would look like this and be placed in the <head> of the document.
<link rel="canonical" href="http://www.example.com/widgets.html" />
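The same mapping can be automated. The Python sketch below derives the canonical URL for a sorted listing page by stripping presentation-only query parameters; which parameters count as presentation-only is an assumption made for illustration.

```python
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

# Hypothetical list of parameters that change presentation, not content.
PRESENTATION_PARAMS = {"sort", "order", "view"}

def canonical_url(url):
    """Strip presentation-only query parameters so every sorted variant
    of a listing page maps back to one canonical URL."""
    parts = urlparse(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query)
            if k not in PRESENTATION_PARAMS]
    return urlunparse(parts._replace(query=urlencode(kept)))

print(canonical_url("http://www.example.com/widgets.html?sort=price"))
# → http://www.example.com/widgets.html
```

The resulting URL is what would go into the href of the canonical tag on each sorted variant.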
Website Design, Development & Structure
Responsive design is an approach to website design that allows users to view all of the content on a site regardless of platform (it is platform-agnostic). Responsive web design, at its most basic, is a combination of adjustable screen resolutions and resizable images that can be stretched, squashed, or even overlapped to allow users to navigate a website without having to zoom in and out to see the entire content of the page on a mobile device or tablet.
Responsive design achieves this flexibility through a number of means–most importantly by dividing portions of a website up into fluid grids and flexible images. When a user accesses the website using a device with a wider screen, like a personal computer, the elements within the grid will expand to fill the new area. Similarly, when the screen is smaller, like the display of a mobile phone, those same elements can decrease, becoming narrower, or even be re-organized entirely while still displaying the same information.
Responsive website design allows webmasters to avoid creating separate pages for PC, mobile phone and tablet users. More and more devices have mobile access to the internet, making the use of responsive web design increasingly beneficial, and often necessary. Utilizing responsive design can also help prevent lowered ranking by avoiding the serving of duplicate content to the search engines.
Despite its many positive features, employing responsive website design may not always be the best choice for webmasters and website owners. Implementing this sort of web design takes a significant amount of time, technical and development know-how, and often a team of designers to execute it properly. For smaller websites with lower budgets, responsive web design might not be the correct choice. Websites that display a particularly large amount of content can also have difficulty with responsive website design, as heavy content quantity can be difficult to fit into resizable grids. Also, if a website has a complex user interface or navigation system, such as Amazon, it becomes increasingly difficult to resize the screen appropriately.
Flash is one of the technologies that became popular in the 2000s and caused a lot of havoc for organic search engine rankings, as websites based solely on Flash could not provide what the search engines needed to properly evaluate the contents of the site/page. So, if you are going to use Flash, make sure that your entire site is not comprised of Flash, and that you are only using Flash components that are not going to take the place of your main content.
CSS Instead of Graphical Menus
Images are powerful tools for making a visual impact; however, it is important to make sure that this does not come at the expense of the search engine friendliness of the website. Instead of using fully image-based menus, utilize cascading style sheets (CSS) to make your navigational menus not only visually impressive, but also search engine friendly.
Using a content management system (CMS) can make the life of a site owner much easier and, once set up, require less support from a webmaster for maintenance and updates. However, picking the wrong content management system, or setting it up in a manner not conducive to good search engine rankings, can be detrimental to the website’s ranking ability. So it is important to pick the right content management system for your needs while keeping its search engine friendliness at the forefront of your decision-making process. As a hint for anyone uncertain, there is almost never a good reason not to pick WordPress as a content management platform, unless you would like to develop your own custom system. Making your own custom CMS can be costly, but it will give you exactly what you need/want without much (if any) bulky code.
Search engines go through web page HTML from top to bottom and from left to right, so it is important to give the crawlers as much content as close to the top of your HTML code as possible. This way, the crawler does not have to wade through extraneous code to parse out the content which it is going to use to determine what the page is about, which will ultimately impact the site’s ability to rank well in the search engine result pages.
Website & Server Performance
Even though Google harps on about high quality content, there are other aspects of your website and hosting environment which can contribute to the rankings of your website in search engines. Some of what we will cover below is easy (but can be tricky) to implement, and other issues may require revisiting the entire design and architecture of your website.
Load time refers to the amount of time it takes for your server/webhost to serve a page of your website requested by a user or search engine spider. Generally, load time should be a matter of milliseconds; if your server is not able to serve pages quickly enough, it will cause problems, both with rankings and with bounce rate (people leaving a site after looking at only one page – in this case, assuming they even wait long enough for your page to load). So it is important to test how quickly your hosting service is able to serve up your web pages. Keep in mind that the problem may not be your host but the way your site is set up. You can use a multitude of tools to check the speed with which the pages of your site load, or utilize the Google Webmaster Tools to find out how quickly Google is able to load your website’s pages. You will find this information at Webmaster Tools > Crawl > Crawl Stats. You can also use Google’s PageSpeed Insights developer tool to pinpoint some of the issues which may be causing the slowness of your site.
Once you have checked to see how quickly your web pages load, you will have to use that information to figure out your next step. If the pages are loading in a reasonable amount of time (less than one second per page), then you do not have much to do, unless you would like to improve your site’s speed even more. However, if your pages are taking a few seconds to load, then you certainly have some work to do to get the load time down to below one second per page. Here are some of the things you can do to improve load time:
- Minimize HTTP requests
- Optimize your images so you are not using unnecessarily large image files without gaining in visual quality
- Use server side caching & Gzip to reduce render time and file size
- Reduce 301 redirects
- If possible, use a content delivery network
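To see why the Gzip suggestion above helps, the short Python sketch below compresses a repetitive, stand-in HTML snippet and compares sizes. The markup and the resulting figures are illustrative only, not measurements of any real site.

```python
import gzip

# Stand-in page: repeated markup, as product listings often are.
html = ("<div class='product'><h2>Widget</h2><p>A fine widget.</p></div>" * 200).encode()

compressed = gzip.compress(html)
print(f"original:   {len(html)} bytes")
print(f"compressed: {len(compressed)} bytes")
print(f"ratio:      {len(compressed) / len(html):.1%}")
```

Highly repetitive HTML compresses very well, which is why servers typically Gzip responses before sending them over the wire.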
The robots.txt file, a creation of the Robots Exclusion Protocol, is stored in a website’s root directory (e.g., example.com/robots.txt) and is used to provide crawl instructions to automated web crawlers (including search engine spiders) which visit your website.
The robots.txt file is used by webmasters to instruct crawlers which parts of their site they would like to disallow from crawls. Webmasters can also set crawl-delay parameters and point out the location of the sitemap file(s).
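A typical robots.txt combining these directives might look like the following; the disallowed paths and the sitemap URL are hypothetical examples.

```text
User-agent: *
Disallow: /admin/
Disallow: /search/
Crawl-delay: 10

Sitemap: http://www.example.com/sitemap.xml
```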
Note: Not all web spiders follow robots.txt directions. Malicious bots can be programmed to ignore directions from the robots.txt file.
Users are directed to a 404 page when they have tried to reach a page that no longer exists, often because they have either clicked a broken link or mistyped the URL.
By default, a 404 page is just an empty page (save for a small notice stating that the page that has been requested could not be found), which usually means that the user (and search engine spiders) will simply turn back and leave the site, and this is certainly not something you want happening. To solve this issue, and to avoid 301 redirecting all error pages to your home page or some other destination on your website, you can utilize a custom 404 page which gives users (and search engine spiders) links and navigational elements to follow into your site. A custom 404 page could simply be a smaller version of your website’s sitemap which directs the user or search engine spider to the main sections of your site.
Of course, not every 404 error should be redirected somewhere, and that is where a custom 404 page comes into play. The default 404 page is shown automatically, but creating a customized 404 page is highly recommended.
Custom 404 Error Page
A 404 error is what gets reported to browsers or crawlers when a URL (webpage) that has been requested does not exist. Having too many 404 errors can be detrimental to organic search engine rankings. Depending on the URL which is producing the 404 error, it might be helpful to 301 redirect the non-existent URL to an appropriate location. For example, if someone has linked to your widgets page (www.example.com/widget.html), but has misspelled the URL (www.example.com/wigdet.html), and that link is sending you a lot of clicks, then it would be a good idea to 301 redirect …/wigdet.html to …/widget.html, so that you are not serving an unnecessary number of 404 pages, and can also capture some of the link value from the link.
When a user manages to stumble across a 404 error, the default 404 page does very little to help guide the user along to a useful section on the website–the only available option is to click the back button. Without a place to move forward, or even a link to the homepage, there is no way to help users find what they are looking for.
A default 404 page is equally unhelpful for search engine spiders. These spiders follow a specific formula while indexing websites: they follow every available link, effectively creating a web or network of information. When a spider runs into a default 404 page, like its human counterparts, it is unable to move forward. Unlike human users, however, spiders do not perform actions like hitting the back button. When a spider ends up on such a page, it is unable to proceed with indexing your website. A custom 404 page gives spiders something to follow even if they run into an error.
When creating and designing a customized 404 page there are a number of important things to keep in mind. The most important factor of an error page is to clearly inform a user that the page they were looking for has not been found, and apologize.
Offer a way for users to get back on track, provide a way to contact the webmaster about the problem, and include a link to the homepage. Remember to use meaningful anchor text; simply writing “Home” is not recommended. Also consider including additional links to popular pages or articles on the website. Finally, remember to ensure that your custom 404 error page returns a 404 status code to search engines to prevent them from indexing it.
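As a sketch of these recommendations, the Python function below assembles a helpful 404 body with an apology, links to popular sections, and a homepage link. The link targets are placeholders, and the page must still be served with a real 404 status code rather than a 200.

```python
def custom_404_page(popular_links):
    """Build a helpful 404 body: an apology, a short sitemap-style list of
    popular sections, and a link home. Link targets are placeholders."""
    items = "\n".join(
        f'    <li><a href="{url}">{text}</a></li>' for text, url in popular_links)
    return f"""<h1>Sorry, we could not find that page.</h1>
<p>The page you requested does not exist. Try one of these sections:</p>
<ul>
{items}
</ul>
<p><a href="/">Return to our homepage</a></p>"""

body = custom_404_page([("Widgets", "/widgets/"),
                        ("Contact the webmaster", "/contact/")])
print(body)
# Note: serve this body alongside an HTTP 404 status code, not a 200.
```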
A redirect is the action which forwards one URL to another. For example, if a particular page on your site has had a file name change (e.g., from www.example.com/widgets.html to www.example.com/affordable-widgets.html), then it would be necessary to redirect the old URL to the new URL. There are two types of redirects which we will cover here–the 301 redirect, and the 302 redirect.
A 302 redirect is a temporary redirect which passes no link value, and is something to be avoided for SEO purposes. This redirect tells the search engine spiders that the page that was requested has moved temporarily, but will be made available again in the future; for this reason, the search engine spiders/algorithm do not allow link value to flow through a 302 redirect.
A 301 redirect is a permanent redirect which passes most of the link value from the old to the new page, and is the recommended method of redirecting old/changed URLs to their new destination.
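A minimal sketch of the idea, assuming a hypothetical redirect map maintained in application code (real sites would usually configure redirects at the server level, e.g., in .htaccess or the web server configuration):

```python
# Hypothetical map of old URLs to their new, permanent locations.
PERMANENT_REDIRECTS = {
    "/widgets.html": "/affordable-widgets.html",
}

def respond(path):
    """Return (status, location) for a request path: 301 for moved pages,
    200 (serve the page itself) otherwise."""
    if path in PERMANENT_REDIRECTS:
        return 301, PERMANENT_REDIRECTS[path]
    return 200, path

print(respond("/widgets.html"))   # → (301, '/affordable-widgets.html')
print(respond("/about.html"))     # → (200, '/about.html')
```

Because the 301 status is what signals permanence to search engines, using it (rather than a 302) is what allows link value to flow to the new URL.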
Tracking & Analytics
For website owners who want to track traffic to and on their site, search engines such as Google and Bing provide programs to enable owners to do just that. Google Analytics, Google Webmaster Tools, and Bing Webmaster Tools all provide visual data and activity reporting to give site owners useful information on how to develop or improve their website.
Google Analytics is a website traffic analytics tool which allows site owners to track and analyze traffic and user behavior on their website. This is an essential part of understanding a site in relation to SEO, by finding out what is working and what is not.
Google Analytics (and other similar analytics tools) provide highly detailed information which includes number of visitors, what people are doing on the site (e.g., the pages which they visit, how long they stay on the site, how they get to the site, etc.), giving webmasters the understanding and insight they need to make effective decisions about how to improve their site, both for users and search engines. More information can be found on the Intro to Google Analytics Help Page.
Google Webmaster Tools
Google Webmaster Tools provides site owners and marketing agencies with valuable and actionable information to help improve the functionality of websites while at the same time improving the chances of better organic search engine rankings. According to Google, the purpose of Webmaster Tools is to “help provide the data, tools, and diagnostics for a healthy, Google-friendly site.”
Bing Webmaster Tools
Bing Webmaster Tools provides almost identical functionality to Google Webmaster Tools.
Miscellaneous SEO Issues & Recommendations
There are several further aspects to consider when wondering what might be hampering your rankings on search engines, or how to improve SEO. The age of the domain, when the domain is renewed, and even the domain length and structure are all taken into consideration by search engines. Though these aspects have only a minimal effect on rankings, they are important recommendations to be aware of nonetheless.
Age of Domain
This refers to the length of time that a domain has existed, taking into consideration the age of the website and how long the domain has been registered and indexed by Google.
Domain age does come into play in determining Google rankings, and the older the domain, the more advantage it receives. This advantage does not necessarily last a long time, but the material point is that a brand-new domain (all else being equal) will not rank as highly as one that has existed for a year or more.
Factoring in expiration dates is a potential way to strengthen rankings on Google. If you renew your domain one year at a time, or renew only when the domain is about to expire, it communicates a signal to Google that the site is not a priority. This kind of information could potentially weaken rankings, so it is not advisable.
By registering a domain for a number of years at once, or at least renewing well in advance of the expiration date (i.e., 12-24 months before expiration), you indicate to search engines that you are committed to sustaining the site’s activity and development for a substantial amount of time.
Domain Length & Structure
Short domain names and URLs are preferable over longer ones, both for usability and SEO purposes. As an example, it would be more advantageous to have the domain www.example.com as opposed to www.theexamplewidget.com.
Also, exact keyword match domains can offer great benefits. For example, if a website is about bloodhounds, it would be best to have the domain www.bloodhounds.com. This would benefit the site in three ways. First, it will make remembering the domain very easy for visitors who want to come back. Second, the domain will enjoy a fair amount of type-in traffic. Third, when linking to the website, others tend to use the URL or the name of the site in the link text, and having an exact match keyword be the URL of a website certainly won’t hurt.
Having an exact keyword match domain used to be a significant factor in rankings; however, nowadays it only offers indirect ranking benefits (as noted above).
It is also important to remember, when focusing on the structure of your website, that value within a site is based on content and incoming links. URLs do play a role, but the strength of the links to, and content on, your site will be what drives your potential to rank higher on search engines.
Contact us now for search engine optimization advice.