By Farzin Andrew Espahani | Solution21 | VP of Growth and Marketing

14 Technical SEO Problems & How to Fix Them

When it comes to website design and marketing, technical SEO is one of the most overlooked aspects. SEO, or search engine optimization, is one of the lowest-effort, highest-reward projects that businesses in any industry should be paying close attention to. Tackling SEO concerns can dramatically improve traffic to your site and, in turn, convert those visitors into paying customers for your products or services.

Most issues related to SEO are easy to identify and remarkably straightforward to fix. A simple problem can be caught in an afternoon and, once fixed, stop an issue that has been diverting traffic away from the site for a month or more. There are many aspects of SEO that you can examine on your own website and quickly fix to make a noticeable difference in your web presence.

But first, it is essential to understand how technical SEO works. SEO discussions tend to focus on a site's content and how it helps rankings, but technical SEO covers aspects of a website that are harder for the average marketer to uncover. These tend to be site-wide issues rather than page-level issues, so fixing them can improve the entire site and not just certain pages. These concerns are not common knowledge, yet such simple oversights can negatively impact the traffic reaching your website.

Today we will analyze 14 specific technical SEO problems that affect many websites and show you how to uncover and address them. This will be a valuable read for those with beginner or intermediate knowledge of SEO. While it is not a complete technical SEO audit checklist, it summarizes the most common and most damaging technical SEO concerns that can be fixed immediately. Looking at these issues right now can greatly improve your web presence and save thousands of dollars in services or lost sales. Let's get started!

  1. Indexation

If you've ever noticed that you don't rank in common search engines for your own brand name, you're not alone. This comes down to indexing and ranking. To see how your site is performing, run a "site:" search in Google. For example, if your website is goodsmellingshampoo.com, type site:goodsmellingshampoo.com into the Google search bar to see how many of your site's actual pages are indexed. Are the results what you expected? Are pages indexed that you don't want indexed, or are pages missing from the index that you specifically want to rank? If an issue is arising, here are a few things to do:

  • Dig a little deeper and start checking specific pages on your site, such as blog posts and product pages.
  • Run down your list of subdomains to see whether they are being indexed.
  • If you have older versions of your site, make sure they are being redirected and not indexed.
  • Review the results for any indication of hacking or spam on your website.
  • Determine what is causing the indexing concerns (a quick triage script follows this list).
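If you would rather script that last check than click through search results one page at a time, a quick pass over your most important URLs can surface obvious indexing blockers such as error status codes or a noindex header. The sketch below is a minimal example, assuming Python with the requests library installed; the URL list is a hypothetical placeholder you would replace with your own key pages and subdomain homepages.

    import requests

    # Hypothetical list of key pages and subdomain homepages to triage.
    URLS = [
        "https://goodsmellingshampoo.com/",
        "https://goodsmellingshampoo.com/blog/",
        "https://shop.goodsmellingshampoo.com/",
    ]

    for url in URLS:
        resp = requests.get(url, timeout=10)
        robots_header = resp.headers.get("X-Robots-Tag", "")
        problems = []
        if resp.status_code != 200:
            problems.append(f"returned status {resp.status_code}")
        if "noindex" in robots_header.lower():
            problems.append(f"X-Robots-Tag header says: {robots_header}")
        print(url, "->", "; ".join(problems) if problems else "no obvious indexing blocker")

Anything flagged here is worth raising with your developer before digging deeper into Search Console.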

  2. Robots.txt

Not everyone realizes the importance of the robots.txt file. This one file can single-handedly destroy your website's organic traffic, and it should always be reviewed after a website redevelopment. Many SEO companies find entire sites blocked from indexing because of a bad robots.txt file. The first step in determining whether your robots.txt file is faulty is to visit it directly in your browser, for example: goodsmellingshampoo.com/robots.txt. If you get a result that says "User-agent: * Disallow: /" then there's a problem. Talk to your website developer to see whether there is a specific reason the site is set up that way. If you have a more complete robots.txt file, consider reviewing it line by line with your website developer to ensure it is functioning at its best.
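If you want to automate that spot check, the short sketch below fetches the file and flags a blanket disallow rule. This is a minimal example, assuming Python with the requests library installed; swap the placeholder domain for your own.

    import requests

    # Placeholder domain from the example above; replace with your own site.
    resp = requests.get("https://goodsmellingshampoo.com/robots.txt", timeout=10)

    blanket_block = any(
        line.strip().lower() == "disallow: /"  # blocks every path for the user-agents it applies to
        for line in resp.text.splitlines()
    )

    if blanket_block:
        print("Found a 'Disallow: /' rule; ask your developer whether this is intentional.")
    else:
        print("No blanket disallow rule found.")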

  3. NOINDEX

A close second in terms of potential damage is the meta robots NOINDEX tag. NOINDEX removes every page that carries it from search engine indexes, and it is often applied site-wide while a website is being developed. A quality developer will ensure it is removed when your site goes live, but it doesn't hurt to do a little manual research of your own. To do this, view the source code of a specific page on your website and look for a line such as <META NAME="ROBOTS" CONTENT="NOINDEX, NOFOLLOW">. If you see this, take action. If you see "INDEX, FOLLOW" or nothing at all, you're okay. A quick and effective way to check at scale is with a tool such as Screaming Frog, which can scan all of your website's pages at once for NOINDEX. Another option is to regularly crawl your site with an SEO auditing tool such as Moz Pro Site Crawl.
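If you would rather script the source-code check than open each page by hand, the sketch below searches a page's HTML for a noindex directive in the robots meta tag. It is a minimal example, assuming Python with the requests library installed; the page list is a hypothetical placeholder.

    import re
    import requests

    # Hypothetical pages to spot-check; replace with your own important URLs.
    PAGES = [
        "https://goodsmellingshampoo.com/",
        "https://goodsmellingshampoo.com/products/",
    ]

    # Matches tags written in the common form <meta name="robots" content="...">.
    META_ROBOTS = re.compile(
        r'<meta[^>]+name=["\']robots["\'][^>]+content=["\']([^"\']+)["\']',
        re.IGNORECASE,
    )

    for url in PAGES:
        html = requests.get(url, timeout=10).text
        match = META_ROBOTS.search(html)
        directive = match.group(1) if match else "(no robots meta tag found)"
        flag = "  <-- take action" if match and "noindex" in directive.lower() else ""
        print(f"{url}: {directive}{flag}")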

  4. URL Canonicalization

The average person isn't too concerned with how their website address shows up, whether it appears as goodsmellingshampoo.com, http://www.goodsmellingshampoo.com, or goodsmellingshampoo.com/home.html. Search engines, however, are, and a poor configuration can make it harder for your website to have a presence in search results. Sometimes Google will index several different versions of the same page, which causes complex issues and general confusion in the search results. When Google indexes multiple versions, several URLs end up serving the same content, which ultimately splits page popularity between them. How can you find out if this is happening to your website?

  • Enter different versions of a page's URL manually to see whether they all resolve to the same URL.
  • Look for HTTP and HTTPS versions – only one should be available.
  • If either of the above issues is present, ask your website developer to set up 301 redirects to resolve them.
  • Use "site:" in the Google search bar to find out which versions of your pages are indexed.

A great way to monitor these issues is to scan the entire website at once with a tool such as Screaming Frog, and to schedule a weekly or monthly canonicalization check so you catch any changes.
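To run those manual checks in bulk, you can request each common variant of your homepage and see where it ends up after redirects. The sketch below is a minimal example, assuming Python with the requests library installed; the variants listed are placeholders built on the article's example domain.

    import requests

    # Hypothetical variants that should all resolve to one canonical address.
    VARIANTS = [
        "http://goodsmellingshampoo.com/",
        "http://www.goodsmellingshampoo.com/",
        "https://goodsmellingshampoo.com/",
        "https://www.goodsmellingshampoo.com/",
        "https://goodsmellingshampoo.com/home.html",
    ]

    final_urls = set()
    for url in VARIANTS:
        resp = requests.get(url, allow_redirects=True, timeout=10)
        final_urls.add(resp.url)
        print(f"{url} -> {resp.url} ({resp.status_code})")

    if len(final_urls) == 1:
        print("All variants converge on one URL.")
    else:
        print("Multiple final URLs found; ask your developer about 301 redirects.")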

  5. Rel-canonical

The rel=canonical tag, while related to the canonicalization discussed above, is a separate mechanism that is just as important to monitor. The tag prevents duplicate-content problems and is commonly used on ecommerce websites to manage filters and categories that generate many near-identical pages. A good example of rel=canonical in action is Shopify and the way the platform manages product URLs that appear under multiple categories: it handles rel=canonical properly by default across all Shopify stores, allowing easier product navigation without creating duplicate pages. You will want to take a look at your own site and see how the rel=canonical tag is working for or against you.

  • Check specific pages on your site to see whether they are using the rel=canonical tag appropriately.
  • Use a site-scanning tool to list all of the URLs on your website and spot duplicate-page issues that could be resolved easily with a rel=canonical tag (a small check script follows this list).
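As a starting point for the first check in the list above, the sketch below pulls the rel=canonical tag out of a page's HTML so you can compare it against the URL you expect. It is a minimal example, assuming Python with the requests library installed; the product URL is a hypothetical placeholder.

    from html.parser import HTMLParser
    import requests

    class CanonicalFinder(HTMLParser):
        """Collects the href of every <link rel="canonical"> tag on a page."""
        def __init__(self):
            super().__init__()
            self.canonicals = []

        def handle_starttag(self, tag, attrs):
            attrs = dict(attrs)
            if tag == "link" and (attrs.get("rel") or "").lower() == "canonical":
                self.canonicals.append(attrs.get("href"))

    # Hypothetical product page; replace with a filtered or category URL from your own site.
    url = "https://goodsmellingshampoo.com/collections/shampoo/products/lavender"
    finder = CanonicalFinder()
    finder.feed(requests.get(url, timeout=10).text)

    print("Canonical URL(s) declared:", finder.canonicals or "none found")

A page with no canonical tag, or with more than one, is worth a closer look with your developer.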

  6. Text used in images

Text in images may seem like a simple concept, but important content is easily hidden inside images. Google can sometimes read text within images, but the technology is not as sophisticated as we would have expected by 2017. SEO best practice is therefore not to place important text in images. You can confirm this yourself by searching Google for text that appears only inside images on a page; the search is often unsuccessful. Heading tags such as H1 are still the standard way to make sure key text on your pages is seen and ranked by Google and other search engines. We recommend that you manually inspect your most important pages and use an SEO site crawler to review the H1 and H2 tags across the website. Once this has been done, brief your developers and content managers on best practices for text in images, and where necessary rework pages so that text is rendered with CSS rather than baked into image overlays.

  7. Broken backlinks

Link equity is incredibly important when it comes to managing a website. When a website goes through a relaunch or a complete migration, an SEO professional should be able to spot broken backlinks pointing at the previous website and address them quickly so they do not negatively impact the site's search engine results.

After a migration, some important pages may suddenly return 404 errors, which means any backlinks pointing to those pages are now broken. By using tools such as Google Search Console, Majestic, Ahrefs, or Moz, website owners can quickly find broken backlinks. Search Console shows you the top 404 errors on your website, and a backlink checker will tell you which of those dead pages still have links pointing at them.

The first step is to identify backlinked pages that appear dead or broken and work with your developer to 301 redirect them to the most relevant live pages. Another source of recovered link equity is broken links caused by the URL being mangled in the linking site's code; sometimes the problem is nothing more than an incorrectly typed URL. A great way to stay on top of these issues is to set up a recurring website crawl that checks for new broken backlinks, or to use tools like Google Alerts or Mention for ongoing monitoring.
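Once you have exported the backlinked URLs from Search Console or a backlink tool, a short script can tell you which targets are dead before you plan the redirects. The sketch below is a minimal example, assuming Python with the requests library installed and a hypothetical backlinked_urls.csv file containing one URL per line.

    import csv
    import requests

    # Hypothetical export: one backlinked URL per line in backlinked_urls.csv.
    with open("backlinked_urls.csv", newline="") as f:
        urls = [row[0] for row in csv.reader(f) if row]

    for url in urls:
        try:
            status = requests.head(url, allow_redirects=True, timeout=10).status_code
        except requests.RequestException as exc:
            status = f"request failed ({exc})"
        if status != 200:
            # These are candidates for a 301 redirect to the most relevant live page.
            print(f"{url}: {status}")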

  8. HTTPS

HTTPS, or a "secure website," is becoming more desirable and is effectively a necessity for many ecommerce websites. Google announced that beginning in January 2017, Chrome would mark ordinary HTTP pages that accept credit cards or passwords as "not secure." That label can greatly impact traffic arriving from search engines, and Google has also indicated that HTTPS sites receive an algorithmic ranking benefit over traditional HTTP. This issue overlaps with IT, conversion rate optimization, and web development, but it still requires the website owner to take action. Contact your developer to confirm that your site is served over HTTPS, and if it is not, work with your team on an SEO-aware migration to HTTPS to start the transition.

  9. 301 and 302 redirects

When it comes to managing dead pages and broken backlinks, redirects can be an amazing tool. A 301 redirect is a permanent redirect and a 302 redirect is a temporary one. It is considered best practice to use a 301 when a webpage needs to be redirected permanently. 301 redirects should not be used for every 404 error, and they should not always be used in place of the rel=canonical tag. It is also recommended that website owners not redirect every URL from a previous site to the new website's homepage; that shortcut is a bad idea and does not benefit a company's SEO efforts.

Used correctly, 301 and 302 redirects can be a godsend, so it is essential that someone on the development team knows how to apply them properly. Misused redirects can hurt more than they help and can drag down an ecommerce site's revenue for months. 301 redirects are the gold standard and should be used in far more situations than 302 redirects. To fix these concerns, review the redirects on your site and change any 302s that are meant to be permanent into 301s. If an excessive number of 302 redirects is in use, ask your development team whether there is a specific reason. Encourage the team to run regular site scans to check for redirect conflicts.
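To see exactly which kind of redirect a URL returns, you can walk its redirect chain and print the status code of every hop. The sketch below is a minimal example, assuming Python with the requests library installed; the old URLs listed are hypothetical placeholders.

    import requests

    # Hypothetical redirecting URLs to audit; replace with ones from your own site.
    URLS = [
        "http://goodsmellingshampoo.com/old-product-page",
        "http://goodsmellingshampoo.com/holiday-sale",
    ]

    for url in URLS:
        resp = requests.get(url, allow_redirects=True, timeout=10)
        # resp.history holds each intermediate redirect response, in order.
        for hop in resp.history:
            label = "permanent" if hop.status_code == 301 else "temporary or other"
            print(f"{hop.url} [{hop.status_code}, {label}] -> {hop.headers.get('Location')}")
        print(f"Final destination: {resp.url} ({resp.status_code})")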

  10. Meta Refresh

Meta refreshes are a thing of the past and are not recommended on a modern website. It is better to remove them and use a 301 redirect instead; meta refresh is not recommended by any professional SEO company, or by Google for that matter. It is easy to find in source code: just look for a line such as <meta http-equiv="refresh"> on your pages. Use a site crawler like Screaming Frog to determine whether this tag appears anywhere on your site, and if it does, speak to your development team about switching to 301 redirects unless there is a good reason to keep the meta refreshes.

  11. XML Sitemaps

Search engine spiders use the XML sitemap as a guide when crawling a website, and it has a dramatic impact on large, complex sites by giving crawlers proper direction for indexing. When a website's pages are linked properly, crawlers can discover most of them on their own; even so, an XML sitemap improves crawlability, particularly for very large sites, new sites, or sites with a large archive of content pages. Common sitemap problems include not having one at all, failing to reference the sitemap in the robots.txt file, skipping sitemaps on large sites where they matter most, and allowing multiple or outdated versions to linger. Use Google Search Console to evaluate the quality of your XML sitemap and its URLs, and if there is a concern, regularly monitor the indexation of the submitted URLs within Search Console. As a site becomes larger and more complex, work with your developer to ensure the XML sitemap keeps working to your advantage for indexing and crawling. Pay close attention to caps such as the limits set by Google, which cut off a sitemap at 50,000 URLs and 10MB.
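A quick scripted sanity check can confirm that the sitemap exists, parses as XML, and stays under the caps mentioned above. The sketch below is a minimal example, assuming Python with the requests library installed and a single sitemap at the conventional /sitemap.xml path; your site may use a different location or a sitemap index file.

    import requests
    import xml.etree.ElementTree as ET

    # Conventional location; your sitemap may live elsewhere or be a sitemap index.
    SITEMAP_URL = "https://goodsmellingshampoo.com/sitemap.xml"

    resp = requests.get(SITEMAP_URL, timeout=10)
    size_mb = len(resp.content) / (1024 * 1024)

    root = ET.fromstring(resp.content)
    # Count <url> entries regardless of the XML namespace prefix.
    url_count = sum(1 for element in root.iter() if element.tag.endswith("url"))

    print(f"Status {resp.status_code}, {size_mb:.2f} MB, {url_count} URLs")
    if url_count > 50000 or size_mb > 10:
        print("This sitemap exceeds the caps discussed above; consider splitting it.")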

  12. Page Size and Unnatural Word Counts

A common issue on some websites is an inflated word count. While the visible content on a page may be only a few hundred words, scanning tools like Screaming Frog may report thousands. Upon further inspection, it often turns out that Terms and Conditions or other boilerplate text is included in the page's code but hidden with a "display: none" style. This bloats the page's download size and may also lead to penalties, since search engines can interpret hidden text as intentional cloaking. It can also indicate broader "code bloat" issues such as excessive inline CSS and JavaScript. A development team should pay close attention to these issues and fix them as soon as possible: hidden text can quickly trip algorithmic penalties with the large search engines and drag down your rankings.
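A rough way to spot this pattern is to compare a page's raw HTML weight with what you would expect and to count how often hidden-text styling appears in the markup. The sketch below is only a heuristic, assuming Python with the requests library installed; it cannot prove cloaking, it simply flags pages worth a manual look, and the URL list is a hypothetical placeholder.

    import requests

    # Hypothetical pages to inspect; replace with pages reporting inflated word counts.
    PAGES = [
        "https://goodsmellingshampoo.com/",
        "https://goodsmellingshampoo.com/products/",
    ]

    for url in PAGES:
        html = requests.get(url, timeout=10).text.lower()
        size_kb = len(html.encode("utf-8")) / 1024
        # Count inline "display: none" declarations as a rough hidden-text signal.
        hidden_hits = html.count("display:none") + html.count("display: none")
        print(f"{url}: {size_kb:.0f} KB of HTML, {hidden_hits} display:none occurrence(s)")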

  13. Load Speed

Speed is key when it comes to page load times: a website that doesn't load within a few seconds may be abandoned quickly by a visitor. Speed is also part of Google's ranking algorithm, so a slow site can be hurting your SEO. That includes load times on mobile, where traffic keeps growing and users are often on the go when they look for information on your site. Resources for identifying and fixing speed issues are indispensable. Use auditing and SEO tools to test your site and page speeds, and work with your developer to make sure the site runs as fast as it can. Push for speed increases across the board, because every extra second can cost you potential customers.
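Dedicated tools such as Google PageSpeed Insights give far richer diagnostics, but even a crude timing loop shows how long the HTML alone takes to arrive. The sketch below is that rough proxy only, assuming Python with the requests library installed; the page list is a hypothetical placeholder, and the numbers ignore images, scripts, and rendering time.

    import time
    import requests

    # Hypothetical pages to time; replace with your key landing pages.
    PAGES = [
        "https://goodsmellingshampoo.com/",
        "https://goodsmellingshampoo.com/products/",
    ]

    for url in PAGES:
        start = time.perf_counter()
        resp = requests.get(url, timeout=30)
        elapsed = time.perf_counter() - start
        # HTML download time only; use a full page-speed tool for real audits.
        print(f"{url}: {elapsed:.2f}s for {len(resp.content) / 1024:.0f} KB of HTML")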

  14. Internal Linking Structure

The internal linking structure is essential to search spiders' ability to crawl your site effectively. It is most critical for very large sites with isolated pages that cannot be reached within a few clicks of the homepage. A simple site on a standard platform such as WordPress is less likely to have internal linking problems. It all comes down to navigation: focusing on internal links can push some websites noticeably higher in search engine rankings. To find these issues, move around your website manually, clicking through menus and in-content links to see how many clicks it takes to reach the most important pages. If the answer is too many, your site may need a more solid architecture to make its internal links effective. Consider the flow and navigation of your site as you create new pages, and reiterate to your development team the importance of linking to them well.
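One way to quantify the "how many clicks from the homepage" question is a small breadth-first crawl that records the click depth of every internal page it can reach. The sketch below is a minimal example, assuming Python with the requests library installed; the start URL and page limit are placeholders, and it only follows links on the same domain.

    from collections import deque
    from html.parser import HTMLParser
    from urllib.parse import urljoin, urlparse
    import requests

    START = "https://goodsmellingshampoo.com/"  # hypothetical homepage
    MAX_PAGES = 200                             # keep the crawl small and polite

    class LinkCollector(HTMLParser):
        """Collects href values from <a> tags on a page."""
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                href = dict(attrs).get("href")
                if href:
                    self.links.append(href)

    domain = urlparse(START).netloc
    depths = {START: 0}
    queue = deque([START])

    while queue and len(depths) < MAX_PAGES:
        url = queue.popleft()
        try:
            html = requests.get(url, timeout=10).text
        except requests.RequestException:
            continue
        collector = LinkCollector()
        collector.feed(html)
        for href in collector.links:
            absolute = urljoin(url, href).split("#")[0]
            if urlparse(absolute).netloc == domain and absolute not in depths:
                depths[absolute] = depths[url] + 1
                queue.append(absolute)

    # The deepest pages are the hardest for crawlers (and visitors) to reach.
    for page, depth in sorted(depths.items(), key=lambda item: -item[1])[:10]:
        print(f"{depth} clicks deep: {page}")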

In conclusion, it is important that website owners play an active role in monitoring their site's performance beyond what their developer handles. Technical SEO problems can be easily found and easily fixed. Never assume that your website developer is taking care of them; developers may be focused on building the website without paying close attention to all the technical details that can make or break your search engine rankings. Properly managing technical SEO can have a dramatic impact on how your site is discovered by the average internet user. We hope you use these tips and techniques to start fixing technical SEO concerns on your own site and benefit your company as a whole.

Source: Moz