
27.04.2018

2 min read

BrightonSEO 2018: Fili Wiese – Optimising for SearchBot

This article was updated on: 07.02.2022

A core focus for technical SEO is how well Google can crawl our website, and whether it can access everything it should. As we constantly evaluate our site’s performance, we are trying to influence and control an output (SERP visibility). We can’t do this unless we control the input, and in this case the input is how our websites are built and structured.

So, how do we analyse how well Google (Searchbot) can crawl our site, and how well it understands it? Whilst there are now hundreds of tools and pieces of software that can be used, Search Console still remains the best tool for truly accurate analysis, said Fili Wiese in his BrightonSEO talk.

How Google crawls the web

We know that Googlebot is extremely conservative: it does not want to crash your website, so as soon as its crawling starts to slow the site down, it will exit. Our objective is to maintain a website that is as efficient as possible to crawl, so that as much of the site as possible is crawled as regularly as possible.
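
As a rough, illustrative check (not something from the talk), the sketch below requests a handful of key URLs and reports their status codes and response times; consistently slow or erroring pages are the ones most likely to make Googlebot throttle its crawl. The URLs and the use of the requests library are assumptions for the example.

    import requests

    # Placeholder list of key URLs to sanity-check for crawl health.
    PAGES = [
        "https://www.example.com/",
        "https://www.example.com/category/",
        "https://www.example.com/category/product/",
    ]

    for url in PAGES:
        resp = requests.get(url, headers={"User-Agent": "crawl-health-check"}, timeout=10)
        # Slow responses or non-200 status codes are signals worth investigating.
        print(f"{url} -> {resp.status_code} in {resp.elapsed.total_seconds():.2f}s")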

Fili referred to Lighthouse as a Google tool that lets you audit a site and understand its performance and inefficiencies. He said Google can crawl JavaScript, but it’s still not great at it. The future looks promising, but we can’t depend on it.
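
Lighthouse can be run from the command line as well as from Chrome DevTools. As a minimal sketch, assuming the Lighthouse CLI is installed (npm install -g lighthouse), it can be driven from Python like this; the URL and output path are placeholders.

    import json
    import subprocess

    url = "https://www.example.com/"  # placeholder

    # Shell out to the Lighthouse CLI and write the full report as JSON.
    subprocess.run(
        ["lighthouse", url, "--output", "json",
         "--output-path", "lighthouse-report.json", "--quiet"],
        check=True,
    )

    # The report exposes a 0-1 score per category; read the performance score.
    with open("lighthouse-report.json") as f:
        report = json.load(f)
    print("Performance score:", report["categories"]["performance"]["score"])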

Crawlers look to understand pages in this order:

  • HTML
  • HTML & CSS
  • HTML & CSS & JavaScript

Google starts its crawl from the top down, but then it will move to a random crawl.
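
To make that layering concrete, the sketch below (an illustration, not Fili’s code) fetches a page’s raw HTML and counts the links found in it; anything injected client-side by JavaScript would be invisible to this HTML-only view, which is roughly what the first pass of a crawl sees. The URL is a placeholder.

    from html.parser import HTMLParser
    from urllib.request import urlopen

    class LinkCollector(HTMLParser):
        """Collects href values from <a> tags in raw (unrendered) HTML."""

        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                href = dict(attrs).get("href")
                if href:
                    self.links.append(href)

    html = urlopen("https://www.example.com/").read().decode("utf-8", errors="replace")
    collector = LinkCollector()
    collector.feed(html)
    print(f"{len(collector.links)} links visible without rendering JavaScript")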

Links play a key part in Google’s crawl process. Why does Google like links?

  • They help with crawl budget, suggesting which pages need crawling to discover new content
  • An internal linking structure is also important for crawl efficiency, helping steer the bot to key pages throughout your site
  • Breadcrumbs allow Google to crawl more efficiently (see the sketch after this list)
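
One common way to make breadcrumbs explicit to search engines is BreadcrumbList structured data (schema.org). The sketch below builds a minimal example as a Python dict and serialises it to JSON-LD for embedding in a script tag; the page names and URLs are placeholders.

    import json

    breadcrumbs = {
        "@context": "https://schema.org",
        "@type": "BreadcrumbList",
        "itemListElement": [
            {"@type": "ListItem", "position": 1, "name": "Home",
             "item": "https://www.example.com/"},
            {"@type": "ListItem", "position": 2, "name": "Category",
             "item": "https://www.example.com/category/"},
            {"@type": "ListItem", "position": 3, "name": "Product",
             "item": "https://www.example.com/category/product/"},
        ],
    }

    # Output ready to drop into a <script type="application/ld+json"> block.
    print(json.dumps(breadcrumbs, indent=2))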

Over the next few months, Chrome will be moving fully to HTTPS. If you are going through a migration, it is recommended that you improve your current sitemap, paying attention to internal linking and hierarchical structure.
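
As a minimal sketch of the sitemap side of this, assuming the HTTPS URLs are already known, Python’s standard library can generate a sitemaps.org-compliant file; the URL list is a placeholder.

    import xml.etree.ElementTree as ET

    # Placeholder HTTPS URLs; in practice these would come from your CMS or a crawl.
    urls = [
        "https://www.example.com/",
        "https://www.example.com/category/",
        "https://www.example.com/category/product/",
    ]

    urlset = ET.Element("urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
    for url in urls:
        entry = ET.SubElement(urlset, "url")
        ET.SubElement(entry, "loc").text = url

    ET.ElementTree(urlset).write("sitemap.xml", encoding="utf-8", xml_declaration=True)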

Mobile first

Google has a mobile-specific crawler. With the introduction of AMP, do we want to maintain two code bases (responsive and AMP), or should we canonicalise to one? The choice is ours, but it is a consideration that needs to be made.
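
If both versions are kept, the usual convention is for the responsive page to reference the AMP version with rel="amphtml" and for the AMP page to point back with rel="canonical". The sketch below (illustrative only, with placeholder URLs) fetches both pages and reports those link relations.

    from html.parser import HTMLParser
    from urllib.request import urlopen

    class RelLinkFinder(HTMLParser):
        """Records href values of <link> tags keyed by their rel attribute."""

        def __init__(self):
            super().__init__()
            self.rels = {}

        def handle_starttag(self, tag, attrs):
            if tag == "link":
                attrs = dict(attrs)
                if "rel" in attrs and "href" in attrs:
                    self.rels[attrs["rel"]] = attrs["href"]

    def rel_links(url):
        finder = RelLinkFinder()
        finder.feed(urlopen(url).read().decode("utf-8", errors="replace"))
        return finder.rels

    responsive = rel_links("https://www.example.com/article/")   # placeholder
    amp = rel_links("https://www.example.com/article/amp/")      # placeholder

    print("Responsive page points to AMP version:", responsive.get("amphtml"))
    print("AMP page canonicalises to:", amp.get("canonical"))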

Key Takeaways

  • Mobile first is key
  • Optimise, optimise, optimise
  • Audit regularly