Crawl Summary Data
The summary section of the Summary tab in our SEO software provides a basic overview of the website crawl data that we collect and analyse. It gives a high-level breakdown of the crawl data; other sections within the reporting page look at different segments of SEO data in more depth.
The explainer video below shows what this section covers in terms of data and provides instructions on the functionality available within this part of the tool.
Visual Error Signalling
The colour-coded thermometers on the left-hand side of each metric indicate whether something is an issue, a potential issue, or all good. You can use these as a visual aid to help draw your attention to where it is most needed.
The screenshot below shows where these appear and how they can draw your attention to where it's needed. For example, the green thermometers indicate that there is no issue at all, whereas the blue ones indicate that there may be an issue.
In the case of the ‘non-indexable URLs’, these may be intentional and perfectly fine, but the blue indicator lets you know you might want to analyse that data a little further.
Comparing Crawl Dates
One of the features available within this tab is the ability to select crawl dates. You can see in the screenshot below that there are two drop-down menus; select a crawl date from each and the data in the table will update to show the data for those crawls.
This feature allows you to compare the summary data within any section of this page. In the context of the crawl data summary, for example, you can see how many URLs have changed between crawls. The screenshot below shows how you can compare these two data sets; the far-right column shows the difference between the data in the first two columns.
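As a rough illustration of that comparison (using hypothetical figures, not real crawl data), the difference column is simply the count from one crawl minus the count from the other:

```python
# Hypothetical crawl summary counts for two crawl dates.
# Metric names and figures are illustrative only, not real product data.
crawl_a = {"URLs Crawled": 1200, "Indexable URLs": 950, "Status 200 Pages": 1100}
crawl_b = {"URLs Crawled": 1350, "Indexable URLs": 990, "Status 200 Pages": 1230}

# The far-right "difference" column is the newer count minus the older one.
difference = {metric: crawl_b[metric] - crawl_a[metric] for metric in crawl_a}

for metric, delta in difference.items():
    print(f"{metric}: {delta:+d}")  # e.g. "URLs Crawled: +150"
```

A positive difference means the newer crawl found more of that metric; a negative one means fewer.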
The crawl summary section has the following data contained within it; we cover each of these in other guides in much more detail:
No. of URLs Crawled
This shows the number of URLs that were crawled by our web crawler during the latest and previous crawls. Under certain conditions this number can differ from the number of URLs found. If a crawl was paused or interrupted, or the URL limit for your billing cycle was reached, fewer URLs may be crawled than found.
Various settings can also prevent URLs from being crawled once identified, but typically these numbers will match.
HTML Pages
These are standard web pages; we segment these out so that you can see how many of the URLs crawled are actual web pages.
Non-HTML URLs
These can include images, CSS files and a range of other file types.
Indexable URLs
These are URLs that do not have a noindex tag (if they are an HTML page) and are not disallowed by the robots.txt file.
Non-Indexable URLs
These URLs are not indexable by Google and will not appear in the search results.
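The two indexability checks described above can be sketched with Python's standard library. This is a simplified illustration, not how our crawler is implemented, and the robots.txt rules and HTML snippet are made up for the example:

```python
import urllib.robotparser

# Hypothetical robots.txt content, for illustration only.
robots_txt = """User-agent: *
Disallow: /private/
"""

# Hypothetical page HTML carrying a robots noindex directive.
page_html = '<html><head><meta name="robots" content="noindex"></head><body></body></html>'

# Check 1: is the URL disallowed by robots.txt?
parser = urllib.robotparser.RobotFileParser()
parser.parse(robots_txt.splitlines())
allowed = parser.can_fetch("*", "https://example.com/private/page")

# Check 2: does the HTML page carry a robots noindex directive?
# (A real crawler would parse the HTML properly; a substring test keeps this short.)
has_noindex = 'name="robots"' in page_html and "noindex" in page_html

# A URL counts as indexable only if it passes both checks.
indexable = allowed and not has_noindex
print(indexable)  # False here: the URL is disallowed and the page has noindex
```

In this example the URL fails both checks, so it would be counted among the non-indexable URLs.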
Status 200 Pages
These are URLs or web pages that are accessible and present no issues for either humans or robots accessing them.
Non-Status 200 Pages
These pages are not accessible, and this may be due to them being broken or redirected.
We love data, and so we love talking about it, but if you are unsure about anything we show or discuss here, please get in touch at email@example.com