URL Inspection API

Find out how the URL Inspection API can power more efficient and granular reports.

URL Inspection API analysis

For those of us who, like me, love SEO and the Google world, January 31, 2022 will be marked on the calendar: Google has finally released its URL Inspection API, which lets us SEOs design new, increasingly timely and interesting reports, both for our own internal analysis and for presenting data to clients. After reading the Google developers blog, I came across the blog of my colleague Lazarina Stoy, who not only reported the news but also built a comprehensive Google Data Studio template that pulls the data from Search Console and presents it at its best.

URL Inspection API overview

The URL Inspection API exposes a method called “index.inspect” that lets you view the status of a URL and determine whether it is “indexed” or “not indexed.” To get these results, you make a request with two parameters: the URL you want to inspect, and the URL of the property as defined in Search Console.

The structure required is as follows:

JSON representation:

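As documented in Google’s API reference, the request body takes the following shape; the URLs below are placeholders for your own page and property (for a Domain property, `siteUrl` takes the form `sc-domain:example.com`):

```json
{
  "inspectionUrl": "https://www.example.com/sample-page/",
  "siteUrl": "https://www.example.com/",
  "languageCode": "en-US"
}
```

`languageCode` is optional and only controls the language of the translated issue messages in the response.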

At the time of writing, the API can only test the version of a URL in the Google index; it is not possible to test the indexability of a live URL. Google has also set maximum quotas: 2,000 queries per day and 600 per minute.
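To stay inside those quotas when inspecting a large site, you can batch and pace your requests. A minimal Python sketch (the helper names are my own, not part of the API):

```python
# Official quotas at the time of writing (per Search Console property).
QUERIES_PER_DAY = 2000
QUERIES_PER_MINUTE = 600

def plan_batches(urls, per_day=QUERIES_PER_DAY):
    """Split a list of URLs into daily batches that respect the daily quota."""
    return [urls[i:i + per_day] for i in range(0, len(urls), per_day)]

def min_delay_seconds(per_minute=QUERIES_PER_MINUTE):
    """Minimum pause between calls to stay under the per-minute quota."""
    return 60.0 / per_minute

urls = [f"https://www.example.com/page-{i}" for i in range(4500)]
print(len(plan_batches(urls)))   # 4,500 URLs -> 3 daily batches
print(min_delay_seconds())       # 0.1 seconds between calls
```

With 4,500 URLs to check, for example, you would need three days of inspection calls, paced at no more than one call every 0.1 seconds.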

URL Inspection API and Screaming Frog

As you may already know, Screaming Frog can connect to Google Search Console via API, and this lets you process the data from this new release in a simple but very powerful way, for increasingly advanced analyses presented in an intuitive format.

Configuration > API > Google Search Console API Tab > Enable URL Inspection

From Screaming Frog to Data Studio

Once you have the data you need from the SEO Spider, you can use Lazarina Stoy’s Data Studio template (I’ve put the link at the bottom of the article) as your dashboard: open it, make a copy, and use the data exported from Screaming Frog as the source for your newly created copy.


source: lazarinastoy

*Note: the template will only work correctly if you use the English version of Screaming Frog; otherwise the column names in the exported file will differ and Google Data Studio will not return the data! I will soon translate the template for the Italian version of the SEO Spider!

Like all Data Studio templates, this one may have glitches related to the cloning of fields, filters, and filter-specific applications. To help you get a handle on any issues, I list below all the elements that make up the template so that you can fix any bugs. The custom fields included in the template are as follows:

Custom Field: Canonical Mismatch? (Y/N)

Custom Field: Page Section

Custom Field: Pages with and without clicks

Custom Field: Crawl Frequency

The filters included in the template are as follows:

  • Exclusion of pages without mobile usability errors.
  • Exclusion of pages with 0 clicks.
  • Exclusion of URLs without summary.
  • Exclusion of URLs with “null” status.
  • Pages with crawling not allowed.
  • Page retrieval failed.
  • Status code corresponding to 404.

Data on GDS

Before diving into the analysis of the data you will find in the template, I would like to make a few remarks about the data itself. Remember that the status “URL is on Google” does not mean the page actually appears in search results; actual appearance in the SERP requires that the page and its data comply with quality and security guidelines. In addition, the URL Inspection tool does not consider manual actions, content removals, or temporarily blocked URLs. So it can happen that a URL is not listed on Google while the tool still reports it as indexed on the search engine. Only an ad hoc analysis will reveal the reason for this discrepancy.
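To see where this status lives in the raw data, here is a small Python sketch that pulls the headline fields out of an `index.inspect` response. The field names (`inspectionResult`, `indexStatusResult`, `verdict`, `coverageState`, `lastCrawlTime`) come from Google’s documented response schema; the sample response itself is illustrative:

```python
def coverage_summary(inspection_response):
    """Extract the headline index status from an index.inspect response."""
    status = inspection_response["inspectionResult"]["indexStatusResult"]
    return {
        "verdict": status.get("verdict"),         # PASS / NEUTRAL / FAIL
        "coverage": status.get("coverageState"),  # e.g. "Submitted and indexed"
        "last_crawl": status.get("lastCrawlTime"),
    }

# Illustrative sample of what the API returns for an indexed page.
sample = {
    "inspectionResult": {
        "indexStatusResult": {
            "verdict": "PASS",
            "coverageState": "Submitted and indexed",
            "lastCrawlTime": "2022-01-31T12:00:00Z",
        }
    }
}

print(coverage_summary(sample)["verdict"])  # PASS
```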

GDS template section

In the “Coverage Overview” section, the following data can be observed:

  • how many pages were crawled, plus the distribution of pages by section;
  • how many pages have a coverage state according to the Inspection API, including the distribution of pages by state;
  • how many pages get clicks;
  • how many pages have mobile usability problems, and the distribution of those problems;
  • how many pages have “rich snippet” problems, and the distribution of errors;
  • a summary of the Index Status distribution (the options are: URL is on Google, URL is on Google but has problems, and URL is not on Google);
  • a summary of the User Agent distribution: is your site crawled primarily by a mobile, desktop, or unknown agent?
  • a summary of the “Page Fetch” distribution: were the pages included in the crawl successfully retrieved by Google?
  • a summary of the URL Inspection Status: were there any errors?

In the “Mobile Usability” section you can:

  • get an overview of whether your pages are mobile-friendly or have errors;
  • get an overview of the main types of errors found on the site, and how many pages are causing them;
  • filter URLs based on error type, indexability, and Google index status.

In the “Rich Snippet Issues” section you can:

  • get an overview of the main errors related to rich snippets;
  • get an overview of the number of pages that are valid for structured data;
  • see the main schema types implemented;
  • filter and explore URLs based on indexability status, clicks, and Google index status to help prioritize fixes.

In the Google Status section (page 2 of the template):

  • filter and explore URLs based on URL Inspection API status, summary, pages with/without clicks, and coverage;
  • get an overview of the relationship between indexability and status code: do all the pages you declared indexable return a 200 status code?
  • get an overview of the relationship between Google index status and clicks: do problem pages get clicks? Prioritize their optimizations.
  • get an overview of the relationship between inspection status and the error breakdown: when an inspection error is returned, what is the reason? A read timeout? A connection problem?

In the “Crawling” section you can get summary statistics on:

  • pages where crawling is not allowed;
  • pages where indexing is not allowed;
  • URLs where the page “fetch” is not allowed;
  • pages with a 404 status code.

In the same section you can also:

  • filter URLs by crawl agent (mobile, desktop, unknown);
  • filter by the last crawl date using the date range filter;
  • filter by the fetch status of the page;
  • filter by status code;
  • get an overview of crawling behavior on the site: how frequently are URLs crawled?

In the “Canonical Tags” section, you can check for discrepancies between the canonical tags you have declared and those selected by Google. You can also combine these dimensions with additional filters to uncover any error patterns.
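The mismatch check itself is simple. As a sketch, here is how the template’s “Canonical Mismatch? (Y/N)” logic can be reproduced in Python from the API’s `userCanonical` and `googleCanonical` fields (both part of `indexStatusResult`; the function name is my own):

```python
def canonical_mismatch(status):
    """Return 'Y' when Google's selected canonical differs from the declared one.

    `status` is the indexStatusResult object of an index.inspect response;
    userCanonical is the canonical you declared, googleCanonical the one
    Google actually selected.
    """
    user = status.get("userCanonical")
    google = status.get("googleCanonical")
    if not user or not google:
        return "N"  # cannot compare when either side is missing
    return "Y" if user != google else "N"

print(canonical_mismatch({
    "userCanonical": "https://www.example.com/a/",
    "googleCanonical": "https://www.example.com/b/",
}))  # Y
```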

Practical Uses – Lazarina Stoy Template

We conclude with some practical cases in which to use the template.

For which pages does Google not use the canonicals you listed on the website?

Go to the second page of the template, in the “Canonical Tags” section, and filter the table by “Canonical Mismatch? (Y/N)” using the “YES” option. You can immediately review all the inconsistencies.

Which indexed pages that get clicks have not been crawled by the bot recently?

Go to page 2 and, in the Google Status section, set the Pages with/without clicks filter to show only pages with clicks.

Then scroll down to the next section and use the date bar to adjust the time interval you want to review. The table below will show the URLs, Last Crawl (Date), and number of clicks.

Which click-generating URLs have problems on mobile usability or in structured data?

Go to page 1, in the Mobile Usability Issues and Rich Result Snippet Issues sections, and use the indexing filters to answer these questions.

Download the template for Google Data Studio

