Crawl of Subfolders

Find out how to analyze a specific subfolder for a granular and efficient SEO audit.

Crawl a Subfolder

By default, the SEO Spider performs a full crawl of the root domain (or subdomain), but time constraints or a narrower focus may mean you only need a partial crawl of a website.
Let's look at how to crawl a specific subfolder.

Suppose you want to analyze the errors and critical issues present only in the blog area of the Screaming Frog website:

For this partial crawl, simply enter the URL of the subfolder, start the SEO Spider, and let the crawl finish. The main window will be populated with the subfolder's URLs, along with their respective filters and metrics.

As you will see during your tests, in addition to the URLs in the subfolder, there will also be URLs that do not belong to the area you restricted.

This behavior is not a bug but an inherent feature of the SEO Spider.

The crawler starts from the subdirectory, but it also follows every link it finds there and reports data for the linked URLs.

This behavior allows for more accurate analysis and helps you discover broken links pointing from the subdirectory to external URLs. It can be toggled here:

Configuration > Spider > Crawl Behaviour > Check Links Outside of Start Folder

Please note: the SEO Spider only recognizes a subfolder crawl with a trailing slash, i.e. "../foldername/"; if you omit the final "/", Screaming Frog will not treat it as a subfolder. The same problem occurs when the version with a trailing "/" redirects to the version without it.
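To see why the trailing slash matters, here is a minimal sketch (an illustration, not Screaming Frog's actual implementation): a subfolder restriction behaves like a prefix check on the start URL, and without the final "/" the prefix can match unrelated paths.

```python
# Hypothetical prefix check illustrating the trailing-slash rule.
def in_subfolder(url: str, start_url: str) -> bool:
    # URLs inside the subfolder share the start URL as a prefix.
    return url.startswith(start_url)

# With the trailing slash, only true blog URLs match:
print(in_subfolder("https://www.screamingfrog.co.uk/blog/post-1/",
                   "https://www.screamingfrog.co.uk/blog/"))   # True
print(in_subfolder("https://www.screamingfrog.co.uk/blog-news/",
                   "https://www.screamingfrog.co.uk/blog/"))   # False

# Without it, "/blog-news/" would wrongly be treated as part of "/blog":
print(in_subfolder("https://www.screamingfrog.co.uk/blog-news/",
                   "https://www.screamingfrog.co.uk/blog"))    # True
```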

An alternative way to crawl a subfolder is to use the "Include" function, taking advantage of regex rules.

Configuration > Include > .*/blog.*
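The include rule above can be checked against sample URLs before a crawl. This is a sketch in standard Python regex (the example URLs are assumptions for illustration), showing which URLs the pattern `.*/blog.*` would keep:

```python
import re

# The same regex used in the Include configuration above.
include_pattern = re.compile(r".*/blog.*")

urls = [
    "https://www.screamingfrog.co.uk/blog/",
    "https://www.screamingfrog.co.uk/blog/seo-spider-update/",
    "https://www.screamingfrog.co.uk/seo-spider/",
]

# Only URLs matching the include rule would be crawled.
crawled = [u for u in urls if include_pattern.match(u)]
print(crawled)
# ['https://www.screamingfrog.co.uk/blog/',
#  'https://www.screamingfrog.co.uk/blog/seo-spider-update/']
```

Note that `.*/blog.*` is broad: it matches any URL containing "/blog", not only the exact subfolder, so a path such as "/blog-news/" would also be included.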
