Screaming Frog for Teams

Find out how to share Screaming Frog data with your team

Screaming Frog: Teamwork

Collaboration is often an essential part of achieving your goals, especially when it comes to SEO. Because of the nature of the work, you often find yourself interacting with different areas of a company, such as developers, designers, and copywriters, to share knowledge and coordinate the project.

Although Screaming Frog is a local website crawler, there are several ways to work collaboratively and make sure everyone is on the same page and up to date on the different phases of the project.

Data Export

One of the most widely used features is the ability to export data to spreadsheets. This is useful if you want to share data with someone who doesn’t use the SEO Spider, or if you don’t want to send the full crawl file.

To export data from the tabs and filters in the main window, simply click the ‘Export’ button in the upper left corner.

For the lower window data, select the desired rows in the main window, right-click, and choose which data to export.

The third option is to use “Bulk Export” from the main menu.
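If you prefer to automate these exports rather than clicking through the interface, the SEO Spider also ships with a command-line mode. Below is a minimal sketch in Python that runs a headless crawl and writes a tab export and a bulk export to a local folder. The URL, output path, and export names are placeholders, and the flag names should be verified against the CLI help of your installed Screaming Frog version.

```python
import subprocess
from pathlib import Path

# Placeholder values: adjust the URL, output folder, and binary name for your setup.
SITE = "https://example.com"
OUTPUT_DIR = Path("exports/example-com")
OUTPUT_DIR.mkdir(parents=True, exist_ok=True)

# Flag names follow the SEO Spider CLI documentation; confirm them with
# `screamingfrogseospider --help` for the version you have installed.
subprocess.run(
    [
        "screamingfrogseospider",
        "--crawl", SITE,
        "--headless",                        # run without the UI
        "--output-folder", str(OUTPUT_DIR),  # where exports are written
        "--export-tabs", "Internal:All",     # same data as the top-window Export button
        "--bulk-export", "Response Codes:Client Error (4xx) Inlinks",  # example Bulk Export
    ],
    check=True,
)
```

The resulting files can then be attached to an email or dropped into any shared folder, just like exports produced from the interface.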

Export Crawl

My preferred method is to export the full crawl file, which allows me to share the status of the work with colleagues while also showing them the exact configuration used to obtain the data.

This approach has a big technical advantage: the recipient can examine every aspect of the SEO audit in detail. It also has a big disadvantage: the recipient must be familiar with the SEO Spider, and that is not always the case in web agencies. Compared to granular, report-by-report exports, it moves a much larger amount of data in one go, but expertise becomes a prerequisite.

In “Memory Storage” mode (not recommended) you can save the crawl by selecting ‘File > Save/Save As…’ and resume it later via ‘File > Open’.

In “Database Storage” mode, crawl data is saved automatically and can be retrieved from the “File > Crawls” menu; to use a received crawl, simply click “Import.”

Alternatively, if you do not want to share the full crawl file, you can send the configuration used for your crawl so that your colleague can run a crawl with exactly the same settings. To do so, click “File > Configuration > Save As.” Learn how to manage Screaming Frog configuration profiles.

Whoever receives the configuration can load it before crawling via “File > Configuration > Load.”
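If the crawl needs to be reproduced without opening the interface, the same shared configuration file can be passed on the command line. Here is a minimal sketch, assuming the profile was saved as audit.seospiderconfig (a hypothetical filename) and that the flag names match your installed CLI version.

```python
import subprocess
from pathlib import Path

SITE = "https://example.com"                    # placeholder URL
CONFIG = Path("configs/audit.seospiderconfig")  # the profile received from a colleague
OUTPUT_DIR = Path("crawls/example-com")
OUTPUT_DIR.mkdir(parents=True, exist_ok=True)

# --config loads the shared settings so the crawl runs with the same configuration;
# --save-crawl keeps the resulting .seospider file so it can be shared back.
subprocess.run(
    [
        "screamingfrogseospider",
        "--crawl", SITE,
        "--headless",
        "--config", str(CONFIG),
        "--save-crawl",
        "--output-folder", str(OUTPUT_DIR),
    ],
    check=True,
)
```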

In most cases the results will be identical, but if some time passes between crawls there may be slight differences, such as temporary server errors.

Export Data with Google Drive

Google Drive is another good way to collaborate with Screaming Frog. After connecting Google Drive to the SEO Spider, there are two main options:

  1. Export granular data: export the data as in the first option above, choosing Google Drive as the “Save Location” and “.gsheet” as the output format.
  2. Use Scheduled Crawls (recommended): by taking advantage of scheduled crawls and setting the Google Drive options in the “Export” tab, you can share all files automatically. Through this option you can also export your data directly into Looker Studio (a great option for sharing analysis data with colleagues or the client). A small automation sketch follows after this list.
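For reference, the scheduled export to Google Sheets can also be reproduced from a script run by cron or Task Scheduler, instead of the built-in “File > Scheduling” dialog. This is a sketch only: the URL and Google account are placeholders, and the --export-format and --google-drive-account flags are assumptions to check against your CLI version (the account must already be connected in the SEO Spider).

```python
import subprocess

SITE = "https://example.com"             # placeholder URL
DRIVE_ACCOUNT = "colleague@example.com"  # placeholder Google account connected in the UI

# Intended to be run on a schedule (cron, Task Scheduler, or the built-in Scheduling UI).
subprocess.run(
    [
        "screamingfrogseospider",
        "--crawl", SITE,
        "--headless",
        "--export-tabs", "Internal:All",
        "--export-format", "gsheet",             # write exports as Google Sheets
        "--google-drive-account", DRIVE_ACCOUNT, # the connected Drive account
    ],
    check=True,
)
```

Once the data lands in a Google Sheet, it can be used as a Looker Studio data source and shared like any other Drive file.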

Export Data to a Shared Location

Similar to exporting directly to Google Sheets, you can also export to a shared network drive or synced folder such as Dropbox or OneDrive. Simply select the desired location when saving crawls, configuration files, exports, and more.

Note that if you are using database storage mode, a network drive is not supported as the location for the database itself, because it would be too slow and the connection too unreliable.
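A simple workaround is to export locally and then copy the files into the synced folder. This is a minimal sketch using only the Python standard library; the export and Dropbox paths are placeholders for your own setup.

```python
import shutil
from pathlib import Path

# Placeholder paths: the local export folder and a Dropbox/OneDrive synced folder.
EXPORTS = Path("exports/example-com")
SHARED = Path.home() / "Dropbox" / "seo-audits" / "example-com"

SHARED.mkdir(parents=True, exist_ok=True)

# Copy every export file into the shared folder so colleagues always see the latest data.
for file in EXPORTS.glob("*.*"):
    shutil.copy2(file, SHARED / file.name)
    print(f"Copied {file.name} -> {SHARED}")
```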

Screaming Frog on the Cloud

As you already know, the SEO Spider requires your device to be turned on when a scheduled crawl is due to run. Running it in the cloud removes this constraint: you can quickly boot virtual machines for daily tasks, run simultaneous crawls, and use Compute Engine’s scheduling capabilities to automate everything without tying up local resources.

In addition, running it in the cloud allows you or your colleagues to access ongoing or completed crawl data from anywhere, so you can easily get the information you need. See: Screaming Frog and Cloud.
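As a rough illustration of the workflow, the sketch below starts a Compute Engine VM with the gcloud CLI and launches a headless crawl on it over SSH. The instance name, zone, URL, and remote flags are all assumptions to adapt to your own environment.

```python
import subprocess

# Placeholder names: adjust to your own Compute Engine instance and zone.
INSTANCE = "sf-crawler"
ZONE = "europe-west1-b"
SITE = "https://example.com"

# Start the VM (it can be stopped again after the crawl to save costs).
subprocess.run(
    ["gcloud", "compute", "instances", "start", INSTANCE, f"--zone={ZONE}"],
    check=True,
)

# Run a headless crawl on the VM; the remote binary path and flags should be
# verified against the Screaming Frog installation on that machine.
remote_cmd = (
    f"screamingfrogseospider --crawl {SITE} --headless "
    "--save-crawl --output-folder ~/crawls"
)
subprocess.run(
    ["gcloud", "compute", "ssh", INSTANCE, f"--zone={ZONE}", "--command", remote_cmd],
    check=True,
)
```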
