Using Python + Streamlit To Find Striking Distance Keyword Opportunities

Learn how to use a Python script + Streamlit app to identify striking distance keyword opportunities – no coding knowledge required!
Python is an excellent tool to automate repetitive tasks as well as gain additional insights into data.
In this article, you’ll learn how to build a tool that checks which keywords are close to ranking in positions one to three and advises whether there is an opportunity to naturally work those keywords into the page.
It’s perfect for Python beginners and pros alike and is a great introduction to using Python for SEO.
If you’d just like to get stuck in, there’s a handy Streamlit app available for the code. It’s simple to use and requires no coding experience.
There’s also a Google Colaboratory Sheet if you’d like to poke around with the code. If you can crawl a website, you can use this script!
Here’s an example of what we’ll be making today:
These keywords are found in the page title and H1, but not in the copy. Adding these keywords naturally to the existing copy would be an easy way to increase relevancy for these keywords.
By taking the hint from search engines and naturally including any missing keywords a site already ranks for, we give search engines more confidence to rank those keywords higher in the SERPs.
This report can be created manually, but it’s pretty time-consuming.
So, we’re going to automate the process using a Python SEO script.
This is a sample of what the final output will look like after running the report:
The final output takes the top five opportunities by search volume for each page and neatly lays each one horizontally along with the estimated search volume.
It also shows the total search volume of all keywords a page has within striking distance, as well as the total number of keywords within reach.
The top five keywords by search volume are then checked to see if they are found in the title, H1, or copy, then flagged TRUE or FALSE.
This is great for finding quick wins! Just add the missing keyword naturally into the page copy, title, or H1.
The setup is fairly straightforward. We just need a crawl of the site (ideally with a custom extraction for the copy you’d like to check), and an exported file of all keywords a site ranks for.
This post will walk you through the setup, the code, and will link to a Google Colaboratory sheet if you just want to get stuck in without coding it yourself.
To get started you will need:
We’ve named this the Striking Distance Report as it flags keywords that are easily within striking distance.
(We have defined striking distance as keywords that rank in positions four to 20, but have made this a configurable option in case you would like to define your own parameters.)
I’ve opted to use Screaming Frog to get the initial crawl. Any crawler will work, so long as the CSV export uses the same column names or they’re renamed to match.
The script expects to find the following columns in the crawl CSV export: “Address,” “Indexability,” “Page Title,” “H1-1,” and “Copy 1.”
The first thing to do is to head over to the main configuration settings within Screaming Frog:
Configuration > Spider > Crawl
The main settings to use are:
Crawl Internal Links, Canonicals, and the Pagination (Rel Next/Prev) setting.
(The script will work with everything else selected, but the crawl will take longer to complete!)
Next, it’s on to the Extraction tab.
Configuration > Spider > Extraction
At a bare minimum, we need to extract the page title and H1, and record whether the page is indexable.
Indexability is useful because it’s an easy way for the script to identify which URLs to drop in one go, leaving only keywords that are eligible to rank in the SERPs.
If the script cannot find the indexability column, it’ll still work as normal but won’t differentiate between pages that can and cannot rank.
In order to check whether a keyword is found within the page copy, we need to set a custom extractor in Screaming Frog.
Configuration > Custom > Extraction
Name the extractor “Copy.”
Important: The script expects the extractor to be named “Copy” as above, so please double check!
Lastly, make sure Extract Text is selected to export the copy as text, rather than HTML.
There are many guides on using custom extractors online if you need help setting one up, so I won’t go over it again here.
Once the extraction has been set it’s time to crawl the site and export the HTML file in CSV format.
Exporting the CSV file is as easy as changing the drop-down menu displayed underneath Internal to HTML and pressing the Export button.
Internal > HTML > Export
After clicking Export, it’s important to make sure the type is set to CSV format.
Tip 1: Filtering Out Pagination Pages
I recommend filtering out pagination pages from your crawl, either by selecting Respect Next/Prev under the Advanced settings or by simply deleting them from the CSV file, if you prefer.
Tip 2: Saving The Crawl Settings
Once you have set the crawl up, it’s worth just saving the crawl settings (which will also remember the custom extraction).
This will save a lot of time if you want to use the script again in the future.
File > Configuration > Save As
Once we have the crawl file, the next step is to load your favorite keyword research tool and export all of the keywords a site ranks for.
The goal here is to export all the keywords a site ranks for, filtering out branded keywords and any which triggered as a sitelink or image.
For this example, I’m using the Organic Keyword Report in Ahrefs, but it will work just as well with Semrush if that’s your preferred tool.
In Ahrefs, enter the domain you’d like to check in Site Explorer and choose Organic Keywords.
Site Explorer > Organic Keywords
This will bring up all keywords the site is ranking for.
The next step is to filter out any keywords triggered as a sitelink or an image pack.
The reason we need to filter out sitelinks is that they have no influence on the parent URL ranking. This is because only the parent page technically ranks for the keyword, not the sitelink URLs displayed under it.
Filtering out sitelinks will ensure that we are optimizing the correct page.
Here’s how to do it in Ahrefs.
Lastly, I recommend filtering out any branded keywords. You can do this by filtering the CSV output directly, or by pre-filtering in the keyword tool of your choice before the export.
Finally, when exporting, make sure to choose Full Export and the UTF-8 format.
By default, the script works with Ahrefs (v1/v2) and Semrush keyword exports. It can work with any keyword CSV file as long as the column names the script expects are present.
The following instructions pertain to running a Google Colaboratory sheet to execute the code.
There is now a simpler option for those who prefer it, in the form of a Streamlit app. Simply follow the instructions provided to upload your crawl and keyword file.
Now that we have our exported files, all that’s left to be done is to upload them to the Google Colaboratory sheet for processing.
Select Runtime > Run all from the top navigation to run all cells in the sheet.
The script will prompt you to upload the keyword CSV from Ahrefs or Semrush first and the crawl file afterward.
That’s it! The script will automatically download an actionable CSV file you can use to optimize your site.
Once you’re familiar with the whole process, using the script is really straightforward.
If you’re learning Python for SEO and interested in what the code is doing to produce the report, stick around for the code walkthrough!
Let’s install pandas to get the ball rolling.
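In a Colab or Jupyter notebook, that’s a one-liner (pandas comes pre-installed in Colab, so this mostly just confirms it’s available):

```python
!pip install pandas
```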
Next, we need to import the required modules.
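A minimal set of imports for the Colab workflow might look like this (the google.colab files helper handles the upload and download prompts):

```python
import io

import pandas as pd
from google.colab import files  # Colab helper for file upload/download prompts
```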
Now it’s time to set the variables.
The script considers any keywords between positions four and 20 as within striking distance.
Changing the variables here will let you define your own range if desired. It’s worth experimenting with the settings to get the best possible output for your needs.
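Here’s a sketch of what those variables could look like. The names (min_position, max_position, min_volume, drop_all_true) and the volume cutoff are illustrative, so adjust them to suit your needs:

```python
# striking distance range: positions 4-20 count as "within reach"
min_position = 4
max_position = 20

# ignore very low-volume keywords (illustrative cutoff)
min_volume = 10

# drop a row if the keyword already appears in the title, H1 AND copy
drop_all_true = True
```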
The next step is to read in the list of keywords from the CSV file.
It is set up to accept an Ahrefs report (V1 and V2) as well as a Semrush export.
This code reads in the CSV file into a Pandas DataFrame.
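Assuming the Colab files helper from the imports above, the upload-and-read step might look something like this:

```python
# prompt for the Ahrefs/Semrush keyword export and read it into a DataFrame
uploaded = files.upload()
keyword_file = list(uploaded.keys())[0]
df_keywords = pd.read_csv(io.BytesIO(uploaded[keyword_file]))
df_keywords.head()  # preview the keyword DataFrame
```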
If everything went to plan, you’ll see a preview of the DataFrame created from the keyword CSV export. 
Once the keywords have been imported, it’s time to upload the crawl file.
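The crawl upload works the same way; this sketch assumes the Screaming Frog internal HTML export created earlier:

```python
# prompt for the Screaming Frog internal HTML export and read it in
uploaded = files.upload()
crawl_file = list(uploaded.keys())[0]
df_crawl = pd.read_csv(io.BytesIO(uploaded[crawl_file]))
df_crawl.head()  # preview the crawl DataFrame
```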
Once the CSV file has finished uploading, you’ll see a preview of the DataFrame.
The next step is to rename the column names to ensure standardization between the most common types of file exports.
Essentially, we’re getting the keyword DataFrame into a good state and filtering using cutoffs defined by the variables.
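A sketch of the renaming and filtering step, assuming a few of the usual Ahrefs/Semrush column labels (rename anything your export calls differently) and the variables set earlier:

```python
# standardize column names across the common keyword exports
df_keywords.rename(
    columns={
        "Current position": "Position",  # Ahrefs
        "Current URL": "URL",
        "Search Volume": "Volume",       # Semrush
    },
    inplace=True,
)

# keep only keywords within striking distance and above the volume cutoff
df_keywords = df_keywords[df_keywords["Position"].between(min_position, max_position)]
df_keywords = df_keywords[df_keywords["Volume"] >= min_volume]
```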
Next, we need to clean and standardize the crawl data.
Essentially, we use reindex to only keep the “Address,” “Indexability,” “Page Title,” “H1-1,” and “Copy 1” columns, discarding the rest.
We use the handy “Indexability” column to only keep rows that are indexable. This will drop canonicalized URLs, redirects, and so on. I recommend enabling this option in the crawl.
Lastly, we standardize the column names so they’re a little nicer to work with.
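Roughly, that cleaning step could look like the following (the column labels assume a Screaming Frog export with the custom “Copy” extractor; tweak them if your crawler names things differently):

```python
# keep only the columns the report needs, discarding the rest
df_crawl = df_crawl.reindex(
    columns=["Address", "Indexability", "Page Title", "H1-1", "Copy 1"]
)

# keep indexable pages only, if the Indexability column was exported
if df_crawl["Indexability"].notna().any():
    df_crawl = df_crawl[df_crawl["Indexability"] == "Indexable"]

# friendlier column names for the rest of the script
df_crawl.rename(
    columns={"Address": "URL", "Page Title": "Title", "H1-1": "H1", "Copy 1": "Copy"},
    inplace=True,
)
```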
As we approach the final output, it’s necessary to group our keywords together to calculate the total opportunity for each page.
Here, we’re calculating how many keywords are within striking distance for each page, along with the combined search volume.
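With pandas, the grouping might be as simple as this (the kws_in_striking_dist and total_volume column names are illustrative):

```python
# keyword count and combined search volume per URL
df_grouped = (
    df_keywords.groupby("URL")
    .agg(kws_in_striking_dist=("Keyword", "count"), total_volume=("Volume", "sum"))
    .reset_index()
)
df_grouped.head()
```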
Once complete, you’ll see a preview of the DataFrame.
We use the grouped data as the basis for the final output. We use Pandas.unstack to reshape the DataFrame to display the keywords in the style of a GrepWords export.
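One way to sketch that reshaping: rank each page’s keywords by volume, keep the top five, and unstack them into KW1–KW5 columns (column names are illustrative):

```python
# top five keywords per URL, ranked by search volume
df_keywords = df_keywords.sort_values(["URL", "Volume"], ascending=[True, False])
df_top = df_keywords.groupby("URL").head(5).copy()
df_top["rank"] = df_top.groupby("URL").cumcount() + 1

# unstack so the top five keywords sit side by side for each URL
df_wide = df_top.set_index(["URL", "rank"])["Keyword"].unstack()
df_wide.columns = [f"KW{i}" for i in df_wide.columns]
df_wide = df_wide.reset_index()
```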
Lastly, we set the final column order and merge in the original keyword data.
There are a lot of columns to sort and create!
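A rough sketch of the merge and column ordering, building on the DataFrames created above:

```python
# merge the per-page totals, the top-five keywords, and the crawl data
df_final = df_grouped.merge(df_wide, on="URL", how="left")
df_final = df_final.merge(df_crawl, on="URL", how="left")

# set the final column order (missing columns are created as empty)
col_order = (
    ["URL", "Title", "H1", "Copy", "kws_in_striking_dist", "total_volume"]
    + [f"KW{i}" for i in range(1, 6)]
)
df_final = df_final.reindex(columns=col_order)
```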
This code merges the keyword volume data back into the DataFrame. It’s more or less the equivalent of an Excel VLOOKUP function.
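Something like a dictionary lookup does the job here (a merge would work equally well); the “KW1 Vol”-style column names are illustrative:

```python
# look up the search volume for each of the five keyword columns (VLOOKUP-style)
volume_lookup = df_keywords.set_index("Keyword")["Volume"].to_dict()
for i in range(1, 6):
    df_final[f"KW{i} Vol"] = df_final[f"KW{i}"].map(volume_lookup)
```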
The data requires additional cleaning to populate empty values (NaNs) as empty strings. This improves the readability of the final output by creating blank cells instead of cells populated with NaN string values.
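In pandas, that’s a one-liner:

```python
# replace NaNs with empty strings so the exported CSV shows blank cells
df_final.fillna("", inplace=True)
```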
Next, we convert the columns to lowercase so that they match when checking whether a target keyword is featured in a specific column.
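For example:

```python
# lower-case the on-page elements so keyword matching is case-insensitive
for col in ["Title", "H1", "Copy"]:
    df_final[col] = df_final[col].astype(str).str.lower()
```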
This code checks if the target keyword is found in the page title/H1 or copy.
It’ll flag true or false depending on whether a keyword was found within the on-page elements.
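A simple substring check per keyword and element is enough for a sketch (the “KW1 in Title”-style column names are illustrative):

```python
# flag TRUE/FALSE for each top-five keyword against the title, H1, and copy
for i in range(1, 6):
    kw_col = f"KW{i}"
    for element in ["Title", "H1", "Copy"]:
        df_final[f"{kw_col} in {element}"] = df_final.apply(
            lambda row: str(row[kw_col]).lower() in str(row[element]),
            axis=1,
        )
```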
This will delete the true/false values when there is no keyword in the adjacent column.
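For instance:

```python
# blank out the TRUE/FALSE flags where there is no keyword in that slot
for i in range(1, 6):
    kw_col = f"KW{i}"
    no_kw = df_final[kw_col] == ""
    for element in ["Title", "H1", "Copy"]:
        df_final.loc[no_kw, f"{kw_col} in {element}"] = ""
```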
This configurable option is really useful for reducing the amount of QA time required for the final output by dropping the keyword opportunity from the final output if it is found in all three columns.
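Using the drop_all_true variable defined earlier, that filter could look like this (shown here for the top keyword only, as an assumption of how “all three columns” is applied):

```python
# optionally drop rows where the top keyword is already in the title, H1 and copy
if drop_all_true:
    all_true = (
        (df_final["KW1 in Title"] == True)
        & (df_final["KW1 in H1"] == True)
        & (df_final["KW1 in Copy"] == True)
    )
    df_final = df_final[~all_true]
```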
The last step is to download the CSV file and start the optimization process.
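In Colab, that’s just:

```python
# save the finished report and trigger the browser download
df_final.to_csv("striking_distance_report.csv", index=False)
files.download("striking_distance_report.csv")
```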
If you are looking for quick wins for any website, the striking distance report is a really easy way to find them.
Don’t let the number of steps fool you. It’s not as complex as it seems. It’s as simple as uploading a crawl and keyword export to the supplied Google Colab sheet or using the Streamlit app.
The results are definitely worth it!
