
The profile is currently unclaimed by the seller. All information is provided by CB Insights.




About Ahrefs

Ahrefs offers a search engine optimization (SEO) toolset that helps its customers improve their SEO results, including keyword generation, backlink checking, and keyword explorer tools. The company's revenue was reportedly $40 million per year, growing at 60% YoY.

Ahrefs Headquarters Location


Predict your next investment

The CB Insights tech market intelligence platform analyzes millions of data points on venture capital, startups, patents, partnerships and news mentions to help you see tomorrow's opportunities, today.

Research containing Ahrefs

Get data-driven expert analysis from the CB Insights Intelligence Unit.

CB Insights Intelligence Analysts have mentioned Ahrefs in 1 CB Insights research brief, most recently on Dec 8, 2020.

Latest Ahrefs News

Using Python + Streamlit To Find Striking Distance Keyword Opportunities

May 12, 2022

Python is an excellent tool for automating repetitive tasks as well as gaining additional insights into data. In this article, you'll learn how to build a tool that checks which keywords are close to ranking in positions one to three and advises whether there is an opportunity to naturally work those keywords into the page.

It's perfect for Python beginners and pros alike and is a great introduction to using Python for SEO. If you'd just like to get stuck in, there's a handy Streamlit app available for the code. It is simple to use and requires no coding experience. There's also a Google Colaboratory sheet if you'd like to poke around with the code. If you can crawl a website, you can use this script!

Here's an example of what we'll be making today:

Screenshot from Microsoft Excel, October 2021

These keywords are found in the page title and H1, but not in the copy. Adding these keywords naturally to the existing copy would be an easy way to increase relevancy for them. By taking the hint from search engines and naturally including any missing keywords a site already ranks for, we increase the confidence of search engines to rank those keywords higher in the SERPs.

This report can be created manually, but it's pretty time-consuming, so we're going to automate the process using a Python SEO script.

Preview Of The Output

Screenshot from Microsoft Excel, October 2021

The final output takes the top five opportunities by search volume for each page and lays each one out horizontally, along with the estimated search volume. It also shows the total search volume of all keywords a page has within striking distance, as well as the total number of keywords within reach. The top five keywords by search volume are then checked to see whether they are found in the title, H1, or copy, and flagged TRUE or FALSE.

This is great for finding quick wins!
Just add the missing keyword naturally into the page copy, title, or H1.

Getting Started

The setup is fairly straightforward. We just need a crawl of the site (ideally with a custom extraction for the copy you'd like to check) and an exported file of all the keywords the site ranks for. This post will walk you through the setup and the code, and will link to a Google Colaboratory sheet if you just want to get stuck in without coding it yourself.

To get started you will need:

We've named this the Striking Distance Report, as it flags keywords that are easily within striking distance. (We have defined striking distance as keywords that rank in positions four to 20, but have made this a configurable option in case you would like to define your own parameters.)

Striking Distance SEO Report: Getting Started

1. Crawl the target website. Set a custom extractor for the page copy (optional, but recommended) and filter out pagination pages from the crawl.

2. Export all keywords the site ranks for using your favorite provider. Filter out keywords that trigger as a sitelink, remove keywords that trigger as an image, and filter out branded keywords.

Use both exports to create an actionable Striking Distance report from the keyword and crawl data with Python.

Crawling The Site

I've opted to use Screaming Frog to get the initial crawl. Any crawler will work, so long as the CSV export uses the same column names or they're renamed to match. The script expects to find the following columns in the crawl CSV export: "Address", "Title 1", "H1-1", "Copy 1", "Indexability".

Crawl Settings

The first thing to do is to head over to the main configuration settings within Screaming Frog:

Configuration > Spider > Crawl

Enable the Internal Links, Canonicals, and Pagination (Rel Next/Prev) settings. (The script will work with everything else selected too, but the crawl will take longer to complete!)

Screenshot from Screaming Frog, October 2021

Next, it's on to the Extraction tab.
Configuration > Spider > Extraction

Screenshot from Screaming Frog, October 2021

At a bare minimum, we need to extract the page title and H1, and calculate whether the page is indexable, as shown below. Indexability is useful because it's an easy way for the script to identify which URLs to drop in one go, leaving only keywords that are eligible to rank in the SERPs. If the script cannot find the indexability column, it'll still work as normal, but it won't differentiate between pages that can and cannot rank.

Setting A Custom Extractor For Page Copy

In order to check whether a keyword is found within the page copy, we need to set a custom extractor in Screaming Frog.

Configuration > Custom > Extraction

Screenshot from Screaming Frog, October 2021

Important: The script expects the extractor to be named "Copy" as above, so please double-check!

Lastly, make sure Extract Text is selected to export the copy as text, rather than HTML. There are many guides on using custom extractors online if you need help setting one up, so I won't go over it again here. Once the extraction has been set, it's time to crawl the site and export the HTML file in CSV format.

Exporting The CSV File

Exporting the CSV file is as easy as changing the drop-down menu displayed underneath Internal to HTML and pressing the Export button.

Internal > HTML > Export

Screenshot from Screaming Frog, October 2021

After clicking Export, it's important to make sure the type is set to CSV format. The export screen should look like the below:

Screenshot from Screaming Frog, October 2021

Tip 1: Filtering Out Pagination Pages

I recommend filtering out pagination pages from your crawl, either by selecting Respect Next/Prev under the Advanced settings or by simply deleting them from the CSV file, if you prefer.

Screenshot from Screaming Frog, October 2021

Tip 2: Saving The Crawl Settings

Once you have set the crawl up, it's worth saving the crawl settings (which will also remember the custom extraction).
This will save a lot of time if you want to use the script again in the future.

File > Configuration > Save As

Exporting Keywords

Once we have the crawl file, the next step is to load your favorite keyword research tool and export all of the keywords the site ranks for, filtering out branded keywords and any that triggered as a sitelink or image.

For this example, I'm using the Organic Keywords report in Ahrefs, but it will work just as well with Semrush if that's your preferred tool. In Ahrefs, enter the domain you'd like to check in Site Explorer and choose Organic Keywords.

Screenshot from Ahrefs.com, October 2021

Site Explorer > Organic Keywords

This will bring up all the keywords the site is ranking for.

Filtering Out Sitelinks And Image Links

The next step is to filter out any keywords triggered as a sitelink or an image pack. The reason we need to filter out sitelinks is that they have no influence on the parent URL's ranking: only the parent page technically ranks for the keyword, not the sitelink URLs displayed under it. Filtering out sitelinks will ensure that we are optimizing the correct page.

Screenshot from Ahrefs.com, October 2021

Here's how to do it in Ahrefs.

Screenshot from Ahrefs.com, October 2021

Lastly, I recommend filtering out any branded keywords. You can do this by filtering the CSV output directly, or by pre-filtering in the keyword tool of your choice before the export.

Finally, when exporting, make sure to choose Full Export and the UTF-8 format as shown below.

Screenshot from Ahrefs.com, October 2021

By default, the script works with Ahrefs (v1/v2) and Semrush keyword exports. It can work with any keyword CSV file, as long as the column names the script expects are present.

Processing

The following instructions pertain to running a Google Colaboratory sheet to execute the code. There is now a simpler option for those who prefer it, in the form of a Streamlit app.
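Before uploading anything, it can be worth sanity-checking that an export actually contains the columns the script expects, since a renamed or missing column is the most common reason a run fails. This is a small sketch of my own, not part of the original walkthrough; the helper name is hypothetical:

```python
import pandas as pd

# columns the script expects in the Screaming Frog crawl export
CRAWL_COLS = ["Address", "Title 1", "H1-1", "Copy 1", "Indexability"]

def missing_columns(csv_path, required):
    """Return any required columns absent from a CSV export's header."""
    found = pd.read_csv(csv_path, nrows=0).columns  # read the header row only
    return [col for col in required if col not in found]
```

If `missing_columns("crawl_export.csv", CRAWL_COLS)` (filename hypothetical) returns a non-empty list, rename the offending columns before running the sheet. The keyword export can be checked the same way against the column names your provider (Ahrefs v1/v2 or Semrush) uses.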
Simply follow the instructions provided to upload your crawl and keyword file.

Now that we have our exported files, all that's left to be done is to upload them to the Google Colaboratory sheet for processing. Select Runtime > Run all from the top navigation to run all cells in the sheet.

Screenshot from Colab.research.google.com, October 2021

The script will prompt you to upload the keyword CSV from Ahrefs or Semrush first, and the crawl file afterward.

Screenshot from Colab.research.google.com, October 2021

That's it! The script will automatically download an actionable CSV file you can use to optimize your site.

Screenshot from Microsoft Excel, October 2021

Once you're familiar with the whole process, using the script is really straightforward.

Code Breakdown And Explanation

If you're learning Python for SEO and interested in what the code is doing to produce the report, stick around for the code walkthrough!

Install The Libraries

```python
!pip install pandas

import pandas as pd
from pandas import DataFrame, Series
from typing import Union
from google.colab import files
```

Set The Variables

The script considers any keywords between positions four and 20 as within striking distance. Changing the variables here will let you define your own range if desired. It's worth experimenting with the settings to get the best possible output for your needs.

```python
# set all variables here
min_volume = 10  # set the minimum search volume
min_position = 4  # set the minimum position / default = 4
max_position = 20  # set the maximum position / default = 20
drop_all_true = True  # if all checks (h1/title/copy) are true, remove the recommendation (nothing to do)
pagination_filters = "filterby|page|p="  # filter patterns used to detect and drop paginated pages
```

Upload The Keyword Export CSV File

The next step is to read in the list of keywords from the CSV file. It is set up to accept an Ahrefs report (v1 and v2) as well as a Semrush export. This code reads the CSV file into a Pandas DataFrame.
```python
upload = files.upload()
upload = list(upload.keys())[0]
df_keywords = pd.read_csv(
    (upload),
    error_bad_lines=False,
    low_memory=False,
    encoding="utf8",
    dtype={
        "URL": "str",
        "Keyword": "str",
        "Volume": "str",
        "Position": int,
        "Current URL": "str",
        "Search Volume": int,
    },
)
print("Uploaded Keyword CSV File Successfully!")
```

If everything went to plan, you'll see a preview of the DataFrame created from the keyword CSV export.

Screenshot from Colab.research.google.com, October 2021

Upload The Crawl Export CSV File

Once the keywords have been imported, it's time to upload the crawl file. This fairly simple piece of code reads in the crawl with some error handling options and creates a Pandas DataFrame named df_crawl.

```python
upload = files.upload()
upload = list(upload.keys())[0]
df_crawl = pd.read_csv(
    (upload),
    error_bad_lines=False,
    low_memory=False,
    encoding="utf8",
    dtype="str",
)
print("Uploaded Crawl Dataframe Successfully!")
```

Once the CSV file has finished uploading, you'll see a preview of the DataFrame.

Screenshot from Colab.research.google.com, October 2021

Clean And Standardize The Keyword Data

The next step is to rename the columns to ensure standardization between the most common types of file exports. Essentially, we're getting the keyword DataFrame into a good state and filtering using the cutoffs defined by the variables.

```python
df_keywords.rename(
    columns={
        "Current position": "Position",
        "Current URL": "URL",
        "Search Volume": "Volume",
    },
    inplace=True,
)

# keep only the following columns from the keyword dataframe
cols = "URL", "Keyword", "Volume", "Position"
df_keywords = df_keywords.reindex(columns=cols)

try:
    # clean the data (v1 of the ahrefs keyword export combines strings and ints in the volume column)
    df_keywords["Volume"] = df_keywords["Volume"].str.replace("0-10", "0")
except AttributeError:
    pass

# clean the keyword data
df_keywords = df_keywords[df_keywords["URL"].notna()]  # remove any missing values
df_keywords = df_keywords[df_keywords["Volume"].notna()]  # remove any missing values
df_keywords = df_keywords.astype({"Volume": int})  # change data type to int
df_keywords = df_keywords.sort_values(by="Volume", ascending=False)  # sort by highest vol to keep the top opportunity

# make a new dataframe to merge search volume back in later
df_keyword_vol = df_keywords[["Keyword", "Volume"]]

# drop rows if the search volume is below the specified minimum
df_keywords.loc[df_keywords["Volume"] < min_volume, "Volume_Too_Low"] = "drop"
df_keywords = df_keywords[~df_keywords["Volume_Too_Low"].isin(["drop"])]

# drop rows if the position is above the specified minimum
df_keywords.loc[df_keywords["Position"] <= min_position, "Position_Too_High"] = "drop"
df_keywords = df_keywords[~df_keywords["Position_Too_High"].isin(["drop"])]

# drop rows if the position is below the specified maximum
df_keywords.loc[df_keywords["Position"] >= max_position, "Position_Too_Low"] = "drop"
df_keywords = df_keywords[~df_keywords["Position_Too_Low"].isin(["drop"])]
```

Clean And Standardize The Crawl Data

Next, we need to clean and standardize the crawl data. Essentially, we use reindex to keep only the "Address", "Indexability", "Title 1", "H1-1", and "Copy 1" columns, discarding the rest. We use the handy "Indexability" column to keep only the rows that are indexable. This will drop canonicalized URLs, redirects, and so on. I recommend enabling this option in the crawl.

Lastly, we standardize the column names so they're a little nicer to work with.
```python
# keep only the following columns from the crawl dataframe
cols = "Address", "Indexability", "Title 1", "H1-1", "Copy 1"
df_crawl = df_crawl.reindex(columns=cols)

# drop non-indexable rows
df_crawl = df_crawl[~df_crawl["Indexability"].isin(["Non-Indexable"])]

# standardise the column names
df_crawl.rename(columns={"Address": "URL", "Title 1": "Title", "H1-1": "H1", "Copy 1": "Copy"}, inplace=True)
df_crawl.head()
```

Group The Keywords

As we approach the final output, it's necessary to group our keywords together to calculate the total opportunity for each page. Here, we're calculating how many keywords are within striking distance for each page, along with the combined search volume.

```python
# group the URLs (remove the dupes and combine stats)
# make a copy of the keywords dataframe for grouping - this ensures stats can be merged back in later from the original df
df_keywords_group = df_keywords.copy()
df_keywords_group["KWs in Striking Dist."] = 1  # used to count the number of keywords in striking distance
df_keywords_group = (
    df_keywords_group.groupby("URL")
    .agg({"Volume": "sum", "KWs in Striking Dist.": "count"})
    .reset_index()
)
df_keywords_group.head()
```

Screenshot from Colab.research.google.com, October 2021

Once complete, you'll see a preview of the DataFrame.

Display Keywords In Adjacent Rows

We use the grouped data as the basis for the final output. We use Pandas.unstack to reshape the DataFrame to display the keywords in the style of a GrepWords export.

Screenshot from Colab.research.google.com, October 2021

```python
# create a new df, combine the merged data with the original data. display in adjacent rows ala grepwords
df_merged_all_kws = df_keywords_group.merge(
    df_keywords.groupby("URL")["Keyword"]
    .apply(lambda x: x.reset_index(drop=True))
    .unstack()
    .reset_index()
)

# sort by biggest opportunity
df_merged_all_kws = df_merged_all_kws.sort_values(
    by="KWs in Striking Dist.", ascending=False
)

# reindex the columns to keep just the top five keywords
cols = "URL", "Volume", "KWs in Striking Dist.", 0, 1, 2, 3, 4
df_merged_all_kws = df_merged_all_kws.reindex(columns=cols)

# rename the columns
df_striking: Union[Series, DataFrame, None] = df_merged_all_kws.rename(
    columns={
        "Volume": "Striking Dist. Vol",
        0: "KW1",
        1: "KW2",
        2: "KW3",
        3: "KW4",
        4: "KW5",
    }
)

# merge the striking distance df with the crawl df to bring in the title, h1 and copy
df_striking = pd.merge(df_striking, df_crawl, on="URL", how="inner")
```

Set The Final Column Order And Insert Placeholder Columns

Lastly, we set the final column order and merge in the original keyword data. There are a lot of columns to sort and create!

```python
# set the final column order and merge the keyword data in
cols = [
    "URL",
    "Title",
    "H1",
    "Copy",
    "Striking Dist. Vol",
    "KWs in Striking Dist.",
    "KW1",
    "KW1 Vol",
    "KW1 in Title",
    "KW1 in H1",
    "KW1 in Copy",
    "KW2",
    "KW2 Vol",
    "KW2 in Title",
    "KW2 in H1",
    "KW2 in Copy",
    "KW3",
    "KW3 Vol",
    "KW3 in Title",
    "KW3 in H1",
    "KW3 in Copy",
    "KW4",
    "KW4 Vol",
    "KW4 in Title",
    "KW4 in H1",
    "KW4 in Copy",
    "KW5",
    "KW5 Vol",
    "KW5 in Title",
    "KW5 in H1",
    "KW5 in Copy",
]

# re-index the columns to place them in a logical order + insert new blank columns for the kw checks
df_striking = df_striking.reindex(columns=cols)
```

Merge In The Keyword Data For Each Column

This code merges the keyword volume data back into the DataFrame. It's more or less the equivalent of an Excel VLOOKUP function.
```python
# merge in keyword data for each keyword column (KW1 - KW5)
df_striking = pd.merge(df_striking, df_keyword_vol, left_on="KW1", right_on="Keyword", how="left")
df_striking['KW1 Vol'] = df_striking['Volume']
df_striking.drop(['Keyword', 'Volume'], axis=1, inplace=True)

df_striking = pd.merge(df_striking, df_keyword_vol, left_on="KW2", right_on="Keyword", how="left")
df_striking['KW2 Vol'] = df_striking['Volume']
df_striking.drop(['Keyword', 'Volume'], axis=1, inplace=True)

df_striking = pd.merge(df_striking, df_keyword_vol, left_on="KW3", right_on="Keyword", how="left")
df_striking['KW3 Vol'] = df_striking['Volume']
df_striking.drop(['Keyword', 'Volume'], axis=1, inplace=True)

df_striking = pd.merge(df_striking, df_keyword_vol, left_on="KW4", right_on="Keyword", how="left")
df_striking['KW4 Vol'] = df_striking['Volume']
df_striking.drop(['Keyword', 'Volume'], axis=1, inplace=True)

df_striking = pd.merge(df_striking, df_keyword_vol, left_on="KW5", right_on="Keyword", how="left")
df_striking['KW5 Vol'] = df_striking['Volume']
df_striking.drop(['Keyword', 'Volume'], axis=1, inplace=True)
```

Clean The Data Some More

The data requires additional cleaning to populate empty values (NaNs) as empty strings. This improves the readability of the final output by creating blank cells, instead of cells populated with NaN string values. Next, we convert the columns to lowercase so that they match when checking whether a target keyword is featured in a specific column.

```python
# replace nan values with empty strings
df_striking = df_striking.fillna("")

# drop the title, h1 and copy to lower case so kws can be matched to them
df_striking["Title"] = df_striking["Title"].str.lower()
df_striking["H1"] = df_striking["H1"].str.lower()
df_striking["Copy"] = df_striking["Copy"].str.lower()
```

Check Whether The Keyword Appears In The Title/H1/Copy And Return True Or False

This code checks whether the target keyword is found in the page title, H1, or copy.
It'll flag true or false depending on whether the keyword was found within the on-page elements.

```python
df_striking["KW1 in Title"] = df_striking.apply(lambda row: row["KW1"] in row["Title"], axis=1)
df_striking["KW1 in H1"] = df_striking.apply(lambda row: row["KW1"] in row["H1"], axis=1)
df_striking["KW1 in Copy"] = df_striking.apply(lambda row: row["KW1"] in row["Copy"], axis=1)
df_striking["KW2 in Title"] = df_striking.apply(lambda row: row["KW2"] in row["Title"], axis=1)
df_striking["KW2 in H1"] = df_striking.apply(lambda row: row["KW2"] in row["H1"], axis=1)
df_striking["KW2 in Copy"] = df_striking.apply(lambda row: row["KW2"] in row["Copy"], axis=1)
df_striking["KW3 in Title"] = df_striking.apply(lambda row: row["KW3"] in row["Title"], axis=1)
df_striking["KW3 in H1"] = df_striking.apply(lambda row: row["KW3"] in row["H1"], axis=1)
df_striking["KW3 in Copy"] = df_striking.apply(lambda row: row["KW3"] in row["Copy"], axis=1)
df_striking["KW4 in Title"] = df_striking.apply(lambda row: row["KW4"] in row["Title"], axis=1)
df_striking["KW4 in H1"] = df_striking.apply(lambda row: row["KW4"] in row["H1"], axis=1)
df_striking["KW4 in Copy"] = df_striking.apply(lambda row: row["KW4"] in row["Copy"], axis=1)
df_striking["KW5 in Title"] = df_striking.apply(lambda row: row["KW5"] in row["Title"], axis=1)
df_striking["KW5 in H1"] = df_striking.apply(lambda row: row["KW5"] in row["H1"], axis=1)
df_striking["KW5 in Copy"] = df_striking.apply(lambda row: row["KW5"] in row["Copy"], axis=1)
```

Delete True/False Values If There Is No Keyword

This will delete true/false values when there is no keyword adjacent.
```python
# delete true / false values if there is no keyword
df_striking.loc[df_striking["KW1"] == "", ["KW1 in Title", "KW1 in H1", "KW1 in Copy"]] = ""
df_striking.loc[df_striking["KW2"] == "", ["KW2 in Title", "KW2 in H1", "KW2 in Copy"]] = ""
df_striking.loc[df_striking["KW3"] == "", ["KW3 in Title", "KW3 in H1", "KW3 in Copy"]] = ""
df_striking.loc[df_striking["KW4"] == "", ["KW4 in Title", "KW4 in H1", "KW4 in Copy"]] = ""
df_striking.loc[df_striking["KW5"] == "", ["KW5 in Title", "KW5 in H1", "KW5 in Copy"]] = ""
df_striking.head()
```

Drop Rows If All Values == True

This configurable option is really useful for reducing the amount of QA time required for the final output, by dropping the keyword opportunity from the final output if it is found in all three columns.

```python
def true_dropper(col1, col2, col3):
    drop = df_striking.drop(
        df_striking[
            (df_striking[col1] == True)
            & (df_striking[col2] == True)
            & (df_striking[col3] == True)
        ].index
    )
    return drop

if drop_all_true == True:
    df_striking = true_dropper("KW1 in Title", "KW1 in H1", "KW1 in Copy")
    df_striking = true_dropper("KW2 in Title", "KW2 in H1", "KW2 in Copy")
    df_striking = true_dropper("KW3 in Title", "KW3 in H1", "KW3 in Copy")
    df_striking = true_dropper("KW4 in Title", "KW4 in H1", "KW4 in Copy")
    df_striking = true_dropper("KW5 in Title", "KW5 in H1", "KW5 in Copy")
```

Download The CSV File

```python
df_striking.to_csv('Keywords in Striking Distance.csv', index=False)
files.download("Keywords in Striking Distance.csv")
```

Conclusion

If you are looking for quick wins for any website, the striking distance report is a really easy way to find them. Don't let the number of steps fool you; it's not as complex as it seems. It's as simple as uploading a crawl and keyword export to the supplied Google Colab sheet or using the Streamlit app. The results are definitely worth it!

More Resources:
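A footnote on the code walkthrough above: the five near-identical merge blocks and the fifteen apply calls each repeat one pattern per keyword column, so they can be collapsed into loops without changing the output. This is my own sketch, not part of the original script, and the DataFrames here are tiny stand-ins for the real df_striking and df_keyword_vol:

```python
import pandas as pd

# toy stand-ins for df_striking and df_keyword_vol from the walkthrough
df_striking = pd.DataFrame({
    "URL": ["https://example.com/"],
    "Title": ["best apple pie"], "H1": ["apple pie"], "Copy": ["a recipe"],
    "KW1": ["apple pie"], "KW2": ["pear tart"],
})
df_keyword_vol = pd.DataFrame({"Keyword": ["apple pie", "pear tart"],
                               "Volume": [90, 40]})

for kw in ["KW1", "KW2"]:  # extend to KW5 for the full report
    # merge in the search volume (the VLOOKUP-style step)
    df_striking = df_striking.merge(
        df_keyword_vol, left_on=kw, right_on="Keyword", how="left")
    df_striking[f"{kw} Vol"] = df_striking["Volume"]
    df_striking.drop(columns=["Keyword", "Volume"], inplace=True)
    # flag whether the keyword appears in each on-page element
    for element in ["Title", "H1", "Copy"]:
        df_striking[f"{kw} in {element}"] = df_striking.apply(
            lambda row: row[kw] in row[element], axis=1)
```

The repetitive version in the article is arguably easier for Python beginners to follow line by line, which may be why it was written that way; the loop form is simply less to maintain if you ever widen the report beyond five keywords.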

Ahrefs Web Traffic

Page Views per User (PVPU)
Page Views per Million (PVPM)
Reach per Million (RPM)

Ahrefs Rank

You May Also Like

Botify

Botify is a SaaS platform providing search marketers with data and intelligence to optimize their sites and structurally increase organic, social, and mobile traffic.

DeepCrawl

DeepCrawl provides a comprehensive website crawler tool to understand and monitor technical issues to improve SEO performance. By identifying and monitoring key issues that could affect SEO performance, it allows marketing teams at brands and digital agencies to optimize the site for users and search engines alike.

peekd

peekd uses real-time sales data from hundreds of thousands of third-party (3P) e-commerce stores to make reliable, granular data available to brands and help them succeed by selling more online.

OnCrawl

OnCrawl is an SEO analysis tool and log analyzer for enterprise SEO audits and daily monitoring. On February 23rd, 2022, OnCrawl was acquired by BrightEdge. The terms of the transaction were not disclosed.

iSpionage

iSpionage develops SaaS-based competitive intelligence solutions for search marketing, SEO and paid search advertising.

SimilarWeb

SimilarWeb (NYSE: SMWB) is an online platform for measuring online behavior, which helps marketers discover their most important traffic insights. The company works with a wide variety of sources to create the most accurate and reliable picture of the digital world. All of this data is fed into SimilarWeb's data processing servers, where the company turns billions of daily data points into insightful information.

