To surpass your outreach goals, you need data on your campaigns and your results. And lots of it.
But what exactly are you supposed to do with all of that data?
How do you find those secrets to outreach success hiding in plain sight?
And where do you even start?
You’re about to learn how to improve your link building outreach using hard data, courtesy of Ben Dahlman, Operations Analyst, and Jordan Yocum, Leads Coordinator, at Page One Power.
During our webinar on how to refine your outreach campaigns with data, we covered multiple tactics, best practices, and ideas that can help you improve your link building efforts. Now here it all is, in written form for your perusal. (Though, if you’d prefer to receive this information in its original audio and video glory, you can get the recording.)
So let’s dive in.
Why does data matter?
Before we dive into the “how” of using your data, we need to understand the “why”: specifically, why it matters to us.
And to do that, we’re going to look to a professional video game player.
Professional Street Fighter player Daigo Umehara wrote a book called The Will to Keep Winning. In it, he summarized his thoughts on how to master a skill and the different levels of mastery one can achieve, with the top two levels being Expert and Master.
According to his definitions, an Expert is someone who makes the right moves, and thinks analytically about their choices. A Master, on the other hand, is someone who makes the right moves intuitively.
Or, as the book puts it: “A Master will do the things that normally work, and they normally work.”
You can apply this same thinking to the skill of link building. You can get to the point where you intuitively know how to get the best results. Collecting and analyzing your data over a long period of time is how you’ll get there. That’s why data is so important.
Success metrics: Why you need them, and how to use them in your analysis
In order to start honing your link building skills, you first need to define your goals and create a model of success. After all, setting measurable goals gives you a clear objective to work toward. Without any goals, it’s hard to know whether you’re successful or not. Without any knowledge of success, it’s near impossible to create a model of success that you can then base future campaigns on. And without that, you can kiss mastery goodbye.
Success metrics in action: Spacedogs.com
For the sake of illustrating the importance of success metrics, let’s say we have a website called spacedogs.com.
Spacedogs.com: Leland Melvin approved.
We want to set ourselves up for success, so we need to set a goal for our campaign. Let’s say we want to earn 10 links per month in order to increase our organic traffic, and we want to run the campaign for a total of 12 months. (If you’d like to learn more about setting link building goals, check out this Search Engine Watch article.)
So now that we have our goal, how do we calculate how much input we’ll need in order to achieve our goal?
Answer: Look to past campaigns
You can use data from past campaigns to determine how much effort you need to reach your goal for your new campaigns. This is an especially good tactic to use if you’ve run several link building campaigns before, or if you work in a large organization where you can leverage other team members’ past campaigns.
Let’s say in our example we have one past campaign where we reached out to 100 sites and acquired 10 links, for a conversion rate of 10%. Based on those numbers, we can estimate that we’ll want to reach out to around 100 websites again.
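Here’s that back-of-the-envelope math as a quick Python sketch. The numbers are from our example, and the variable names are ours:

```python
# A back-of-the-envelope estimate: how many sites to contact, given a
# goal and a past campaign's conversion rate. Numbers are from our example.
past_outreach = 100                 # sites contacted in the past campaign
past_links = 10                     # links earned from that outreach
conversion_rate = past_links / past_outreach   # 0.10, i.e. 10%

monthly_goal = 10                   # links we want per month
sites_needed = monthly_goal / conversion_rate
print(f"Plan to reach out to roughly {sites_needed:.0f} sites per month")
```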
Obviously that’s overly simplified, so let’s dig into how you can analyze several past campaigns’ data to prepare for an upcoming campaign.
(If you’re just getting started with link building or don’t have past data available to you, now is the time to start tracking. If you’re a small team running smaller campaigns, you can probably get away with using an Excel spreadsheet or Google Sheet for a little while, though it’ll take a lot of extra work. If you’re a larger team or planning on sending outreach more regularly, it’s a good idea to invest in a CRM that will automatically track all of your and your team’s outreach for you.)
How to analyze past data to prepare for upcoming campaigns
To analyze past campaigns, we first want to export our past data (or get it all in one place), then manipulate it to give us a good idea of what to expect. This is one of the many places where Microsoft Excel or Google Sheets will come in handy.
Let’s establish the specific parameters we want to compare, define how we’ll measure them, and show what they might look like in a chart.
Parameter 1: Number of links earned
The first parameter we’ll want to track is the number of successful placements we earned. That is, how many of our links got accepted.
When we visually represent our earned links, we’ll place them on our y-axis.
However, that parameter alone doesn’t paint us a very clear picture. We need to see how that number has changed over time. Hence, we add our second parameter.
Parameter 2: Time
Our second parameter is time, and it goes on our x-axis. If we update our chart with that parameter (in this case, broken out month by month), it now looks like this:
This updated chart gives us a much clearer picture of our performance and success, indicating how many links we earned month over month and giving us the ability to compare certain months to others. However, we can still get even more specific.
Parameter 3: Segmentation
To further break down our data, we can add forms of segmentation to help us better define our past results. In this case, we’ll add the types of links we acquired to our graph.
Now we can see how different link types were earned over time, and use that information when planning a future campaign. Based on this graph, if I wanted to make sure a campaign of mine would garner most of its links in its later months, I’d plan to write content to earn content links, based on the data from months 8 to 12.
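If your past campaigns live in a spreadsheet export, a few lines of pandas can rebuild this kind of segmented chart. Here’s a minimal sketch; the file name and the “month” and “link_type” columns are our assumptions, so adapt them to your own export:

```python
# A hypothetical sketch: rebuilding the month-by-month, link-type chart
# from a past-campaign export. "past_campaign_links.csv" and the "month"
# and "link_type" columns are assumptions; match them to your own data.
import pandas as pd
import matplotlib.pyplot as plt

links = pd.read_csv("past_campaign_links.csv")  # one row per earned link

# Parameters 1 & 2: links earned (y-axis) over time (x-axis);
# Parameter 3: segmented by link type
by_type = links.pivot_table(index="month", columns="link_type",
                            aggfunc="size", fill_value=0)

by_type.plot(kind="bar", stacked=True)
plt.xlabel("Month")
plt.ylabel("Links earned")
plt.title("Links earned per month, by link type")
plt.show()
```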
How to research and track new leads for leverage later
Now that we have a visualization of what we’re aiming for based on past campaigns, it’s time to start researching the types of sites that are proven to help us reach our goals.
Two of the ways we can do that are to:
- Categorize sites by the method we use to acquire links
- Understand the types of links we want from our targets
Basically, we’ll want to break down potential leads by link type, and start including link type in our database. Link type, as defined by the Page One Power team, is both the method and the content needed to acquire a link at a site. For example, some link types could be guest post, infographic, or resource page.
Now, as we do our research and compile our list of sites to reach out to, we’ll include link type for later analysis. However, it behooves us to get even more detailed.
When researching websites and adding them to your campaigns, also include for every lead in your database:
- Industry
- Target audience
- Authority
- How you found the website
- Contact email address
- Notes (e.g., “This author has tweeted with Stephen before, so send the outreach from Stephen’s email.”)
- The desired linking page (if you’re using BuzzStream)
- And last but not least, link type (as stated earlier)
Feel free to create additional data fields to help classify data even further.
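To make that concrete, here’s a hypothetical sketch of a single lead record in Python. The field names are ours, not an official schema; mirror them as columns in your spreadsheet or as custom fields in your CRM:

```python
# A minimal sketch of one lead record with the fields above. All field
# names and values are illustrative, not a prescribed schema.
lead = {
    "website": "example.com",
    "industry": "Pets",
    "target_audience": "Dog owners",
    "authority": 45,                      # e.g. a domain authority score
    "source": "Competitor backlink gap",  # how you found the website
    "contact_email": "editor@example.com",
    "notes": "Author has tweeted with Stephen before",
    "desired_linking_page": "https://spacedogs.com/training-guide",
    "link_type": "guest post",
}
```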
Bonus tip for BuzzStream users: You can even specify the type of data you’d like to collect in your additional data fields to make it easier to manipulate later on.
Bonus tip #2 for BuzzStream users: You can enter the page you’d like to acquire a link to in BuzzStream’s Link Monitoring tab, and BuzzStream will monitor the page for you and let you know if your link goes live. This can be especially handy when you have a campaign going on, since webmasters don’t always remember to let you know when they’ve added a link to your content.
Email outreach in a data-centered way
Once we’ve gathered all of our information on all of our websites, it’s time to send our data-influenced outreach, and make sure we’re tracking it.
At a minimum, we’ll want to make sure we include AIDA in our emails: Attention, Interest, Desire, and Action. But what does that all entail?
Thankfully, high quality outreach doesn’t need to be complicated. As the master link builders at Page One Power will tell you, to achieve AIDA, simply:
- Provide value
- Be clear about what you want
- Detail how to comply with the request
As always, you should personalize your outreach for each site/person you reach out to. However, you still want to make sure to follow the same basic email template in order to evaluate it later. If you write a custom email for every prospect, it’ll be hard to track what really worked and what didn’t when you’re doing your post-mortem analysis on your campaign.
A/B Testing your outreach
You’ve probably heard about A/B testing a lot, but what does it look like when you specifically apply it to your outreach?
First, let’s establish the basics. A/B testing can be broken down into four steps:
- Establish a control: an original result we can compare everything else to
- Define what variables we’re going to test
- Form a hypothesis: “If I change x, I expect y to happen”
- Test it!
In short, we want to establish a baseline result, then track all of our different variants and how they perform in relation to it, to see if our hypothesis is correct.
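Here’s a minimal sketch of what tracking variants can look like once results are in. The counts are invented; in practice, you’d pull them from your outreach export, with each lead tagged by the template variant it received:

```python
# A minimal sketch of comparing two outreach variants against each other.
# The counts are made up for illustration.
variants = {
    "A (control subject line)": {"sent": 50, "links": 5},
    "B (new subject line)": {"sent": 50, "links": 8},
}

for name, v in variants.items():
    rate = v["links"] / v["sent"]  # conversion rate per variant
    print(f"{name}: {v['links']}/{v['sent']} = {rate:.0%} conversion")
```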
A/B Testing your outreach in BuzzStream
If you use BuzzStream, you can use it to manage your templates and gather numerical data on the results of your outreach. To do A/B testing in BuzzStream, create different templates for every variable you’d like to test. That way, your data will be clearly segmented, and easy to verify.
For example, if I wanted to test my subject line, I would create two templates with different subject lines, then easily compare their results.
You can find more ideas for A/B testing from Kissmetrics’ guide on A/B testing emails. Now, not all of the ideas are directly applicable, since some are more specific to marketing emails, but browsing the list can give you some ideas.
For example, one idea from Kissmetrics’ list is to test whether to include a customer testimonial or not. You probably won’t need to test that in your email outreach, but you can test whether to name-drop a brand you’ve worked with before that you think your recipient may know.
If you’re looking for even more inspiration, you can check out this list of 150 A/B test ideas (though a couple of them are pretty out there). Again, it’s not an exact 1:1 match, but it can act as a good starting point in the brainstorming process.
Tracking the development of relationships
To further define my outreach results, I’ll want to make sure to use relationship stages, particularly custom ones. (You can also track relationship stages outside of BuzzStream, though it won’t be quite as easy.)
With relationship stages, you get a quick and clear status update of where exactly your lead is in the outreach process. You can tell whether you’ve started your outreach, whether you’re currently talking, or if your outreach has reached its conclusion.
So relationship stage is one more dimension you’ll want to capture in your database. Additionally, if you didn’t get a link, what happened?
Capture the reason for the rejection in your notes. If you’re using BuzzStream, you can create a Custom Field, specify that you’d like it to be a dropdown, and list the common reasons someone rejects your link. That way, the data won’t be so unique that it’s hard to parse (unlike every team member writing what went wrong in their own words, which would take extra time to distill into a usable form).
Analyzing the results of your outreach
You did all of the preparation. You sent all of your outreach emails and follow-ups. Now here comes the fun part of your outreach campaign: the analysis! If you use BuzzStream, now is the time to export your data so you can further manipulate and analyze it in Excel or Google Sheets.
First step in our analysis process: Seeing what our output was, and how close we got to achieving our goals.
How to use data to determine if we met our goals
As we established earlier, our goal was 10 links per month, and our campaign ran for 12 months. That means our overall goal was 120 links over 12 months.
Let’s say, for the sake of our example, we did end up getting 120 links over 12 months. Huzzah! Once we chart out our results on our graph, we can see that although some months we didn’t get to our goal of 10 links a month, we had other months where we surpassed our goal, so it averaged out to a success.
That’s why it’s important to track your results over time: if we took a month-by-month approach, we may have been tempted to drastically change our outreach when our outreach was not the problem. Not only that, but in our low months, it’d be a very bumpy ride emotionally.
So we know we met our goal, and we know that results varied from month to month. But what else is the data telling us?
Finding performance trends in your data, and what they mean
If we look at the trend of our link acquisition, we can see that overall, we’re tracking up and to the right. That means, as our 12-month campaign progressed, we got better and better at acquiring links.
So not only did we reach our goals this campaign, but as we continue sending more outreach and running future campaigns, we should average higher than 10 links per month (assuming all other variables stay the same).
Also worth noting are the three peaks we see in our graph.
Correspondingly, we see the associated valleys as well. Again, this is a good reason to take a zoomed-out view of our results before declaring a month a success or failure right off the bat, because some months can be better than others in certain industries. (For example, even the most convincing outreach email isn’t likely to make a dent in the education industry during summer break.)
Using data to determine and track your conversion rates
Now we want to calculate the conversion rate for the overall campaign. To do that, we’ll divide the total links acquired by the total number of targets. In other words, the output divided by the input.
Keeping our numbers from our previous example, we’ll say we reached out to 100 targets, acquired 10 links, and thus have a conversion rate of 10%. Great! But that’s just the beginning.
The remaining 90%: Analyzing those who failed to convert
So our successes only account for 10% of our total results. That means for 90% of our results, we don’t know what’s going on. If we only focus on what we’re doing right, we could be missing some low-hanging fruit in that remaining 90% of our targets.
One common false assumption is that anything other than a desired result must be bad. Let’s say we have these two situations:
- Certain template + certain number of follow-ups + certain piece of content = Link accepted!
- Different template + different number of follow-ups + different type of content = Link rejected. 🙁
The combination in the first situation worked, and it might be easy to assume that everything we did there was right, whereas everything we did in the second was wrong.
However, what the team at Page One Power has found is that, more often, it’s a certain combination that gets you over the threshold to earn a link; very rarely do you miss out on a link because every single aspect of your process was incorrect or bad.
So for that 90% that didn’t link, we don’t need to throw the baby out with the bathwater and assume every aspect of what we did was terrible. Instead, let’s look at the different aspects of our outreach campaign individually, and compare them to other campaigns and see if we notice a trend.
For example, do you see a trend in fewer links overall when you send only two follow-up emails instead of three? Now is the time to isolate your campaign factors, dig into the data, and then decide which ones you’d like to test or change for the next campaign.
Analyzing relationship stages
Relationship stages can help you get to the bottom of the remaining 90% of your leads. Specifically, we want to ask: What relationship stages are the other 90% of leads in? And how do those ratios compare to each other? This will help you determine bottlenecks and find where possible problems lie.
To do this, we compare the number of sites in each relationship stage to the total number of sites we reached out to. (A quick pandas sketch of this follows the list and Ben’s tip below.) We want to figure out the ratios for:
- How many people never responded?
- How many people said no?
- How many people are still deciding? (That is, they have questions, requests to change the content before linking to it, etc.)
- How many people did we never reach out to? (This relationship stage makes more sense if you’re working with a large number of websites. If you have a large input goal like 100 websites, it’s fairly likely you won’t get to every single one in real life.)
- How many people are we still attempting to reach? (Maybe you reached out to the wrong person, or you’re still following up because these sites have particularly high authority.)
Tip from Ben: You should adjust your efforts based on how authoritative the site is or how good a link it might be. It’s great to have guidelines, like sending three follow-ups at a maximum, but you should always give yourself room to make exceptions to the rule if a publication or website is a particularly good and valuable fit.
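If your export includes a relationship stage for every lead, computing these ratios takes just a few lines of pandas. A minimal sketch, assuming a “relationship_stage” column and a “campaign_export.csv” file (adjust both to your actual export):

```python
# A minimal sketch: the share of leads in each relationship stage.
# "campaign_export.csv" and the column name are assumptions.
import pandas as pd

leads = pd.read_csv("campaign_export.csv")

# Percentage of all contacted sites sitting in each stage
stage_ratios = leads["relationship_stage"].value_counts(normalize=True)
print(stage_ratios.mul(100).round(1))
```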
So for the purpose of our example, let’s say this is what we found when we dove into our spacedogs.com campaign relationship stages:
- 10% – Accepted link
- 20% – Rejected link
- 25% – Never responded
- 15% – Still negotiating
- 20% – Still attempting to reach
- 10% – Not started
(Note that these percentages, while fictional, are similar to real numbers the Page One Power team sees with conversion rates around 10%.)
Now as a single snapshot in time, we can pinpoint where our campaign got bottlenecked or tripped up, which is helpful. But the real benefit comes from documenting these ratios over time. We want to look at how the ratios change in comparison to each other over time (month over month, or even year over year if you have the data), and most importantly, how they change in comparison to our goal.
How to track changing ratios to find opportunities for improvement
So what do these changing ratios mean? For the sake of this test, let’s only look at sites that have been in our system for at least a month, meaning they’ve been in BuzzStream, and we’ve already reached out to them and the conversations have reached their natural conclusions. That way, a bunch of sites we just sent outreach to yesterday won’t mess up our data.
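Here’s a hypothetical sketch of that filter-then-ratio step in pandas. The “date_added” and “relationship_stage” column names are assumptions based on a typical export:

```python
# A sketch of tracking stage ratios month over month, keeping only leads
# that have been in the system for at least a month. Column names are
# assumptions; match them to your export.
import pandas as pd

leads = pd.read_csv("campaign_export.csv", parse_dates=["date_added"])

# Only leads old enough for their conversations to have concluded
cutoff = pd.Timestamp.today() - pd.DateOffset(months=1)
settled = leads[leads["date_added"] <= cutoff]

# Percentage of leads in each stage, grouped by the month they were added
monthly = (settled
           .groupby(settled["date_added"].dt.to_period("M"))["relationship_stage"]
           .value_counts(normalize=True)
           .mul(100)
           .round(1))
print(monthly)
```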
Here are some problems or areas where improvements can be made, and what trends you might see in the data that would indicate that these are the problems:
Problem 1: Organizational issues
A possible indicator of an organizational issue is an increase in leads in the “Not started” or “Attempting to reach” relationship stages. Visually, it might look like this:
Again, we want to look at the percentages of leads “Not started” and “Attempting to reach,” not just the raw numbers, since our campaigns can vary in size.
Ideally, the percentage of “Not started” leads would be zero, but at the very least, the percentage should hold stable across campaigns, as opposed to increasing. An increasing share of “Not started” leads can mean that your team is doing a better job at prospecting than actually reaching out.
We also want to look at the rate at which “Not started” is increasing. If the percentage is increasing at a steady rate, it likely means that yes, there’s an organizational problem, but the problem isn’t compounding. That is, the trend isn’t an exponential increase. If it were, you would want to escalate the importance of fixing this problem.
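To check whether the trend is steady or compounding, you can look at the month-over-month change in the stage’s share. A minimal sketch with made-up numbers; in practice, the series would come from the monthly ratio table above:

```python
# A sketch of checking whether a stage's share is growing at a steady or
# an accelerating rate. The percentages here are invented.
import pandas as pd

not_started_pct = pd.Series([8, 9, 10, 11, 12],
                            index=["M1", "M2", "M3", "M4", "M5"])

growth = not_started_pct.diff()                    # month-over-month change
accelerating = growth.diff().dropna().gt(0).all()  # are the changes themselves growing?
print(growth)
print("Compounding problem!" if accelerating else "Steady (still a problem)")
```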
Problem 2: Difficulty in targeting sites
A possible indicator of an issue in targeting the correct sites is an increase in leads in the “Rejected” relationship stage. Properly vetting sites is the foundation for our outreach, and if we don’t get this step right, it doesn’t matter how convincing or beautifully crafted our outreach is. We won’t hit our mark because our audience was the wrong one.
Let’s say we charted our percentage of leads “Rejected,” and got the same graph as the “Not started” graph above: a graph showing the percentage of “Rejected” increasing gradually, month over month. Again, it’s not as bad as if it were increasing exponentially, but it’s still a clear sign that the way your team vets sites needs to be reassessed.
If you’re struggling with qualifying leads, check out Andrew Dennis’ link prospecting tips and tricks, and Adria Saracino’s 2012 Definitive Guide to Qualifying a Link Prospect for some solid tips.
Problem 3: Your outreach
Possible indicators that you’re struggling with your outreach would be an increase in percentage of “Rejected” leads, and/or an increase in percentage of “Never responded” leads over time.
Common outreach issues include being unclear about what we’re asking for and emailing the wrong person. However, there are so many facets to an outreach email template that it can be hard to pinpoint an exact issue when you see a trend like the one described.
Ben and the Page One Power team have found that the best approach is to go with your gut, propose a possible solution, try it out in your next round of outreach, then track your results and see if there’s any movement.
Further analysis: Making connections between ratios
Overall, the lesson is that mapping our ratios over time can give us an idea of where we can spend our time in order to make the largest improvements in our outreach campaigns.
As we increase or decrease our output in relation to our goal, we also want to monitor which metrics are increasing and decreasing as well. That way, we can draw correlations between how certain metrics affect others. If we improve in one area, how should we expect that improvement to affect our output?
For example, if we reduce the percentage of leads “Rejected,” do we see an increase in accepted links? In that example, we’d expect the answer to be yes, but we can’t assume. What we might see is that we changed our email template, and although the percentage of “Rejected” leads goes down, the percentage of “Never responded” goes up. In that case, we can see where the problem has shifted. Our leads went from not liking what we had to say, to possibly not understanding what we’re trying to say, so we need to try rewriting our template.
Comparing input vs. output across campaigns
In a similar vein, when comparing different campaigns where we gave similar input, are there any significant changes to the output across the different projects?
To find that out, we’ll group together similarly sized and scoped projects and compare their data ratios over time to find differences. If there are any large discrepancies in their data, identify the variables that make each campaign unique. A 10% difference in output from one campaign to another probably isn’t enough to bat an eye at, but something like a 55% difference is enough to merit further digging.
Analyzing the results of your A/B testing
If you ran an A/B test, be sure to segment your data further by making the same comparisons and creating the same ratios for each of your A/B testing variants, instead of just for the campaign as a whole.
We’ll want to look at whether our A/B testing had any significant impact on the data we’re seeing. If we see a higher conversion rate for one version, or more accepted links, it’s pretty safe to say it’s a good improvement. However, just to be sure, Ben recommends you test these findings at least twice, and with at least 100 total leads per campaign (that is, 50 leads per variant) to confirm.
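If you’d like a more formal check than eyeballing the rates, a two-proportion z-test is one option. This statistical test is our suggestion, not something prescribed in the webinar; Ben’s rule of thumb about sample sizes still applies:

```python
# A sketch of a two-proportion z-test on two outreach variants. The counts
# are invented, and statsmodels is required (pip install statsmodels).
from statsmodels.stats.proportion import proportions_ztest

links = [5, 8]    # accepted links for variant A and variant B
sent = [50, 50]   # leads contacted per variant

stat, p_value = proportions_ztest(links, sent)
print(f"p-value: {p_value:.3f}")  # under ~0.05 suggests a real difference
```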
Analyzing for missing information
Lastly, is there any information we would like to have, but don’t have yet? Data that we can collect next time, or that we may already have and just aren’t looking at?
Let’s say your client wants a diverse link portfolio. They want content to be the link method, and they want to start seeing improvements relatively quickly.
If time were a really important factor to them, wouldn’t it be nice if you had data that could tell you how long different link methods take to convert? You can analyze past campaigns by link method and see how long it took, on average, for different link types to get either a link or a rejection. Let’s say you did the analysis and found out that resource links take half the time that content links do to complete. That’s definitely information you’ll want to have when talking to this client.
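Here’s a minimal sketch of that time-to-conversion analysis in pandas. The “date_added,” “date_resolved,” and “link_type” columns are assumptions; derive them from whatever timestamps your CRM actually records:

```python
# A sketch of measuring average days-to-resolution by link type.
# "all_campaigns.csv" and the column names are assumptions.
import pandas as pd

leads = pd.read_csv("all_campaigns.csv",
                    parse_dates=["date_added", "date_resolved"])

# Days from first contact to a link (or a rejection)
leads["days_to_resolve"] = (leads["date_resolved"] - leads["date_added"]).dt.days

# Average resolution time per link type
print(leads.groupby("link_type")["days_to_resolve"].mean().round(1))
```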
Ben and his team have actually done this analysis before, and noticed that unlinked brand mentions tend to convert faster than other link types. Keep that in mind when you need some quick wins.
Analyzing results by team member
It can also be useful to compare campaign results across multiple team members. This can help you better understand your team members’ specific strengths and weaknesses, and who to assign to different campaigns. Visualization, whether via creating those charts in Excel or leveraging another reporting tool, can really help here.
For example, when Page One Power did this analysis, they noticed that some of their team members were really stellar at outreach, while others were skilled at writing the content other sites want to share. As a result, they put their best talent in the most fitting roles, improving their production floor and increasing their success across the board.
Resources
Get the recording of the webinar here.
To learn more about the Page One Power team, be sure to check out their website, read their blog, and follow them on Twitter.