As software marketers, much of what we do is centered around understanding and influencing probability.
Targeting, segmentation, conversion funnels, ICP definition, and buyer orchestration workflows are all exercises in optimizing or measuring the probability of a desired outcome. Demand marketing in particular is as much about devising and running experiments in probability as it is about finding buyers and getting their attention.
Intent data is a powerful signaling input that helps us improve our marketing odds. It allows us to add a layer of actual user behavior to our models and processes and, hopefully, increase the probability that our marketing tactics capture the attention of the right people at the right time. When we see an account signaling purchasing intent, e.g. by reading selection-oriented content, we can use this information to modify our activity in multiple ways, such as prioritizing sales outreach to the account, accelerating content nurturing, or increasing our ABM investment.
Where intent data gets really interesting is when we can extract contextual information from the signal and use it as a probability value, helping us direct the choices we make for each account.
Let’s investigate three ways we use contextual information in intent data to improve the probability of our desired outcomes:
Content scoring
If intent data is generated from a consistent set of content sources (e.g. web pages) then we can attribute a score to each piece of content relative to the confidence we have that the topic is a signal of our desired outcome.
Each time we develop a messaging tactic for a new desired outcome, we simply rescore the content in relation to the desired outcome. Over time, this establishes multiple scoring sets, matching intent data to messaging tactics with the highest probability of success.
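As a rough sketch, scoring sets can be kept as one lookup per desired outcome, with the same content rescored under each outcome. All content titles, outcome names, and scores below are hypothetical, invented purely for illustration:

```python
# Hypothetical scoring sets: the same content, rescored per desired outcome.
# Scores use the 0-1 scale described in the article.
SCORING_SETS = {
    "early_stage_awareness": {
        "Learning the language of HR software": 1.0,
        "HR software pricing guide": 0.25,
    },
    "late_stage_selection": {
        "Learning the language of HR software": 0.25,
        "HR software pricing guide": 1.0,
    },
}

def score_for(outcome: str, content_title: str) -> float:
    """Look up a content piece's score under a given desired outcome."""
    return SCORING_SETS[outcome].get(content_title, 0.0)
```

Adding a new messaging tactic then just means adding a new scoring set, leaving the existing ones untouched.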
For example, let’s say we sell an HR software solution and want to improve our awareness and influence in the early stage of the software buyer’s journey.
Our objectives might be as follows:
- Desired outcome: be ‘first solution on the scene’, help early stage buyers frame the problem
- Target group: accounts and prospects in the “passive looking” stage
- Tactic: email track and LinkedIn advertising promoting content piece “Learning the language of HR software”
- Qualifying score: >0.50
In this example, we’ve attributed five possible scores to each piece of content we receive intent signals from, each relative to the probability that the reader of the content is at the early stages of the buyer journey.
The content views of each account are captured and scored, then averaged to determine a total score for the account. This could be derived from a single intent signal (session visit) or an aggregate of several signals (several sessions) over a set period of time.
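The capture-and-average step might look like the sketch below. The content titles, scores, and account views are all hypothetical; the qualifying threshold (>0.50) is the one from the tactic above:

```python
# Hypothetical content scores for the early-stage desired outcome
# (0-1 scale, higher = stronger early-stage signal).
CONTENT_SCORES = {
    "Learning the language of HR software": 1.0,
    "5 signs your HR processes need automating": 0.75,
    "HR software pricing guide": 0.25,
    "Implementation checklist": 0.0,
}

def account_content_score(content_views: list[str]) -> float:
    """Average the scores of all content an account has viewed."""
    scores = [CONTENT_SCORES[title] for title in content_views]
    return sum(scores) / len(scores)

# An account that viewed two early-stage pieces and one pricing guide:
views = [
    "Learning the language of HR software",
    "5 signs your HR processes need automating",
    "HR software pricing guide",
]
score = account_content_score(views)   # (1.0 + 0.75 + 0.25) / 3
qualifies = score > 0.50               # qualifying score from the tactic above
```

The `views` list can hold the signals from a single session or from several sessions aggregated over the chosen time window; the averaging is the same either way.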
Here, the signals from Premier Properties and Capri Casuals exceed our target, so both will be included in our early stage tactic. Closer examination of Jackson Steinem’s content consumption suggests they are closer to selection and implementation, so they are unlikely to engage with our early stage messaging. Of course, we have alternative tactics for buyers like these in the later stages of the buyer journey, where this account’s signal will score highly!
Confidence scoring
Confidence scoring allows us to attribute a level of conviction to our expectation that a piece of intent data is a predictor of purchasing intent. Clearly, the greater our confidence, the higher the prioritization we should give the signal.
A credible proxy for confidence is how engaged an account is with our content. We can use frequency (the number of separate visits to the content) and quantity (the aggregate number of content views) as an effective measure of account engagement. Frequency is an especially valuable indicator: the more persistent the account’s interest in our content, the more confident we can be that they are at some stage in the buyer journey, e.g. currently selecting software. It therefore makes sense to weight each visit more heavily than each content view.
In the example below, we’ve scored the intent data signals from nine accounts over a 30-day period, valuing each separate visit at twice the value of a content view, i.e. Confidence = (Frequency × 2) + Aggregate views.
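The formula is simple enough to sketch directly; the visit and view counts below are invented for illustration, and the visit weight of 2 is the one used in this example:

```python
def confidence_score(visits: int, total_views: int, visit_weight: int = 2) -> int:
    """Confidence = (frequency x visit_weight) + aggregate content views."""
    return visits * visit_weight + total_views

# A hypothetical account with 3 separate visits and 7 content views
# over the 30-day window:
confidence_score(3, 7)  # 3*2 + 7 = 13
```

Changing `visit_weight` lets you test how strongly persistence should count relative to raw consumption.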
Combined scoring
By combining content and confidence scores we can generate a score that reflects both the relevance of the content the prospect has viewed, and the strength of the signal i.e. how much content has been consumed and how frequently. This gives us a meaningful indication of which accounts should be prioritized against our tactics.
If your content and confidence scores use different scales, like the ones we’ve used here, you’ll need a little math to standardize the two sets of numbers. For example, our content score uses five values in a 0–1 range (0, 0.25, 0.5, etc.), while our confidence score is a sum of two integers. To create a single comparable set of values, we’ve normalized both to a 0–1 scale and added them together:
In this case, content and confidence scores are equally weighted but we can use weighting to optimize the scoring depending on our intended tactic or our capacity. For example, if we wanted to broaden the number of new account dial opportunities for our sales reps we might increase the weighting of our confidence score so that emphasis is placed on account engagement over content relevance. On the other hand, if we want to run sales reps at closely targeted opportunities only, we might increase the weighting of the content score.
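A minimal sketch of the normalize-and-combine step, assuming min-max normalization (one reasonable choice; the article doesn’t specify the method) and hypothetical scores for three accounts. Equal weights of 0.5 produce the same ranking as simply adding the two normalized scores:

```python
def min_max_normalize(values: list[float]) -> list[float]:
    """Rescale a set of scores to a 0-1 range."""
    lo, hi = min(values), max(values)
    if hi == lo:                      # no spread: nothing to rescale
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

def combined_scores(content: list[float], confidence: list[float],
                    w_content: float = 0.5, w_confidence: float = 0.5) -> list[float]:
    """Weighted sum of normalized content and confidence scores, per account."""
    c = min_max_normalize(content)
    f = min_max_normalize(confidence)
    return [w_content * ci + w_confidence * fi for ci, fi in zip(c, f)]

# Hypothetical scores for three accounts:
content = [0.75, 0.25, 1.0]     # 0-1 content relevance scores
confidence = [13, 5, 9]         # integer confidence scores

equal = combined_scores(content, confidence)
# Emphasize engagement over relevance to widen the pool of
# sales-dial opportunities:
engagement_led = combined_scores(content, confidence,
                                 w_content=0.3, w_confidence=0.7)
```

The two weight parameters are the tuning knobs described above: shift weight toward confidence to broaden outreach, or toward content relevance to tighten it.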
Cluster segmentation
Grouping related content into clusters establishes relationships between content pieces so we can assign marketing tactics based on our intent data. We can then make appropriate choices for our follow-on messaging strategies based on what we believe a view of a content topic represents.
Clusters can be cut in a multitude of ways and tested against our tactics. They could be subject-based (e.g. “cloud software”), problem-based (e.g. “improving access to analytics”), or stage-based (e.g. “demo guide”). Clustering is a particularly useful way to create some context when we have only one intent data signal (a single content view) to work from.
For example, in the clusters below we’ve made some assumptions based on when each piece of our content is likely to be consumed in an HR software buyer’s selection journey. Here we’re assuming that if an account is reading a pricing guide or implementation article they are more likely to be at the end of the buying process than at the start.
Knowing this, or rather improving the probability that we know this, could enable us to deploy the following tactic:
- Trigger sales outreach to known contacts at the account
- Provide the sales rep with links to the other content from the Deciding cluster
- Trigger ABM advertising or email marketing using other content from the Deciding cluster
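The cluster-to-tactic mapping above can be sketched as a simple lookup. The cluster names, content titles, and tactic descriptions are all hypothetical stand-ins for whatever your own content map contains:

```python
# Hypothetical clusters grouping content by buyer-journey stage.
CLUSTERS = {
    "Framing": [
        "Learning the language of HR software",
        "5 signs your HR processes need automating",
    ],
    "Deciding": [
        "HR software pricing guide",
        "Implementation checklist",
    ],
}

# Hypothetical follow-on tactic for each cluster.
TACTICS = {
    "Framing": "early-stage email track + LinkedIn advertising",
    "Deciding": "trigger sales outreach with Deciding-cluster content",
}

def tactic_for_view(content_title: str):
    """Map a single content view to its cluster and follow-on tactic."""
    for cluster, titles in CLUSTERS.items():
        if content_title in titles:
            return cluster, TACTICS[cluster]
    return None, None

# One pricing-guide view is enough to route the account to the
# late-stage ("Deciding") tactic:
cluster, tactic = tactic_for_view("HR software pricing guide")
```

Because the lookup works from a single view, it gives you a usable routing decision even when an account has only one intent signal on record.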
Conclusion
Clearly, there is a considerable random element to online content consumption and inevitably there are false positives and negatives in all intent data. However, using intent data effectively is an exercise in marginal gains, and extracting meaning from content topics and context is simply another fraction of a percentage point towards optimizing for our desired outcome. We may just have to get into the weeds a little first!
Thanks for taking the time to read my thoughts on intent data. If you’d like to share your opinions on this topic, I’d love to hear from you. Please email me directly at: richard@prospectpath.com