10 Tips for Running Effective Predictive Personalization Campaigns

In personalization, there is a growing trend of using machine learning to predict and optimize the user experience at a 1:1 level.

We call the collective use of these machine learning techniques predictive personalization campaigns.

Predictive personalization refers to a type of campaign in which a machine learning model predicts the best experience to serve a visitor based on current and historical performance and the user's individual data profile (a contextual bandit). Decisions are made in real time at a 1:1 level; the model makes use of all the data available about that user, along with context about the location, content, and other factors that go into the experience.

The predictive campaign sends the majority of traffic to the experiences the model predicts will perform best, exploiting those insights in real time, while holding out some traffic to continue exploring performance trends for the other experiences.

Just like with an A/B test, you can test predicted campaigns vs. a control to determine if a statistically significant uplift is achieved. But unlike an A/B test, the expectation with predictive campaigns is that they are "always on" and constantly adjusting traffic to the right experience at a 1:1 level.

The adoption of predictive personalization campaigns is still in its early days for our industry. For many programs, experience and maturity with these techniques are still low, but interest is growing, especially as the solutions on the market continue to mature.

In this article we discuss best practices we have acquired from years of running predictive campaigns across a range of mid-market and enterprise clients using the FunnelEnvy platform.

What Makes For More Effective Predictive Personalization Campaigns

Below is a list of our observations, drawn from running hundreds of personalization campaigns over the years, on the conditions that contribute to successful predictive personalization campaigns.

  1. A variety of user intent. The more varied your users are in intent and behavior, the more valuable predictive models will be in detecting and predicting better user experiences. The more varied the signal, the bigger the role predictive decisions can play in choosing the best outcome for a range of user intentions.

A good example of this is with predictive campaigns on the homepage, where user intent typically varies across the board.

  2. A variety of goals to predict for. A variety of possible user outcomes to predict for is another valuable ingredient in a successful predictive campaign. As with user intent, we want to capitalize on the strengths of predictive models, and more varied outcomes to predict for usually result in better business outcomes.

A good example of this is a campaign that rotates a variety of goals or user journeys as the primary offer and call to action (CTA). In B2B, for instance, you may have 4-5 different offers such as free trial, request demo, request pricing, book a Drift meeting, download a whitepaper, or register for a webinar. Having multiple offers to present and predict usually results in better performance from the predictive models.

  3. A variety of goal values to predict for. Multiple goals are a good thing, and multiple goals with different values are even more helpful to your predictive campaigns. Once again, we are seeking out scenarios with multiple goal options across a range of perceived values. This matters because we want to give the model more detailed feedback on what is working and what is more valuable to the business. If, as in the previous example, you had several B2B goals but all were valued the same (say $100 per goal completion), every goal would be perceived as equal, providing less signal back to the model. However, if your goal values range widely, from $50 to $1,000 per lead, the predictive model has far more signal to work with.

An example of goal variety we typically see with SaaS clients is that goal values fall into two tiers: lower-value content engagement goals (content downloads, webinars, video views) and higher-value sales intent goals (request demo, request pricing, contact sales, free trial).
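To make this concrete, here is a minimal Python sketch of how differently valued goals give a model richer feedback than uniformly valued ones. The goal names and dollar values are illustrative assumptions, not FunnelEnvy defaults.

```python
# Sketch: differently valued goals produce a richer reward signal than
# uniform values. Goal values here are illustrative, not platform defaults.

GOAL_VALUES = {
    "content_download": 50,   # lower-value content engagement tier
    "webinar_signup": 75,
    "request_demo": 500,      # higher-value sales intent tier
    "free_trial": 1000,
}

def reward(conversions):
    """Total reward the model observes for a list of completed goals."""
    return sum(GOAL_VALUES[goal] for goal in conversions)

# Two sessions with the same number of conversions send very different
# feedback when values vary: the model learns which experience drives
# high-value outcomes, not just any outcome.
session_a = ["content_download", "webinar_signup"]
session_b = ["request_demo", "free_trial"]
```

If every goal were worth the same $100, both sessions would feed back the identical reward of 200, and the model would have no basis for preferring the experience that produced the demo request and trial.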

  4. Goals are aligned to business revenue. While goals can vary widely, it is important that they are either revenue goals or events that correlate closely with revenue. Ultimately, predictive campaigns do best when they predict which experience is the most valuable, so set up your campaigns to predict high-value outcomes like purchases, trial-to-paid conversions, MQLs/SQLs, and closed/won deals. Where possible, avoid micro-conversions and vanity metrics (clicks, page views, video plays, etc.) as your primary KPIs. The further your KPIs are from revenue or business value, the harder it will be for predictive campaigns to be effective.
  5. There is some version of a revenue journey you can track and optimize for. The more diverse and complex your buyer journey is, the more you will benefit from a predictive campaign that can take in all the data and outcomes and predict better outcomes for your users.

If every visitor has the same exact journey, there is less to predict. However, if you're a typical SaaS business with a range of SMB-to-enterprise offerings, both self-service and enterprise sales scenarios, or a longer sales funnel that includes MQL/SQL/opportunity stages, then identifying the valuable trends across multiple goals in that revenue journey is exactly the kind of challenge machine learning models are better at predicting for.

  6. Testing of high-impact placements on the page. This recommendation is not unique to predictive campaigns and holds true for all types of campaigns, including A/B tests. If you are going to run a predictive campaign, run it in a high-impact location for it to be most effective. Good placements include the primary offer/CTA in page sections above the fold. We have seen predictive campaigns run on small content strips or on sections below the fold, and while they may contribute value, they won't generate a significant impact compared to continually optimizing the high-impact locations.
  7. Higher volume of traffic and conversions. Another no-brainer, but worth calling out. You want to run campaigns on highly trafficked pages that correlate well with conversions and revenue (think product and pricing pages vs. community or blog pages).

You should be prioritizing opportunities where you can move the needle and truly impact revenue for your site.

  8. More available data attributes for predictions. Predictive models need data to provide relevant signals. Many of the recommendations above call out that more variety in intent and goal outcomes gives the model more data points to work with for its predictions. The objective here is to generate more contextually relevant data attributes that can feed into the model. The strategy is to instrument more meaningful data events and attributes for the predictive model, a practice often referred to as feature engineering.

Examples include setting up more targeted audience segments (flagging prospects and customers, for example), integrating a new data source like a firmographics API from Demandbase or Clearbit, integrating with your CDP, CRM, or marketing automation platform, or defining content and product affinities based on user behavior and pages visited. In each of these examples we are feeding the model additional relevant data about user intent, behavior, or what we know about our users, all of which can help the model identify experience/user matches that correlate better with revenue.
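In practice, feature engineering often amounts to flattening raw attributes from several sources into a single vector the model can consume. A minimal sketch, with hypothetical field names and thresholds:

```python
# Sketch: assemble model features from CRM, firmographic, and behavioral
# data. All field names, sources, and thresholds here are hypothetical.

def build_features(user):
    """Flatten raw user attributes into a flat feature dict for a model."""
    firmo = user.get("firmographics", {})
    behavior = user.get("behavior", {})
    return {
        # CRM flag: is this a known customer vs. prospect?
        "is_customer": user.get("crm_status") == "customer",
        # Firmographic attributes from a reverse-IP or enrichment vendor.
        "industry": firmo.get("industry", "unknown"),
        "employee_band": "enterprise" if firmo.get("employees", 0) >= 1000 else "smb",
        # Behavioral signal: pricing-page views suggest purchase intent.
        "pricing_page_views": behavior.get("pricing_views", 0),
    }

visitor = {
    "crm_status": "prospect",
    "firmographics": {"industry": "software", "employees": 2500},
    "behavior": {"pricing_views": 3},
}
```

Each derived feature (customer flag, employee band, pricing-page views) gives the model one more dimension along which experiences can be matched to users.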

  9. Stakeholders understand and appreciate the differences between A/B testing and predictive campaigns. This is an organizational consideration: when you run predictive campaigns, there are certain differences that you and the larger organization need to be aware of and comfortable with. Unlike an A/B test, your experiences may not be sticky to a user over the life of the campaign (this is optional). Secondly, because predictive campaigns are designed to maximize revenue, you may not have a clear winner or a straightforward outcome after running the campaign. In predictive campaigns there is often no single winner; you find that different experiences perform better for different user segments. Some organizations are very comfortable with this reality; others struggle and prefer the simpler outcomes that come with traditional A/B tests, where you have a clear learning and can full-scale a single winner for the entire population. This is where you need to educate your organization on the pros and cons of running predictive campaigns.
  10. Stakeholders value and are responsible for revenue. While everyone in the company will say they value revenue, not everyone is responsible for generating it on behalf of your business, and that is OK. But when it comes to predictive campaigns, the best outcomes are usually achieved when the primary stakeholder is aligned to revenue targets and takes responsibility for growing those revenues with proactive initiatives like site personalization. This is where we see the most success with our clients. Being motivated by revenue rather than just learnings results in a more focused approach to revenue optimization, and in many cases a predictive campaign is more effective than an A/B test at maximizing revenue for your site and user experience.

Please keep in mind that while these are 10 conditions we believe improve the likelihood of your predictive campaigns being successful, the full list should not serve as a set of prerequisites before you launch your first or next predictive campaign. Typically, if you can align on 2-3 of these conditions, you are in good shape to see success.

The goal of this list is to share lessons learned and what to look for as you continue to grow and expand your personalization programs and, ideally, incorporate predictive campaigns more often into your portfolio of marketing and personalization initiatives.

Getting Started

Now that you know what to look out for when designing and prioritizing your predictive campaigns, let us know how we can help you maximize your personalization initiatives.

We’d love to hear from you! You can contact us here: https://www.funnelenvy.com/contact/

When to A/B Test and When to Use Predictive Bandits

For the majority of marketers, when you talk about CRO and optimization programs, people immediately jump to A/B testing and assume it is the primary, and only, tactic for demonstrating success in their programs and initiatives.

While A/B testing is a critical tool in your optimization program, it shouldn't be the only option on the table. It is a valuable solution, but not a one-size-fits-all answer to every business challenge.

With recent advances in data capabilities, marketing technologies, and real-time computing power, there are now more ways to solve the same optimization problem.

For this discussion, we want to focus on two different techniques.

A/B test – A controlled experiment where traffic is split across two or more experiences; visitors are randomly assigned an experience and stay in it for the duration of the test. At the end of the test, you determine a single winner based on which experience generated the best outcome for your primary KPI, given a predefined sample size/test duration and, ideally, a specific statistical significance threshold. You then stop the test and full-scale the single winner for all traffic, either directly in the testing platform or hard-coded into your CMS/platform.

Predictive Bandit – An experiment where traffic splits are not equal and visitors are not randomly assigned. In these campaigns, a machine learning model predicts the best experience to serve a user based either on current and historical performance (multi-armed bandit) or on current/historical performance plus the user's profile (contextual bandit). Like an A/B test, you can run the predicted experience vs. a control to determine if a statistically significant uplift is achieved. But unlike an A/B test, the expectation with bandits is that they can run on an ongoing basis.
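To make the contrast concrete, here is a minimal epsilon-greedy bandit sketch in Python. Real platforms use more sophisticated (often contextual) models; this only illustrates the explore/exploit split that distinguishes bandits from a fixed 50/50 A/B allocation. The class and parameter names are illustrative.

```python
import random

# Minimal epsilon-greedy bandit: mostly exploit the best-performing
# variant, while keeping a small exploration slice. Illustrative only.

class EpsilonGreedyBandit:
    def __init__(self, variants, epsilon=0.1):
        self.epsilon = epsilon                      # exploration fraction
        self.trials = {v: 0 for v in variants}      # times each variant served
        self.rewards = {v: 0.0 for v in variants}   # cumulative reward per variant

    def choose(self):
        """Pick a variant: explore with probability epsilon, else exploit."""
        if random.random() < self.epsilon:
            return random.choice(list(self.trials))
        # Exploit: highest observed mean reward; untried variants go first.
        return max(
            self.trials,
            key=lambda v: (self.rewards[v] / self.trials[v])
            if self.trials[v] else float("inf"),
        )

    def update(self, variant, reward):
        """Feed an observed outcome back into the model."""
        self.trials[variant] += 1
        self.rewards[variant] += reward
```

Unlike a fixed A/B split, each `update` immediately shifts future traffic toward whichever variant is earning more reward, while the epsilon slice keeps exploring the others.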

Now that we better understand the two techniques let’s explore reasons why both have a place in your optimization strategy.

Pros and Cons of A/B Tests

A/B testing is embraced in the analytics community as the smart, scientific way to measure the impact of change. Controlled experiments are used to test the effectiveness of new medicines, in academic and scientific studies, and of course in marketing. A/B testing is not new to marketing; it has been done for decades in the direct mail/direct response world, and it has become the recommended way to test changes on your website and mobile app, assuming you can convince your stakeholders that testing is needed (it's 2020, but some organizations still resist).

PROs:

  1. Quantify the impact of changes on your site. Don’t leave change to chance, measure and quantify the positive and negative impact of changes in experience with confidence.
  2. A/B tests are well understood in our industry, and for the most part well understood across an organization. The statistics may not be, but in general most folks understand the approach. If everything goes according to plan you have a clear outcome: either the control won, or one of the challengers is declared the winner.
  3. Clear learnings. Related to the benefit above, the value of an A/B test is not just its impact on business results; it's the shared learning of what may or may not work for your site and business. With A/B tests you ideally gain a better understanding of your visitors and customers.
  4. At scale, you can create an organizational culture of experimentation. This change in culture leads to more creativity, risk taking, and better data driven decision making. Typically organizations that test more, make better and more informed decisions for their organization.

CONs:

  1. Success rates will vary, but are often low. Industry averages place the success rate of A/B tests at 30-40%, so you have to expect that the majority of the time you will not end up with a new winner, and even among the winners, few are high-impact wins. The hidden benefit is that this makes the argument for why A/B testing is so important: if we didn't test, the majority of the great ideas we think will perform better would actually perform worse or have no impact.
  2. One size fits all and the winner takes all. When you analyze a typical A/B test you will often see, at a segment level, that different groups convert very differently across the variants. At the end, you pick the variant that was best on average across all your traffic and full-scale that one. In doing so you leave some money on the table, as the winner will not be the winner for every segment.
  3. Results can and do change over time. If you look at performance trends over the life of a test, results often trend up and down over time, and that reality continues even after you hard-code and full-scale the winner. You can lock in the experience, but you can't lock in the results going forward. A decent percentage of the time, as tests continue, you experience a regression to the mean, where results start to flatten out. After all, if you run the same test twice, you will seldom get the same results. This is why, for many programs, when you full-scale a winner you usually don't experience that lift on an ongoing basis. Testing is still a better way to make decisions, but as we know, audiences and behaviors change, and seasonality can also be a factor.
  4. You sacrifice business value for concrete learnings. An A/B test is designed first and foremost to generate a solid learning. You forgo short-term uplift when a variant does better than the rest, and you suffer through some short-term downside when a variant clearly underperforms. You accept a suboptimal impact on revenue for the duration of the test because you are prioritizing a clear test result over short-term business benefit.
  5. You need traffic. Not every organization can run A/B tests; sufficient traffic and conversions are required to reach a statistically significant outcome. Not all sites have that, especially in B2B.
  6. Operational costs can run high. From selecting a testing platform, bringing on web developers, additional creative, and data analysis, the tool and people costs can be meaningful. Plus, there are the operational costs of introducing more time and resources to launch something, and the occasional negative costs from broken experiences or flawed tests. All marketing teams and programs incur people, tools, and operational costs, and testing is no different and often carries more cost overhead.
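The traffic requirement in point 5 can be illustrated with a quick two-proportion z-test sketch: the same relative lift that is decisive at high volume is statistically invisible at low volume. The conversion numbers below are illustrative.

```python
import math

# Sketch: a two-proportion z-test, the kind of check behind "you need
# traffic". The same relative lift that is decisive at high volume is
# statistically invisible at low volume. Numbers are illustrative.

def z_score(conv_a, n_a, conv_b, n_b):
    """Z statistic for the difference between two observed conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# The same 5% -> 6% lift at two very different volumes:
small = z_score(25, 500, 30, 500)           # below the 1.96 threshold
large = z_score(2500, 50000, 3000, 50000)   # well above 1.96
```

At 500 visitors per arm the lift cannot be distinguished from noise at the conventional 95% confidence level; at 50,000 per arm the identical lift is decisive.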

I go into more detail on the cons of A/B tests because some of them are less well understood. Still, when done right, A/B testing is worth the effort, and the benefits far outweigh the cons.

But as I hope I outlined above, A/B tests have distinct strengths and weaknesses.

Pros and Cons of Predictive Bandits

Bandits have gained more popularity in marketing in recent years as computing power has advanced to the point where real-time machine learning predictions can be applied in more and more marketing technologies and for more use cases.

We have seen the trend emerge in digital advertising where bid and creative recommendations are often driven by machine learning decisions.

However, when it comes to site optimization, the adoption and acceptance of bandits as a proven technique is still in its early days. For many programs, experience and maturity with these techniques are still low, even though some of the most popular testing platforms have included these capabilities for a number of years. Let's discuss why.

PROs:

  1. Bandits are by design biased toward business outcomes. Unlike A/B tests, which are designed to reach a clear learning as quickly as possible, bandits are designed to maximize business outcomes at the expense of clear and precise learnings. The algorithms typically send more traffic to experiences that perform best and route traffic away from experiences that underperform, either overall or for a specific audience segment.
  2. Bandits use all the data to your business advantage. In A/B tests you may use data to inform and drive your hypothesis, but when it comes time to set up the test, you typically configure a controlled randomized experiment where traffic is split evenly and visitors are randomly assigned to one of the variants for the duration of the test. Again, this is absolutely the right way to run a controlled experiment for your best chance at a concrete learning. However, it also means you are ignoring the trends that live within a test: the micro-trends of which traits and segments perform best for each variant. With bandits, machine learning consumes all available data on variant performance and the user to determine the best possible experience for that user and drive the optimal business outcome. It's far from perfect, but you are not leaving the outcome to chance; you are using everything you know about the user and the situation to make the experience decision.
  3. Bandits adjust to changing trends and behaviors. With Bandits, the intention is for it to be always on and constantly adjusting to the latest performance trends of the campaign. Unlike an A/B test where you pick one winner and lock it in, a Bandit can adjust as results change over time, and minimize any loss in performance and capitalize on any shifts in winning experiences.
  4. Bandits can work with less traffic. Because you are optimizing for revenue instead of learnings, you can still benefit from bandits even when you don't have enough traffic to run a clean test.
  5. Bandits work well in time-sensitive situations. Popular content and Black Friday sales are good examples. By the time an A/B test gives you the right answer, the opportunity may have passed you by. Bandits react in real time to trends, which lets you take advantage of short-term and seasonal situations.

CONs:

  1. Bandits are hard to interpret, understand, and communicate. While A/B tests are well understood, bandits are not. Because decisions are made in real time by a machine learning model rather than by set A/B splits, the team running the campaign has less certainty about why an experience was shown and less control over the traffic rotation.
  2. Bandits offer limited learnings. A/B tests, as controlled experiments, are designed to produce learnings. Bandits typically sacrifice clarity about which experience works best. You can often be informed about what segment or feature influenced the model's decision, but it's not as clear-cut what the final, absolute winner is. Often you end up A/B testing the technique itself: do bandits outperform the control or an A/B test for this page or site? With bandits it's harder to generate clear learnings about winning experiences; in the end you are optimizing for revenue and other business outcomes at the expense of a clear, isolated learning. As an organization, you and your stakeholders need to be aware of and accept this reality.
  3. Bandits can provide an inconsistent user experience. As called out earlier, to maximize business outcomes bandits by design do not make experiences sticky to the user and will often serve a different experience to the same user if the data suggests it will result in a better outcome. While this maximizes revenue, it leads to experiences that are more dynamic and changeable for a given user. Dynamic websites are generally considered a positive, but because they are still uncommon in 2020 (surprisingly), this dynamic approach to site experience can be a drawback for some visitors.
  4. Bandits require more thoughtfulness on experience design. While you can A/B test most things from layout, to copy, to color, to offers, the same isn’t always true for predictive bandits. Bandits work best where you have varied user intent and ideally varied offers and outcomes. Bandits are more effective if they can predict for more outcomes for a larger range of user intent. If you are running CTA color and size changes you are better off running a traditional A/B test. There is likely less signal in the data in terms of user preference of a button color/design and the results will likely hold true on average for most users. That is why bandits work better when intent varies across the visitors and potential offers displayed can also vary.

When to Use A/B Tests and When to Use Predictive Bandits

Now that you know the pros and cons of A/B tests and predictive bandits, let's talk about some practical applications of each and when you should use one over the other.

When A/B Tests are Recommended

  1. When you need a clear learning and more certainty on the final decision. Examples here would be a pricing or homepage test.
  2. When you want to lock in a specific design or experience for everyone. Examples here are things like a new homepage design, a new form layout, or your site-wide CTA treatments. There is more operational value in locking in that specific winner and then optimizing further on top of it.
  3. When intent and offer options are narrow in scope. When intent is similar for all users and the offer is the same for everyone, A/B testing usually works better than predictive bandits. An example is optimizing the final cart checkout page: everyone there is ordering (or not), and the offer is checkout (or not).

When Predictive Bandits are Recommended

  1. When you want to maximize revenue. If you have aggressive revenue goals then bandits are a better tactic to get you there as you are using all the data available to make the optimal revenue decision. A good example here is presenting the right offer on a homepage hero or promoting a specific price/package on the pricing page. In those situations, the general site is the same but you are predicting the best offer and experience to spotlight to maximize revenue.
  2. When intent and offer options vary broadly. Predictions work better when there's a wider range of users and intent and a wider range of offers and outcomes to present. This is where the value of machine learning, crunching dozens or hundreds of data attributes in real time, is helpful. Good examples are homepages, brand landing pages, and pricing/plan pages, where both intent and the options/offers presented can vary.

Now that you know the strengths and benefits of both tactics I think you will agree that both tactics should be part of your optimization and personalization toolkit.

Running A/B Testing and Predictive Campaigns in FunnelEnvy

With the FunnelEnvy platform we give you the ability to run both A/B and predictive campaigns and apply the right tool for the job. Unlike other platforms, both tactics are available as part of our standard license.

To get started you just create a new campaign and then select the preferred template option between Predictive and A/B testing.

If you select the A/B testing template, the "Campaign Decision" section will default to the settings typical of an A/B test.

As you can see, A/B testing is the default decision type, and the "Persist Variation Decisions" feature is checked so the same experience is served to the user across sessions. Lastly, you can adjust the traffic allocation to determine what percentage of traffic enters the test; by default it's set to 100%.

If you were to select a Predictive campaign template, the decision type would instead default to Predictive as seen in the screen below.

In addition, the "Persist Variation Decisions" feature is not selected by default. As mentioned earlier, we typically see improved revenue performance when the campaign can rotate different offers over time for the same user. That makes sense: a user may not respond to an initial offer but may convert when presented with another. We do recognize there are legitimate reasons to make predictive decisions sticky, especially when running in locations like your pricing and plans page. And as with the A/B template, you can set traffic allocation from 0-100%.

What is specific to predictive campaigns is the set of predictive experiment options. Here you have two choices to make. The first is what percentage of traffic should be included in the predicted group vs. the holdback. The experiment section is where you can test the incremental value of a predictive campaign by holding back a portion of your traffic as the control.

In the screen below we have set the holdback to 50%. This results in 50% of the traffic being assigned to the predictive experience, while the remaining 50% is served the control. This allows you to test the incremental value of running a predictive campaign on your site.

Typically, we recommend clients start a new campaign with a 50/50 split, and once the predictive campaign outperforms the control, we recommend scaling the predictive side to 100%.

The second option you have with a predictive campaign is the composition of your holdback group. You can assign the holdback a specific variation, such as the baseline control, or set it to random, in which case the holdback runs as an A/B test across all the available variants. The "Random" option is useful if you want to determine the incremental uplift of running a predictive campaign vs. an A/B test.
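Conceptually, the assignment logic described above can be sketched as follows. This is an illustrative sketch, not FunnelEnvy's implementation; `predict_best` is a hypothetical stand-in for the real model, and the hashing scheme is an assumption.

```python
import hashlib
import random

# Sketch of holdback assignment: a holdback percentage carves off a
# control group, which serves either a fixed baseline or a random variant
# (an A/B test inside the holdback). Illustrative only.

def predict_best(variants):
    """Hypothetical stand-in for the model's real-time prediction."""
    return variants[-1]

def assign(visitor_id, variants, holdback_pct=50, holdback_mode="baseline"):
    """Return (group, variant) for a visitor given the holdback settings."""
    # Hash the visitor id to a stable bucket in [0, 100) so the same
    # visitor always lands in the same group.
    bucket = int(hashlib.md5(visitor_id.encode()).hexdigest(), 16) % 100
    if bucket < holdback_pct:
        if holdback_mode == "random":
            # "Random" holdback: an A/B test across all variants.
            return "holdback", random.choice(variants)
        return "holdback", variants[0]   # fixed baseline control
    return "predictive", predict_best(variants)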

As you can see, our predictive campaign setup still allows you to isolate and measure the incremental impact of a predictive bandit approach by running it as a controlled experiment.

Our typical client will often run both types of campaigns simultaneously on different areas of their site and targeted for different audiences. Rather than having to choose one tactic or another, FunnelEnvy clients have the best of both worlds.


Getting Started

As you can see, A/B testing and predictive bandits are both valuable tactics in your optimization/personalization programs. Like any tactic you need to pick the right tool for the job.

If you’re not yet using FunnelEnvy but are interested in personalizing your website using a combination of A/B tests and predictive campaigns we’d love to hear from you! You can contact us here: https://www.funnelenvy.com/contact/

 

How to Effectively Personalize your Website using Account Data for Anonymous Traffic

Unlike consumer marketers, B2B revenue teams often reason about their market at an organization or account level. That may be based on specific named accounts or company size, industry or other related firmographic attributes.

Much of the traffic coming to the website is, of course, "anonymous," meaning visitors haven't shared any contact information. There are, however, a number of vendors selling firmographic data that can provide this information even for anonymous traffic. This is possible because every web request must contain a source IP address, and these vendors have mapped many (though far from all!) of them to specific accounts.

Traditionally used by sales teams, some of these solutions can be integrated into the website in real time. This opens up some interesting opportunities to improve the traditionally static experience, such as highlighting industry-specific offerings or enterprise plans, or even targeting customers of competitors and highlighting differences.

A word of caution is warranted here, however. This approach to personalization is an investment that goes beyond just the vendor costs, and we've seen a lot of campaigns where the return did not materialize. So let's go into the more effective use cases, how to select a vendor, and how to integrate the data into FunnelEnvy audiences and predictive campaigns.

Beware Vanity Experiences

Personalization over email is useful because it helps the recipient understand that it’s not a mass-emailing robot on the other end and that the message has been tailored to them. Website visitors have different expectations and what works over email can cross the line or be creepy on site.

Website personalization is most effective when it helps the customer by presenting them with an offer (next best action) that’s most relevant for them in their journey. That offer could be content, starting a free trial, contacting the sales team or whatever is both most relevant for them and maximizes their likelihood of conversion.

Although it may sound obvious, where we’ve seen campaigns underperform with reverse IP personalization is where it doesn’t meet these customer-centric goals. Consider the following:

  • Injecting the account name in the copy – Doesn’t the visitor already know where they work?
  • Crafting experiences based on visitor industry when there’s no industry specific features to the product or service.
  • Serving pages that are specific to individual named accounts. The volume is generally too low to make a difference and again the customer already knows where they work!

These sorts of experiences are self-serving and what we call Vanity Experiences.

Vanity experiences, including one on FunnelEnvy.com. It’s no coincidence that these companies also sell the products that let you do it!

On the other hand reducing friction and targeting a more relevant direct-response offer based on firmographic data can be very effective. In some cases you could even skip steps in the journey – such as eliminating the pricing page for enterprise visitors.

Firmographic Vendor Considerations

There is no vendor that will be able to match all of your traffic; in fact, match rates are typically in the 10-30% range. A variety of factors can influence that, but probably the most important is who you’re selling to and the types of accounts visiting your site.

Large enterprises, universities and governments often secure well-known blocks of IP addresses, which are much easier to identify. On the other hand, smaller businesses often use shared office space and Internet Service Providers (ISPs), which makes identifying them much harder. If you’re primarily selling to small businesses, it’s unlikely that you’ll match enough accounts for this to be a cost-effective strategy.

If, however, you do sell to larger organizations, then even a low overall match rate could still be worth pursuing. The effective match rate for larger accounts is likely to be much higher, and most B2B companies generate the majority of their revenue from enterprise customers.

Aside from the match rate, the vendor’s performance (time to return a match response) is an important factor on the website. If the integration is too slow the page may render before you have an opportunity to personalize, producing a sub-optimal “content flicker” on the all-important first page view.
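A common mitigation (a sketch under assumptions, not FunnelEnvy’s actual implementation; `fetchFirmographics` and the 150 ms budget are hypothetical) is to race the vendor lookup against a short timeout and fall back to the default experience when the match doesn’t arrive in time:

```javascript
// Race a promise against a timeout so a slow match response
// never blocks first paint for long.
function withTimeout(promise, ms, fallback) {
  const timeout = new Promise((resolve) =>
    setTimeout(() => resolve(fallback), ms)
  );
  return Promise.race([promise, timeout]);
}

// fetchFirmographics is a stand-in for the vendor's async API call.
async function chooseExperience(fetchFirmographics) {
  // If no match arrives within ~150 ms, render the default experience
  // rather than risk visible content flicker.
  const profile = await withTimeout(fetchFirmographics(), 150, null);
  return profile ? { variant: "personalized", profile } : { variant: "default" };
}
```

The right budget depends on where your snippet loads relative to the page render; the trade-off is between match coverage and flicker risk.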

Activating Reverse IP Data with FunnelEnvy

FunnelEnvy’s platform has out of the box support for three leading firmographic vendors – Clearbit, Demandbase and Kickfire. Once you’ve selected one of these vendors, integrating into the platform and using the data for segmentation (audiences) and directly within 1:1 predictive campaigns can be done in a few minutes and requires no IT involvement.

Data Source Setup

Each of these firmographic vendors is a Data Source in FunnelEnvy, and activating any one of them is as simple as going into the integrations, locating the appropriate data source and clicking on the activate checkbox.

Once activated and saved the data source will appear in the list of active integrations.

Since these data sources return data in the browser the FunnelEnvy javascript snippet must also be present along with the reverse IP vendor snippet. Data collection happens automatically without additional setup and can be used immediately for creating audiences and in predictive campaigns.

Audiences

In the Audiences section of FunnelEnvy you can create conditions based on the activated provider. The rule builder includes all of the individual data attributes from the provider, and rules can be AND’d or OR’d together for flexibility.

As with any of our data sources, conditions can be combined with other sources (behavioral, Marketo, etc.) to create audiences defined from multiple data sources.
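To make the idea concrete, here is a sketch of how nested AND/OR conditions over attributes from multiple sources can be evaluated (the rule shape and attribute names are illustrative, not FunnelEnvy’s actual format):

```javascript
// Recursively evaluate a rule tree against a visitor's attributes.
function evaluate(rule, visitor) {
  if (rule.all) return rule.all.every((r) => evaluate(r, visitor)); // AND
  if (rule.any) return rule.any.some((r) => evaluate(r, visitor)); // OR
  return visitor[rule.attribute] === rule.equals; // leaf condition
}

// Example audience: enterprise software companies, OR anyone already in
// a (hypothetical) Marketo-sourced "target-accounts" segment.
const audience = {
  any: [
    {
      all: [
        { attribute: "industry", equals: "Software" },
        { attribute: "size", equals: "Enterprise" },
      ],
    },
    { attribute: "marketoSegment", equals: "target-accounts" },
  ],
};
```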

From the audience builder you may want to report on visitor behavior from a particular firmographic segment (e.g. SMB or Enterprise visitors). This can be accomplished within the audience through the Google Analytics integration by setting a custom dimension and/or sending an event.

For personalization, Audiences can be used within the targeting section of campaigns operating in either A/B/n or predictive mode. If an Audience is selected in the campaign targeting, visitors must meet both the page and audience targeting conditions to be eligible to see a variation.

Even if you don’t create any audiences the underlying firmographic data is used in our real-time predictive campaigns.

1:1 Predictions with Firmographic Data

Firmographic data sets are excellent for our predictive campaigns because they’re generalizable and often highly correlated to experiences and outcomes. There are only so many audiences you’ll be able to create but every data point from these providers can be used by our algorithms to predict which experience is most effective on a 1:1 visitor basis.
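The underlying intuition can be sketched with a toy epsilon-greedy policy (an illustration only, not FunnelEnvy’s production model): the firmographic attributes form the context, most traffic exploits the best-known variation for that context, and a small epsilon keeps exploring:

```javascript
// Toy epsilon-greedy contextual policy: per-context conversion stats
// drive variant selection; a small epsilon reserves traffic for exploration.
class GreedyPolicy {
  constructor(variants, epsilon = 0.1) {
    this.variants = variants;
    this.epsilon = epsilon;
    this.stats = {}; // stats[context][variant] = { shown, converted }
  }
  // Context key built from firmographic attributes.
  key(ctx) { return `${ctx.industry}|${ctx.size}`; }
  choose(ctx) {
    const s = this.stats[this.key(ctx)];
    if (!s || Math.random() < this.epsilon) {
      // Explore: pick a uniformly random variant.
      return this.variants[Math.floor(Math.random() * this.variants.length)];
    }
    // Exploit: pick the highest observed conversion rate for this context.
    const rate = (v) => (s[v] ? s[v].converted / Math.max(1, s[v].shown) : 0);
    return this.variants.reduce((best, v) => (rate(v) > rate(best) ? v : best));
  }
  record(ctx, variant, converted) {
    const k = this.key(ctx);
    this.stats[k] = this.stats[k] || {};
    const s = (this.stats[k][variant] =
      this.stats[k][variant] || { shown: 0, converted: 0 });
    s.shown += 1;
    if (converted) s.converted += 1;
  }
}
```

A real contextual bandit generalizes across contexts rather than keeping independent counts per segment, which is exactly why every individual firmographic attribute can contribute signal.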

You can see the effect of this in our campaign signal report which shows how strong the predictive signals are and how much they correlate to uplift and revenue. Individual firmographic attributes are often highly represented in successful campaigns.

A great example of this in action is a homepage campaign with different variations for the SMB or Enterprise journeys. Since reverse IP data is available even on the first page visit, our model can identify patterns in the visitor profile and serve more relevant experiences to your customers.

Getting Started

It is possible to improve upon static website experiences with reverse IP firmographic data and help your customers while at the same time increasing your conversion and revenue KPIs. If you’re a FunnelEnvy customer and want to explore firmographic personalization for anonymous traffic let us know.

If you need help selecting a vendor, we can help with that too. You can contact us here anytime: https://www.funnelenvy.com/contact/

 

Personalizing the Revenue Journey with Segment Data

Accelerate your customers’ journey to revenue with FunnelEnvy, now powered with Segment.

Segment helps their customers instrument, store and unify data about their visitors and the actions they take all the way to revenue. Now with the FunnelEnvy Segment integration you can deliver personalized, 1:1 website experiences and optimize for revenue using all of that rich customer data that you’re already collecting in Segment.

What does this mean? Segment customers will be able to run more effective campaigns using better data with less custom code required.

Check out our integration on Segment


The Importance of Context with Marketing Experiments

By now most marketers are familiar with the process of experimentation: identify a hypothesis, design a test that splits the population across one or more variants, and select a winning variation based on a success metric. This “winner” has a heavy responsibility – we’re assuming that it confers the improvement in revenue and conversion that we measured during the experiment.

The experiments that you run have to result in better decisions, and ultimately ROI. Further down we’ll look at a situation where an external validity threat in the form of a separate campaign would have invalidated the results of a traditional A/B test. In addition, I’ll show how we were able to adjust to and even exploit this external factor using a predictive optimization approach, which resulted in a Customer Lifetime Value (LTV) increase of almost 70%.


January 28th, 2020 | Conversion Rate Optimization, B2B, Experimentation

Optimization Pitfalls to Avoid In 2020

The Activity Trap

Sales reps aren’t paid on the number of calls they make, and real estate agents don’t get commission on the number of showings they do. Activity does not equate to outcome, and conflating the two can have really expensive implications.

The same story applies to marketers. We seem to spend a lot of effort fostering cultures of activity rather than outcomes.



How Not Picking an Experiment Winner Led to a 227% Increase in Revenue

By now most marketers are familiar with the process of experimentation: identify a hypothesis, design a test that splits the population across one or more variants, and select a winning variation based on a success metric. This “winner” has a heavy responsibility – we’re assuming that it confers the improvement in revenue and conversion that we measured during the experiment.

Is this always the case? As marketers we’re often told to look at the scientific community as the gold standard for rigorous experimental methodology. But it’s informative to take a look at where even medical testing has come up short.

For years women have been chronically underrepresented in medical trials, leaving testing populations that disproportionately favor males. This selection bias extends back to the pre-clinical stages, with the majority of drug development research done on male-only lab animals.

And this testing bias has had real-world consequences. A 2001 report found that 80% of the FDA-approved drugs pulled from the market for “unacceptable health risks” were found to be more harmful to women than to men. In 2013 the FDA announced revised dosing recommendations of the sleep aid Ambien, after finding that women were susceptible to risks resulting from slower metabolism of the medication.

This is a specific example of the problem of external validity in experimentation, which poses a risk even when a randomized experiment is conducted appropriately and it’s possible to infer cause-and-effect conclusions (internal validity). If the sampled population does not represent the broader population, then those conclusions are likely to be compromised.

Although they’re unlikely to pose a life-or-death scenario, external validity threats are very real risks to marketing experimentation. That triple-digit improvement you saw within the test likely won’t produce the expected return when implemented. Ensuring test validity can be a challenging and resource-intensive process; fortunately, however, it’s possible to decouple your return from many of these external threats entirely.

The experiments that you run have to result in better decisions, and ultimately ROI. Further down we’ll look at a situation where an external validity threat in the form of a separate campaign would have invalidated the results of a traditional A/B test. In addition, I’ll show how we were able to adjust to and even exploit this external factor using a predictive optimization approach, which resulted in a Customer Lifetime Value (LTV) increase of almost 70%.


A Culture of Optimization Eats Experimentation and Personalization for Breakfast

 

As marketers we could learn a lot from ants.

They don’t attend conferences, have multi-million dollar budgets or get pitched by the latest AI-based tech vendors. Yet over millennia they’ve figured out a radically efficient solution to an important and complex problem – how best to find food to sustain the colony.

This is no easy task. The first ant leaving the colony walks around in a random pattern. It’s likely she (foraging ants are female workers) doesn’t find food, so she’ll return to the colony exhausted. It’s not a completely wasted effort, however: she (and every other ant behind her) will leave behind a pheromone trail that attracts other ants.

Over the course of time and thousands of individual ant voyages, food will (likely) be found. Ants that do find food will return immediately back to the colony. Other ants will follow this trail and, because pheromone trails evaporate over time, they’re most likely to follow the shortest, most traveled (highest density) path.

This approach ensures that the colony as a whole will find an optimal path to a food source. Pheromone evaporation also helps ensure that if the current source runs out, or a closer one is found, the colony will continue to evolve to the globally optimal solution.

It’s a classic optimization solution that maximizes a critical outcome as efficiently as possible, and one that has been studied by entomologists, computer engineers and data scientists. In the current B2B marketing environment it can illuminate where we’re spending our time and money.
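For the curious, the pheromone dynamics described above can be sketched in a few lines (a toy simulation only, not a full ant colony optimization implementation; the path and parameter shapes are made up):

```javascript
// Toy pheromone simulation: shorter paths accumulate pheromone faster,
// and evaporation lets the colony adapt if conditions change.
function simulateColony(paths, trips = 2000, evaporation = 0.05) {
  const pheromone = paths.map(() => 1); // start with uniform trails
  for (let t = 0; t < trips; t++) {
    // Each ant picks a path with probability proportional to its pheromone.
    const total = pheromone.reduce((a, b) => a + b, 0);
    let r = Math.random() * total;
    let chosen = 0;
    for (let i = 0; i < paths.length; i++) {
      r -= pheromone[i];
      if (r <= 0) { chosen = i; break; }
    }
    // Evaporate all trails, then deposit more pheromone on shorter paths.
    for (let i = 0; i < paths.length; i++) pheromone[i] *= 1 - evaporation;
    pheromone[chosen] += 1 / paths[chosen].length;
  }
  // The strongest trail is the colony's current "best" path.
  return paths[pheromone.indexOf(Math.max(...pheromone))];
}
```

Run with a long and a short path and the colony converges on the short one, which is the property that makes the analogy interesting for optimization.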


If you care about B2B conversions, stop producing content

Houston, we have a problem.

As enterprise-focused B2B marketers, we have a problem.

We all agree that we want to grow traffic to our website, turn the traffic into leads and convert the leads into customers.

Yet, we have all blindly trusted the theory that producing more content, showing product options, displaying more testimonials, and creating more case studies will get you a bigger pipeline.

Let us be the first to refute this claim: more is not better.

In fact, with every additional piece of content or white paper you are killing pipeline. Why, you might ask?

Because you are simply overwhelming your customers.

Explanation please!?

To illustrate this point, let’s talk about Cognitive Load.

Cognitive load refers to the total amount of mental effort being used in the working memory.

When we put irrelevant, unnecessary and distracting information in front of people, we fill up that working memory. The result is a decreased ability to absorb information, learn and ultimately make decisions.

While it may seem that having multiple content options on your website increases the likelihood that you will connect with your visitor, it actually has the opposite effect.

When customers are given too many content options you are forcing them to make decisions that take up their mental resources, derailing your chance at a direct path to purchasing.

Can someone say cognitive overload?!

Think about it this way: the most important factor in the design of a website is making it easy for customers to find what they want. Customers crave simple and easy navigation over anything else in regard to design.

Most important factors in the design of a website

If your website is filled with decisions for users to make, you are not making it easy for them to find what they want!

Let’s break it down.

Let’s take a look at some examples of unintentional cognitive overload.

Content.

As marketers, we are producing too much content that is both expensive and unnecessary.

In reality, only a small portion of your content is necessary to help the customer move down the purchase funnel, and it is our job as marketers to present that one (perfect) piece of content. Sadly, we are letting our customers down by allowing them to read irrelevant content and thus introducing cognitive load.

Produce more content

Calls to action.

While letting visitors choose from multiple CTAs may seem like a great way to help customers find what they want, it actually leads to confusion. Rather, you should be putting them on a specific path that you have identified as most effective for conversions.

Multiple calls to action

Product options.

If you’re selling an enterprise-focused product, it is likely that your website is showcasing all of the products and services that you offer. Again, this overwhelms customers.

We should know our customers well enough that we only show the products we believe (based on research) they are likely to buy! Don’t give them fifty options and hope their first selection is the one best suited for them.

Analytic Solution

Case studies.

A customer only works in one industry; do not show them case studies from other industries where the use cases might be completely different. This content is irrelevant, distracting and increases cognitive load.

All case studies

Industry solutions.

We are asking customers to unnecessarily identify themselves. Having to go through a selection process, like the example below, does not inspire confidence that the software is well suited for a visitor’s industry.

Industry Solutions

Pricing options.

A multitude of pricing options is a perfect example of cognitive overload. We are overwhelming our customers with pricing options to the point where they don’t know which option to choose.

Target enterprise accounts shouldn’t see basic pricing tiers. Similarly, SMBs shouldn’t see enterprise offerings. This substandard experience increases friction and reduces conversion rates!

Pricing options

 

It is time to stop – you are overwhelming your customers.

So, what’s the solution then? How do we make sure customers aren’t overwhelmed with cognitive overload?

It’s simple: reduce content and only show the most relevant information.

Calls to Action

Keeping this in mind, what if instead of showing everything to all of our visitors, we only showed the most relevant and effective calls to action?

Logos & Testimonials

What if we showed the logos and testimonials that were most relevant to customer needs, or highlighted the testimonials that most reflected their pain points?

Pricing Plans

What if we focused on and only showed the pricing that was going to be relevant for the given account and the features of those plans that were going to meet their needs?

Customer Experience

What if we only showed customers relevant experiences based on what we knew about them? 

Doing things right

Doing this right has real and meaningful implications. 94% of buyers in a Demand Gen survey chose the winning vendor because that company demonstrated a stronger knowledge of their needs.

In an Accenture survey, half of B2B customers already expect improved personalized product or service recommendations. In fact, 65% of business buyers are likely to switch brands if a company doesn’t make an effort to personalize communications to their business.

The takeaways.

The more options you give customers, the more cognitive load you put on them. The result is a filled-up working memory and hindered ability to make decisions toward purchasing a product. 

Instead of producing more content, focus on showing the single experience that will resonate most with your customers!

 
