Use Google Analytics to Understand and Convert Leads

You’re already leveraging Google Analytics to drive website strategy. But you’re missing crucial context about the single most relevant factor for predicting and influencing visitor behavior: their stage in the buying process.

You use this context in your marketing automation campaigns – different messaging and different offers based on content consumed and lead score. You exclude existing customers from paid acquisition campaigns. But when you look at website data in Google Analytics, all of those visitors are lumped together.

The revenue opportunity for an existing customer is different from a new visitor. And the content and offers that are relevant to a new visitor are redundant for a lead in your marketing or sales pipeline.

Let’s take a look at how you can get data on leads into your Google Analytics reporting views, and what to look for once you’ve got it.

What’s a lead?

Leads are visitors who have identified themselves, but haven’t paid you money. They might be on a free trial, or they might have filled out a form to access gated content.

Getting lead data into Google Analytics

First, create a Custom Dimension to hold data about visitor stage.

Open up your Google Analytics account and click on the admin section.

Screenshot of 'Admin' option in Google Analytics

Next, go to the “Property” panel and click on “Custom Definitions,” then “Custom Dimensions.”

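Once the Custom Dimension exists, you’ll need to populate it on every hit. Here’s a minimal sketch, assuming Universal Analytics with analytics.js, a dimension created at index 1, and your own logic for determining the stage (the getVisitorStage() helper below is hypothetical):

[javascript]// Rough sketch only: report the visitor's stage as a Custom Dimension on each pageview.
// Assumes Universal Analytics (analytics.js), a Custom Dimension created at index 1,
// and a hypothetical getVisitorStage() helper of your own that returns something like
// 'visitor', 'lead', or 'customer' (e.g. read from a cookie your automation tool sets).
var visitorStage = getVisitorStage();

ga('set', 'dimension1', visitorStage);
ga('send', 'pageview');
[/javascript]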

(more…)

July 21st, 2020 | Full-Funnel Optimization

Attributing Campaigns to Sales Opportunities in Marketo

I’m writing this because it’s the year 2020 and we’re still having trouble attributing onsite campaigns/testing to Marketo MQLs/Opportunities/Revenue.

It is meant primarily for technical practitioners as a quick-start guide, and is so simple you’ll probably be able to get it hooked up today.

As a result, this guide is very tactical in nature, with the end goal of helping you answer the question: “What impact are my onsite campaigns and tests having on revenue?”

Which solution is right for you

If you happen to already be using FunnelEnvy, then enabling our Marketo Integration is all you need to do. This not only enables attribution, but unlocks additional functionality, including the ability to target onsite users based on their Marketo classifications. It’s a paradigm shift in how you think about marketing automation and I encourage you to read our recent article to find out more.

If on the other hand you’re still using a web-based testing tool like Optimizely or Adobe Target to run your onsite campaigns then read on, as this guide is primarily for you. But first, a cautionary tale:

Always optimize for downfunnel outcomes, not onsite vanity conversions

Recently, FunnelEnvy ran a very visible experiment via Adobe Target for a well-known SaaS company. Early results were trending downward, and web analytics data showed a sharp drop in incremental conversions and leads. There was talk of ending the test early, since this potentially represented a huge economic impact.

Luckily, the attribution module you are about to see had already been installed, allowing us to query Marketo directly, and the Marketo data told a different story. We were able to calculate that the test was actually responsible for a significant increase in annual recurring revenue! Without this additional layer of information, we likely would have moved on.

But back to our discussion:

The Solution

We’re going to break this guide up into two main parts:

Part 1: Create/maintain a running list of campaigns/tests a user has seen

Part 2: Pipe this list into Marketo

Create/maintain a running list of campaigns a user has seen.

In order to automate this as much as possible, we’re going to create a centralized module that fires on every page. This allows us to just install it once (via GTM, Launch etc.) and not have to make any edits when new campaigns/tests are launched.

You’ll need to take advantage of Adobe’s Response Tokens or Optimizely’s client-side object if you want to go this route (an Optimizely sketch follows the full module below). Users of other platforms like Google Optimize will need to follow a more manual approach, adding the module to every new campaign/test. The principles stay the same; it just requires a bit more upkeep.

The Code
[javascript]try {
(function () {
// Callback for Adobe Target response tokens
document.addEventListener(adobe.target.event.REQUEST_SUCCEEDED, function(e) {
var tokens = e.detail.responseTokens;

if (isEmpty(tokens)) {
return;
}

var uniqueTokens = distinct(tokens);
// ...campaign handling goes here (see the full module below)
});
})();
} catch(err) {
console.log(err);
}
[/javascript]
The start of our module – An Adobe Response Token listener
[javascript]// Cycle through each token
uniqueTokens.forEach(function(token) {

var cookieName = token["activity.name"] + ' ' + token["experience.name"];

// Slugify the cookie name.
cookieName = cookieName.toLowerCase().replace(/\((evar.*?)\)|\[(.*?)\]/g, '').trim().replace(/[^a-z0-9]+/g, '-');
});
[/javascript]
Next we cycle through each response token (there is one per active campaign), slugifying the campaign/variation name.
[javascript]/*
Find the existing cookie if it exists.
Adds new campaign values to the front of the cookie
*/
var existingCookie = _satellite.cookie.get('marketoCookie') || '';

if (existingCookie.indexOf(cookieName) === -1) {
var newCookie = cookieName + '|' + existingCookie;

_satellite.cookie.set('marketoCookie', newCookie, {expires: 30});
}
[/javascript]
Since this module runs on every page load, we also need to handle duplicates: the indexOf check above only prepends a campaign if it isn’t already in the cookie (whose value ends up looking something like homepage-hero-variation-b|pricing-cta-control).
[javascript]var checkLength = function checkLength() {
if (newCookie.length > 2000) {
newCookie = newCookie.split('|');
newCookie.pop();
newCookie = newCookie.join('|');

checkLength();
}
};
checkLength();
[/javascript]
Finally, Marketo inputs (you’ll set this up soon) have a 2,000-character limit, so we truncate the oldest campaigns when the cookie grows too long.

Here’s the reusable function in its entirety:
[javascript]try {
  (function () {
    // Callback for Adobe Target response tokens
    document.addEventListener(adobe.target.event.REQUEST_SUCCEEDED, function (e) {
      var tokens = e.detail.responseTokens;

      if (isEmpty(tokens)) {
        return;
      }

      var uniqueTokens = distinct(tokens);

      // Cycle through each token
      uniqueTokens.forEach(function (token) {

        var cookieName = token["activity.name"] + ' ' + token["experience.name"];

        // Slugify the cookie name.
        cookieName = cookieName.toLowerCase().replace(/\((evar.*?)\)|\[(.*?)\]/g, '').trim().replace(/[^a-z0-9]+/g, '-');

        /*
          Find the existing cookie if it exists.
          Adds new campaign values to the front of the cookie
        */
        var existingCookie = _satellite.cookie.get('marketoCookie') || '';

        if (existingCookie.indexOf(cookieName) === -1) {
          var newCookie = cookieName + '|' + existingCookie;

          /*
            If above the 2000 character Marketo input limit,
            truncate old campaign values
          */
          var checkLength = function checkLength() {
            if (newCookie.length > 2000) {
              newCookie = newCookie.split('|');
              newCookie.pop();
              newCookie = newCookie.join('|');

              checkLength();
            }
          };
          checkLength();

          _satellite.cookie.set('marketoCookie', newCookie, { expires: 30 });
        }

      });
    });

    // True for undefined, null, or empty arrays/strings
    function isEmpty(val) {
      return (val === undefined || val == null || val.length <= 0);
    }

    // Builds a string key from a token's properties so duplicates can be detected
    function key(obj) {
      return Object.keys(obj)
        .map(function (k) { return k + "" + obj[k]; })
        .join("");
    }

    // De-duplicates the array of response tokens
    function distinct(arr) {
      var result = arr.reduce(function (acc, e) {
        acc[key(e)] = e;
        return acc;
      }, {});

      return Object.keys(result)
        .map(function (k) { return result[k]; });
    }
  })();
} catch (err) {
  console.log('Error in Target Marketo Cookie');
}
[/javascript]
A simple solution – ready to copy and paste.
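
If you’re running Optimizely Web rather than Adobe Target, only the listener portion changes. Below is a rough sketch (not a drop-in implementation – verify the shape of the returned objects against Optimizely’s documentation for your snippet) that reads the active campaigns from Optimizely’s client-side state API and slugifies them the same way; the cookie logic from the module above would be reused as-is.

[javascript]// Sketch only: read active Optimizely Web campaigns/variations client-side
// and build the same slugified names used by the Adobe Target module above.
try {
  var state = window.optimizely && window.optimizely.get && window.optimizely.get('state');

  if (state) {
    var campaigns = state.getCampaignStates({ isActive: true });

    Object.keys(campaigns).forEach(function (id) {
      var campaign = campaigns[id];
      var variationName = campaign.variation ? campaign.variation.name : '';
      var cookieName = (campaign.campaignName + ' ' + variationName)
        .toLowerCase()
        .trim()
        .replace(/[^a-z0-9]+/g, '-');

      // ...prepend cookieName to the marketoCookie value exactly as in the module above
    });
  }
} catch (err) {
  console.log('Error reading Optimizely campaign state');
}
[/javascript]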

Create a hidden input field on all Marketo forms

Lastly, we need a way to get the running list into Marketo.

Marketo has out-of-the-box functionality to create a hidden input that ingests a cookie’s value. All you need to do is add this input to your forms, specify the cookie name (in our case, marketoCookie), and the rest happens by default.
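
If you’d rather populate the field programmatically (for example, to transform the value first), a minimal sketch using Marketo’s standard Forms 2.0 JavaScript API is below. The field name campaignHistory is just a placeholder for whatever you name your hidden field.

[javascript]// Sketch only: copy the cookie into a hidden Marketo form field when the form loads.
// "campaignHistory" is a hypothetical field name - substitute your own.
MktoForms2.whenReady(function (form) {
  var match = document.cookie.match(/(?:^|;\s*)marketoCookie=([^;]*)/);
  var campaigns = match ? decodeURIComponent(match[1]) : '';

  form.vals({ campaignHistory: campaigns });
});
[/javascript]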

You’ll now have a historical list of campaigns/tests an individual user saw whenever they submit a Marketo form.

Easy peasy.

March 22nd, 2020 | Full-Funnel Optimization

Real-Time Personalization with Marketo and FunnelEnvy

For many organizations, Marketo serves as the real-time customer database for marketing. Unfortunately, most organizations today are not leveraging this rich intelligence to drive personalized user experiences across their site, which is one of the most valuable opportunities for this data.

The good news is that when it comes to personalizing with Marketo, you don’t have to be limited to just personalizing your emails and Marketo forms. You can actually use all that valuable customer centric Marketo data to drive your website personalization programs.

Why might you want to do this? Instead of showing everyone the same lead capture experience, you could show prospects who have already filled it out more product content. Or show existing customers opportunities to expand. Maybe even segment your experiences and customer journey by company size or industry.

With FunnelEnvy’s Marketo integration you can use your rich Marketo data in real-time to deliver personalized experiences across your site.

Setting up the Marketo Integration in FunnelEnvy

Within the FunnelEnvy user interface you can activate and configure the Marketo integration. FunnelEnvy periodically fetches Smart Lists from Marketo and automatically keeps them in sync. Configuring the integration also lets you set up offsite goals triggered by Marketo webhooks, such as Marketing Qualified Leads (MQLs).

The Data Filtering interface lets you choose which fields to import, and exclude PII or other data based on your compliance policies.

Typically these four steps are done by the Marketing Ops team that manages the Marketo instance:

  1. Activate the Marketo data source.
  2. Authorize FunnelEnvy to access Marketo.
  3. (Optional) Configure data filtering.
  4. Select Smart Lists to import.

Step 1: Find and activate Marketo under the Integrations settings. You should see it as an activated Data Source.

         

Step 2: Authorize FunnelEnvy to access the Marketo REST API with API keys.

Step 3: Optionally configure data filtering rules. When fetching lists FunnelEnvy will only import lead attributes that are selected.

Step 4: Select Smart Lists for Import. Assuming your API credentials in Step 2 were correct, you should see a list of Smart Lists available for import. Note that it may take up to an hour for this list to reflect any recently added Smart Lists.

Once you’ve configured the Smart Lists for import you’re done! FunnelEnvy will refresh the lists every few hours, retrieving leads and refreshing the local copy of Marketo data, which is then available immediately for audiences, predictive campaigns and offline Marketo-triggered goals.

More details on setting up the integration can be found in our knowledge base article.

Using Marketo for Site Personalization in FunnelEnvy 

Once you’ve configured the Marketo Data source you open up a number of valuable personalization use cases. Below are three ways you can use FunnelEnvy and Marketo together to better target, personalize, and measure your personalization initiatives.

Target Experiences and Offers using Lead Attributes and  Smart Lists

Stop serving a static, one-size-fits-all website experience to all your visitors. Want to personalize your site experience only for prospects, or for specific accounts, or for members of specific campaigns?

With FunnelEnvy you can create rich audiences built off Marketo data, and use them as part of more advanced audience segments that combine Marketo data with firmographics and/or real-time user behavior.

In the condition builder interface you have access to all of the Marketo lead fields that were imported, and can define logical conditions based on them.

These conditions can also be combined with other data sources. In the audience screenshot below we’re combining a Marketo condition with a user’s behavior (but this could also be Demandbase, Clearbit or any of the sources we support). 

And just like any of the FunnelEnvy audiences, these can be used for targeting within predictive campaigns or A/B Tests:

This flexibility allows you to set up a dynamic “always on” personalization strategy that targets the right user segments in real-time based on that visitor’s stage and their relationship with you.

Personalize Experiences at a 1:1 Level with Marketo Data

While targeting is a powerful first step in executing your personalization strategy, the more powerful opportunity is to use all that rich user data to predict the best experience to serve each visitor. 

Choosing in real-time which experience to serve each user based on their full user profile truly allows for 1:1 marketing. That is where the personalization magic really happens.

FunnelEnvy uses machine learning to predict which experience will likely convert best based on all the data we see for that user, including their Marketo data and based on the history of how similar users converted over time.

And unlike A/B tests, where a specific experience is randomly assigned, or rules-based personalization, where you fix a specific experience to an audience, FunnelEnvy allows you to take advantage of all the data you have on that user and serve the experience most likely to convert for that user.

This allows you to avoid the manual analytics effort of trying to identify and capitalize on all the possible experience and segment combinations that perform best. As a marketer you can stay focused on the message and offer and allow the algorithms to optimize the segment/experience matches.

As the report below shows, we are scoring/weighing the effectiveness of every attribute we see for every user by experience.

Here, Marketo audience data along with all the other behavioral and firmographics data is used to predict the best possible outcome for each and every user and experience combination.

This allows us to use all the data to our advantage and serve the right experience that will most likely result in revenue. 

The best part is that there’s no additional setup required here. Once we have the Marketo data within our profiles we’ll use it as long as the decision mode on your campaign is set to “Predictive”.

Measure and Attribute Personalization Campaigns by Revenue (not Form Fills)

With personalization, one of the bigger challenges is being able to measure the program’s contribution to revenue and business outcomes. 

It can be done, but often requires integrating data sets or pulling reports from multiple systems and generating manual reports after the fact.

With FunnelEnvy, once you set up your important online, MQL, and other revenue goals, you can start tracking and attributing success to each personalized experience. Below is an example where we created an MQL goal based on a Marketo List and assigned a specific MQL value to it.

To set this up, ensure that the Marketo data source is activated and configured, then create a new individual goal. Under “API Triggering” you should see an option for Marketo. Once selected, you’ll get the URL that your Marketo instance will hit via a webhook to trigger the goal conversion. More details on setting up these webhooks are available in our knowledge base article.

Once that’s done, the Marketo goal shows up in real time in our campaign reporting dashboards.

It now becomes much easier to tell the story of how specific tests or personalized experiences are driving down funnel goals like MQLs, SQLs, opportunities, and deals won in addition to top level goals like trial signups, demo requests, or engagement.

This makes it much easier to attribute the positive impact personalization has on the organization’s revenue outcomes. Now instead of talking about form completions, you can speak the language of sales: revenue.

Getting Started

As you can see, integrating Marketo into your personalization program is very straightforward and can unlock some very valuable use cases and capabilities. The best part of this approach is that there is no custom development or IT involvement needed to get it up and running. You can set up the integration and be live with your first campaign on the same day.

If you’re not yet using FunnelEnvy but are interested in personalizing your website to Marketo Leads and Contacts we’d love to hear from you! You can contact us here: https://www.funnelenvy.com/contact/

The Importance of Context with Marketing Experiments

By now most marketers are familiar with the process of experimentation: identify a hypothesis, design a test that splits the population across one or more variants, and select a winning variation based on a success metric. This “winner” has a heavy responsibility – we’re assuming that it confers the improvement in revenue and conversion that we measured during the experiment.

The experiments that you run have to result in better decisions, and ultimately ROI. Further down we’ll look at a situation where an external validity threat in the form of a separate campaign would have invalidated the results of a traditional A/B test. In addition, I’ll show how we were able to adjust to and even exploit this external factor using a predictive optimization approach, which resulted in a Customer Lifetime Value (LTV) increase of almost 70%.

(more…)

Why B2B Marketers Should Stop A/B Testing

B2B marketers face three main challenges with website experimentation as it is currently practiced:

  1.    It does not optimize well for the KPIs that matter. – Experimentation does not easily accommodate down-funnel outcomes (revenue pipeline, LTV) or the complexity of B2B traffic and the customer journey.
  2.    It is resource-intensive to do right. – Ensuring that you are generating long-term and meaningful business impact from experimentation requires more than just the ability to build and start tests.
  3.    It takes a long time to get results. – Traffic limitations, achieving statistical significance, and a linear testing process make getting results from experimentation a long process.

 I.  KPIs That Matter

The most important outcome to optimize for is revenue.  Ideally, that is the goal we are evaluating experiments against.

In practice, many B2B demand generation marketers are not using revenue as their primary KPI (because it is shared with the sales team), so it is often qualified leads, pipeline opportunities or marketing influenced revenue instead.  In a SaaS business it should be recurring revenue (LTV).

If you cannot measure it, then you cannot optimize it.  Most testing tools were built for B2C and have real problems measuring anything that happens after a lead is created and further down the funnel, off-website or over a longer period of time.

Many companies spend a great deal of resources on optimizing onsite conversions but make too many assumptions about what happens down funnel.  Just because you generate 20% more website form fills does not mean that you are going to see 20% more deals, revenue or LTV.

You can get visibility into down funnel impact through attribution, but in my experience, it tends to be cumbersome and the analysis is done post-hoc (once the experiment is completed), as opposed to being integrated into the testing process.

If you cannot optimize for the KPIs that matter, the effort that the team puts into setting up and managing tests will likely not yield your B2B company true ROI.

II.  Achieving Long-term Impact from Experimentation is Hard and Resource-intensive

At a minimum, to be able to simply launch and interpret basic experiments, a testing team should have skills in UX, front-end development and analytics – and as it turns out, that is not even enough.

Testing platforms have greatly increased access for anyone to start experiments.  However, what most people do not realize is that the majority of ‘winning’ experiments are effectively worthless (80% per Qubit Research) and have no sustainable business impact. The minority that do make an impact tend to be relatively small in magnitude.

It is not uncommon for marketers to string together a series of “winning” experiments (positive, statistically significant change reported by the testing tool) and yet see no long-term impact to the overall conversion rate.  This can happen through testing errors or by simply changing business and traffic conditions.

As a result, companies with mature optimization programs will typically also need to invest heavily in statisticians and data scientists to validate and assess the long-term impact of test results.

Rules-based personalization requires even more resources to manage experimentation across multiple segments.  It is quite tedious for marketers to set up and manage audience definitions and ensure they stay relevant as data sources and traffic conditions change.

We have worked with large B2C sites with over 50 members on their optimization team.  In a high volume transactional site with homogeneous traffic, the investment can be justified.  For the B2B CMO, that is a much harder pill to swallow.

III. Experimentation Takes a Long Time

In addition to being resource intensive, getting B2B results (aka revenue) from website testing takes a long time.

In general, B2B websites have less traffic than their B2C counterparts.  Traffic does have a significant impact on the speed of your testing, however, for our purposes that is not something I am going to dwell on, as it is relatively well travelled ground.

Of course, you do things to increase traffic, but many of us sell B2B products in specific niches that are not going to have the broad reach of a consumer ecommerce site.

What is more interesting is why we think traffic is important and the impact that has on the time to get results from testing.

You can wait weeks for significance on an onsite goal (which as I have discussed, has questionable value).  The effect that this has on our ability to generate long term outcomes, however, is profound. By nature, A/B testing is a sequential, iterative process, which should be followed deliberately to drive learnings and results.

The consequence of all of this is that you have to wait for tests to be complete and for results to be analyzed and discussed before you have substantive evidence to inform the next hypothesis.  Of course, tests are often run in parallel, but for any given set of hypotheses it is essentially a sequential effort that requires learnings be applied linearly.

 

This inherently linear nature of testing, combined with the time it takes to produce statistically significant results and the low experiment win rate, makes actually getting meaningful results from a B2B testing program a long process.

It is also worth noting that with audience-based personalization you will be dividing traffic across segments and experiments.  This means that you will have even less traffic for each individual experiment and it will take even longer for those experiments to reach significance.
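
To put rough numbers on that, here’s an illustrative back-of-the-envelope calculation using a standard two-proportion sample size approximation (95% confidence, 80% power). The baseline rate, detectable lift and traffic figures are assumptions for the example, not benchmarks.

[javascript]// Illustrative only: approximate visitors needed per variation to detect a lift.
function sampleSizePerVariation(baselineRate, relativeLift) {
  var p1 = baselineRate;
  var p2 = baselineRate * (1 + relativeLift);
  var zAlpha = 1.96; // 95% confidence, two-sided
  var zBeta = 0.84;  // 80% power
  var pBar = (p1 + p2) / 2;

  var numerator = Math.pow(
    zAlpha * Math.sqrt(2 * pBar * (1 - pBar)) +
    zBeta * Math.sqrt(p1 * (1 - p1) + p2 * (1 - p2)), 2);

  return Math.ceil(numerator / Math.pow(p2 - p1, 2));
}

// Assumed scenario: 3% baseline conversion, trying to detect a 20% relative lift,
// 2,000 eligible visitors per day split evenly across control and one variation.
var perVariation = sampleSizePerVariation(0.03, 0.2); // roughly 14,000 visitors
var days = Math.ceil((perVariation * 2) / 2000);      // roughly two weeks

// Split that same traffic across four audience segments and each segment's test
// takes roughly four times as long to reach the same sample size.
console.log(perVariation + ' visitors per variation, ~' + days + ' days to significance');
[/javascript]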

Conclusion

Achieving “10X” improvements in today’s very crowded B2B marketplace requires shifts in approach, process and technology.  Our ability to get closer to customers is going to depend on better experiences that you can deliver to them, which makes the rapid application of validated learnings that much more important.

“Experimentation 1.0” approaches gave human marketers the important ability to test, measure and learn, but the application of these in a B2B context raises some significant obstacles to realizing ROI.

As marketers, we should not settle for secondary indicators of success or delivering subpar experiences.  Optimizing for a download or form fill and just assuming that is going to translate into revenue is not enough anymore.  Understand your complex traffic and customer journey realities to design better experiences that maximize meaningful results, instead of trying to squeeze more out of testing button colors or hero images.

Finally, B2B marketers should no longer wait for B2C oriented experimentation platforms to adopt B2B feature sets.  “Experimentation 2.0” will overcome our human limitations to let us realize radically better results with much lower investment.

New platforms that prioritize relevant data and take advantage of machine learning at scale will alleviate the limitations of A/B testing and rules-based personalization.  Solutions built on these can augment and inform the marketers’ creative ability to engage and convert customers at a scale that manual experimentation cannot approach.

January 16th, 2020 | A/B Testing, Full-Funnel Optimization

How Not Picking an Experiment Winner Led to a 227% Increase in Revenue

By now most marketers are familiar with the process of experimentation: identify a hypothesis, design a test that splits the population across one or more variants, and select a winning variation based on a success metric. This “winner” has a heavy responsibility – we’re assuming that it confers the improvement in revenue and conversion that we measured during the experiment.

Is this always the case? As marketers we’re often told to look at the scientific community as the gold standard for rigorous experimental methodology. But it’s informative to take a look at where even medical testing has come up short.

For years women have been chronically underrepresented in medical trials, which has disproportionately favored males in the testing population. This selection bias in medical testing extends back to pre-clinical stages – the majority of drug development research has been done on male-only lab animals.

And this testing bias has had real-world consequences. A 2001 report found that 80% of the FDA-approved drugs pulled from the market for “unacceptable health risks” were found to be more harmful to women than to men. In 2013 the FDA announced revised dosing recommendations of the sleep aid Ambien, after finding that women were susceptible to risks resulting from slower metabolism of the medication.

This is a specific example of the problem of external validity in experimentation, which poses a risk even if a randomized experiment is conducted appropriately and it’s possible to infer cause-and-effect conclusions (internal validity). If the sampled population does not represent the broader population, then those conclusions are likely to be compromised.

Although they’re unlikely to pose a life-or-death scenario, external validity threats are very real risks to marketing experimentation. That triple-digit improvement you saw within the test likely won’t produce the expected return when implemented. Ensuring test validity can be a challenging and resource-intensive process; fortunately, it’s possible to decouple your return from many of these external threats entirely.

The experiments that you run have to result in better decisions, and ultimately ROI. Further down we’ll look at a situation where an external validity threat in the form of a separate campaign would have invalidated the results of a traditional A/B test. In addition, I’ll show how we were able to adjust to and even exploit this external factor using a predictive optimization approach, which resulted in a Customer Lifetime Value (LTV) increase of almost 70%.

(more…)

FunnelEnvy Personalization Without Rules

Dictionary.com defines personalization as:
“to design or tailor to meet an individual’s specifications, needs, or preferences:”

Yet, little about personalization as it’s currently touted in mar-tech is actually personalized to an individual’s needs and preferences. Instead of being served an optimized experience as individuals, customers are merely getting segmented into audiences based on predefined rules.

Putting visitors into segmented audience groups is not personalization.

While serving one of five eBooks based on industry is likely an improvement over a static site, it’s far from personalized.

At FunnelEnvy, we define this approach to optimization as rules-based personalization. Let’s compare this to FunnelEnvy’s personalization without rules.

FunnelEnvy enables experiences that are personalized without rules.

Instead of relying on pre-defined audiences, FunnelEnvy evaluates each visitor and interaction on a 1:1 basis to determine the optimized experience. Our AI-based platform continuously gathers data and self-improves to ensure every touch is optimized for revenue.

The optimized experience that’s served is often completely different from what a rules-based approach, devoid of any relevant customer context, would serve. The reality is that while marketers can do their best to match an audience to an experience, an AI approach that’s not limited by audiences will almost always outperform a human.

An example to illustrate the difference between rules-based personalization and FunnelEnvy.

Let’s say you own a Fin-tech company that currently has two eBooks to offer potential customers:

A: ‘10 tips for reducing costs’
B: ‘How to maximize your website for growth’

Now let’s say there are three people that visit your website.

Steve: Works at a 10-person chatbot startup
John: Works at a 2,000-person regional clothing company
Tom: Works at a 10,000-person multinational shipping company

How Rules-Based Personalization handles the visitors:

With a rules-based approach, you decide to create two predefined audiences. You think that companies under 1,000 employees should see one experience, while those above 1,000 should see another.
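
In code, that rules-based decision boils down to a single threshold check – a minimal sketch, where employeeCount and the 1,000-employee cutoff are simply the predefined rule described above:

[javascript]// A minimal sketch of the rules-based approach: one fixed rule, applied to everyone.
function chooseEbook(visitor) {
  if (visitor.employeeCount < 1000) {
    return '10 tips for reducing costs';
  }
  return 'How to maximize your website for growth';
}

chooseEbook({ name: 'Steve', employeeCount: 10 });    // '10 tips for reducing costs'
chooseEbook({ name: 'John',  employeeCount: 2000 });  // 'How to maximize your website for growth'
chooseEbook({ name: 'Tom',   employeeCount: 10000 }); // 'How to maximize your website for growth'
[/javascript]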

With the two segments defined:
Steve would see the eBook “10 tips for reducing costs”
John and Tom would see “How to maximize your website for growth”

After running this experiment you find:
Steve – Bought your product.
Tom – Bought your product.
John – Did not buy your product.

How FunnelEnvy Personalization handles the visitors:

FunnelEnvy analyzes many traits about the three visitors, including:

  • Historical data from companies similar to Tom’s, John’s, and Steve’s
  • Salesforce data on how other individuals in a similar funnel position behaved
  • The behavior of the three individuals on your website
  • Much more…

After analyzing all this available data in real time, FunnelEnvy decides that:
Steve would see the eBook “10 tips for reducing costs”
John and Tom would see “How to maximize your website for growth”

There is no predefined audience based on company size; the AI simply picks the experience that will most likely lead to revenue.

After running this experiment you find:
Steve – Bought your product.
Tom – Bought your product.
John – Bought your product.

As we can see from this example, having a fluid system that can bring in data and evaluate optimized experiences on a 1:1 basis outperforms a predefined and static audience approach. If you are serious about optimizing your website for revenue, consider the advantages of FunnelEnvy’s AI-based approach vs a rules-based one.

April 20th, 2018 | Full-Funnel Optimization

Localytics Optimization

A/B testing suffers from a “winner take all” problem whereby it optimizes a single experience across an entire population and does not take context into account.

A better solution for optimizing experiences across the wide variety of visitor contexts is predictive optimization. It works by bringing together all of the data about the visitor into a Unified Customer Profile (UCP), continuously optimizing with each impression, and using a predictive model to identify the experience that’s most likely to result in onsite and down-funnel outcomes on a 1:1 basis.

Homepage Personalization

The homepage is often one of the most highly trafficked pages, usually with a high volume of direct and organic (branded) search traffic. As a result, it generally has pretty generic top of the funnel content and often serves as a “traffic cop” – funneling visitors to the sections of the site with more specific content.


Hero with no context about customer intent

What if, instead of the generic headline, copy and CTA, we could serve something that better reflected the visitor’s intent?

Intent: Learn about Localytics for iOS

Intent: Try Discover Platform

Intent: Interested in Localytics CRM

 

Content Personalization

 

Localytics also has an extensive resource collection of case studies, ebooks, whitepapers, and webinars. The featured content in the slider is prime real estate to showcase personalized content.

Intent: Looking for Social Proof

 

Intent: Decision maker looking for media industry stats

 

The important thing to remember here is that we are matching the content to the customer’s context.

Down-Funnel Personalization

After a visitor is identified as being in the target market and has shown some commercial intent, marketers must continue personalizing. In B2B that often means they’ve returned to the site and engaged with more commercially oriented content, and likely filled out a gated content form. It could also mean that multiple visitors have come to the site from the same account.

We want to continue providing these visitors with relevant content that engages them, but also give them on-ramps to take the next step.

In Localytics’s case, this “next best action” is either starting the free trial or talking to sales. Since we may also have information about the visitor’s account and role we can incorporate that into the experience and call to action. For example, we may want Marketing Decision Makers to Talk to Sales.

Intent: Ready to Engage with Sales

 

If the visitor is an engaged decision maker we can present them with more specific content and a CTA that takes them directly to a Contact Sales form.

 

Intent: Named account which has expressed interest in joining partner program

 

Localytics has an opportunity to showcase partners based on what they know about the account and the specific opportunity being discussed.

Strong Commercial Intent

Visitors here have shown strong commercial intent. This goes beyond filling out a form for a piece of content; they’ve demonstrated an interest in engaging in the sales process. Traditionally this is where marketing would have taken a “hands off” approach (it’s a sales problem now!), but that’s no longer sufficient.

For a product like Localytics, the prospect will likely be asking certain questions depending on their role:

  • What support options are available relative to what I need?
  • What have effective implementations at similar companies looked like?
  • How much and what kind of training will our developers require?
  • What professional services or partner resources are available for implementation?
March 14th, 2018 | Full-Funnel Optimization

B2B Marketers Should Stop A/B Testing in 2018

 

Several years ago I was hired to help fix some serious website conversion issues for a B2B SaaS client.

A year earlier the client had redesigned their website pricing page and quickly noticed a significant drop in conversions.

They estimated the redesign cost them approximately $100,000 per month.

The pricing page itself was quite standard; there were several graduating plan tiers with some self-service options and a sales contact form for the enterprise plans.

Pricing Page

 

Before coming to us, they accelerated A/B testing on the page.  After investing a considerable amount in tools and new team members they had several experiments that showed an increase in the tested goals.  Unfortunately, they were not able to find any evidence that these experiments had generated long-term pipeline or revenue impact.

I advised them of our conversion optimization approach – the types of A/B tests that we could run and plan to recapture their lost conversions.  The marketing team asked lots of questions, but the VP of Marketing stayed silent.  He finally turned to me and said:

“I’m on the hook to increase enterprise pipeline and self-sign up customers by 30% this quarter.  What I really want, is a website that speaks to each customer and only shows the unique features, benefits and plans that’s best for them.  How do we do that?”

He wanted an order of magnitude better solution.  He recognized that his company had to get closer to their customers to win and he wanted a web experience that helped them get there.

I did not think they were ready for that.  I told him that we should start with better A/B testing because what he wanted to do would be very complicated, risky and expensive.

Though that felt like the right answer at the time, it is definitely not the right answer today.

Why Website Experimentation Isn’t Enough for B2B

At a time when the web is vital to almost all businesses, rigorous online experiments should be standard operating procedure.

The often-cited quote from the widely distributed Harvard Business Review article, The Surprising Power of Online Experiments, supports years of evidence from digital leaders suggesting that high-velocity testing is one of the keys to business growth.

The traditional process of website experimentation involves:

  1.    Gathering Evidence. – “Let’s look at the data to see why we’re losing conversions on this page.”
  2.    Forming Hypotheses. – “If we moved the plans higher up on the page we would see more conversions because visitors are not scrolling down.”
  3.    Building and Running Experiments. – “Let’s test a version with the plans higher up on the page.”
  4.    Evaluating Results to Inform the Hypothesis. – “Moving the plans up raised sign ups by 5% but didn’t increase enterprise leads.  What if we reworded the benefits of that plan?”

Every conversion optimization practitioner follows some flavor of this methodology.

Typically, within these experiments, traffic is randomly allocated to one or more variations, as well as the control experience.  Tests conclude when there is either a statistically significant change in an onsite conversion goal or the test is deemed inconclusive (which happens frequently).

If you have strong, evidence-based hypotheses and are able to experiment quickly, this will work well enough in some cases.  Over the years we have applied this approach over thousands of experiments and many clients to generate millions of dollars in return.

However, this is not enough for B2B.  Seeking statistically significant outcomes on onsite metrics often means that traditional website experimentation becomes a traffic-based exercise, not necessarily a value-based one.  While it may still be good enough for B2C sites (e.g. retail ecommerce, travel), where traffic and revenue are highly correlated, it falls apart in many B2B scenarios.

The Biggest Challenges with B2B Website Experimentation

B2B marketers face three main challenges with website experimentation as it is currently practiced:

  1.    It does not optimize well for the KPIs that matter. – Experimentation does not easily accommodate down-funnel outcomes (revenue pipeline, LTV) or the complexity of B2B traffic and the customer journey.
  2.    It is resource-intensive to do right. – Ensuring that you are generating long-term and meaningful business impact from experimentation requires more than just the ability to build and start tests.
  3.    It takes a long time to get results. – Traffic limitations, achieving statistical significance, and a linear testing process make getting results from experimentation a long process.

 

I.  KPIs That Matter

The most important outcome to optimize for is revenue.  Ideally, that is the goal we are evaluating experiments against.

In practice, many B2B demand generation marketers are not using revenue as their primary KPI (because it is shared with the sales team), so it is often qualified leads, pipeline opportunities or marketing influenced revenue instead.  In a SaaS business it should be recurring revenue (LTV).

If you cannot measure it, then you cannot optimize it.  Most testing tools were built for B2C and have real problems measuring anything that happens after a lead is created and further down the funnel, off-website or over a longer period of time.

Many companies spend a great deal of resources on optimizing onsite conversions but make too many assumptions about what happens down funnel.  Just because you generate 20% more website form fills does not mean that you are going to see 20% more deals, revenue or LTV.

You can get visibility into down funnel impact through attribution, but in my experience, it tends to be cumbersome and the analysis is done post-hoc (once the experiment is completed), as opposed to being integrated into the testing process.

If you cannot optimize for the KPIs that matter, the effort that the team puts into setting up and managing tests will likely not yield your B2B company true ROI.

Traffic Complexity and Visitor Context

Unlike most B2C, B2B websites have to contend with all sorts of different visitors across multiple dimensions and often with a long and varied customer journey.  This customer differentiation results in significantly different motivations, expectations and approaches.  Small business end-users might expect a free trial and low priced plan.  Enterprise customers often want security and support and expect to speak to sales.  Existing customers or free trial users want to know why they should upgrade or purchase a complementary product.

An added source of complexity (especially if you are targeting enterprise), is the need to market and deliver experiences to both accounts and individuals.  With over 6 decision makers involved in an enterprise deal, you must be able to speak to both the motivations of the persona/role as well as their account.

One of the easiest ways to come face-to-face with these challenges is to look at the common SaaS pricing page.

Despite my assertions several years ago, the benefits of A/B testing are going to be limited here.  You can change the names or colors of the plans or move them up the page, but ultimately, you are going to be stuck optimizing at the margin – testing hypotheses with low potential impact.

As the VP of Marketing wanted to do with us years ago, we would be better off showing the best plan, benefits and next steps to individual visitors based on their role, company and prior history.  That requires optimization based on visitor context, commonly known as website personalization.

Rules-based Website Personalization

The current standard for personalization is “rules-based” – marketers define fixed criteria (rules) for audiences and create targeted experiences for them.  B2B audiences are often account- or individual-based, such as target industries, accounts, existing customers or job functions.

Unfortunately, website personalization suffers from a lack of adoption and success in the B2B market.  67% of B2B marketers do not use website personalization technology, and only 21% of those that do are satisfied with results (vs 53% for B2C).

Looking at websites that have a major marketing automation platform and reasonably high traffic, you can see the discrepancy between those using commercial A/B testing vs Personalization:

The much higher percentage of sites that use A/B testing vs personalization, suggests that although the value of experimentation is relatively well understood, marketers have not been able to see the same value from personalization.

What accounts for this?  

Marketers who support experimentation subscribe to the idea of gathering evidence to establish causality between website experiences and business improvement.  Unfortunately, rules-based personalization makes the resource-investment and time to value challenges involved with doing this even harder.

II.  Achieving Long-term Impact from Experimentation is Hard and Resource-intensive

At a minimum, to be able to simply launch and interpret basic experiments, a testing team should have skills in UX, front-end development and analytics – and as it turns out, that is not even enough.

Testing platforms have greatly increased access for anyone to start experiments.  However, what most people do not realize is that the majority of ‘winning’ experiments are effectively worthless (80% per Qubit Research) and have no sustainable business impact. The minority that do make an impact tend to be relatively small in magnitude.

It is not uncommon for marketers to string together a series of “winning” experiments (positive, statistically significant change reported by the testing tool) and yet see no long-term impact to the overall conversion rate.  This can happen through testing errors or by simply changing business and traffic conditions.

As a result, companies with mature optimization programs will typically also need to invest heavily in statisticians and data scientists to validate and assess the long-term impact of test results.

Rules-based personalization requires even more resources to manage experimentation across multiple segments.  It is quite tedious for marketers to set up and manage audience definitions and ensure they stay relevant as data sources and traffic conditions change.

We have worked with large B2C sites with over 50 members on their optimization team.  In a high volume transactional site with homogeneous traffic, the investment can be justified.  For the B2B CMO, that is a much harder pill to swallow.

III. Experimentation Takes a Long Time

In addition to being resource intensive, getting B2B results (aka revenue) from website testing takes a long time.

In general, B2B websites have less traffic than their B2C counterparts.  Traffic does have a significant impact on the speed of your testing, however, for our purposes that is not something I am going to dwell on, as it is relatively well travelled ground.

Of course, you do things to increase traffic, but many of us sell B2B products in specific niches that are not going to have the broad reach of a consumer ecommerce site.

What is more interesting is why we think traffic is important and the impact that has on the time to get results from testing.

You can wait weeks for significance on an onsite goal (which as I have discussed, has questionable value).  The effect that this has on our ability to generate long term outcomes, however, is profound.  By nature, A/B testing is a sequential, iterative process, which should be followed deliberately to drive learnings and results.

The consequence of all of this is that you have to wait for tests to be complete and for results to be analyzed and discussed before you have substantive evidence to inform the next hypothesis.  Of course, tests are often run in parallel, but for any given set of hypotheses it is essentially a sequential effort that requires learnings be applied linearly.

 

This inherently linear nature of testing, combined with the time it takes to produce statistically significant results and the low experiment win rate, makes actually getting meaningful results from a B2B testing program a long process.

It is also worth noting that with audience-based personalization you will be dividing traffic across segments and experiments.  This means that you will have even less traffic for each individual experiment and it will take even longer for those experiments to reach significance.

Is There a Better Way to Improve B2B Website Conversions?

The short answer?  Yes.

At FunnelEnvy, we believe that with context about the visitor and an understanding of prior outcomes, we can make better decisions than with the randomized testing that websites are using today.  We can use algorithms that are continuously learning and improving with every decision to achieve better results with less manual effort from our clients.

Our “experimentation 2.0” solution leverages a real-time prediction model.  Predictive models use the past to predict future outcomes based on available signals.  If you have ever used predictive lead scoring or been on a travel site and seen “there is an 80% chance this fare will increase in the next 7 days,” then you have seen prediction models in action.

In this case, what we are predicting is the best website visitor experience, the one that will lead to an optimal outcome.  Rather than testing populations in aggregate, we are making experience predictions on a 1:1 basis based on all of the available context and historical outcomes.  Our variation scores take into account expected conversion value as well as conversion probability, and we continuously learn from actual outcomes to improve our next predictions.

Ultimately, the quality of these predictions is based on the quality of the signals that we provide the model and the outcomes that we are tracking.  By bringing together behavioral, 1st party and 3rd party data we are building a Unified Customer Profile (UCP) for each visitor and letting the algorithm determine which attributes are relevant signals.  To ensure that our predictive model is optimizing for the most important outcomes, we incorporate Full Funnel Goal Tracking for individual (MQL, SQL) and account (opportunities, revenue, LTV) outcomes.
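
To make the “expected conversion value as well as conversion probability” point concrete, here’s an illustrative sketch of expected-value scoring. The probabilities and dollar values are invented for the example; in practice the model predicts them per visitor from the Unified Customer Profile rather than hard-coding them.

[javascript]// Illustrative only: score each candidate experience by expected value, not raw conversion rate.
function scoreVariation(conversionProbability, conversionValue) {
  return conversionProbability * conversionValue; // expected value of serving this experience
}

// Hypothetical numbers for a single visitor
var variations = [
  { name: 'Start free trial', probability: 0.08, value: 500 },   // self-serve signup
  { name: 'Talk to sales',    probability: 0.02, value: 40000 }  // enterprise opportunity
];

var best = variations.reduce(function (a, b) {
  return scoreVariation(a.probability, a.value) >= scoreVariation(b.probability, b.value) ? a : b;
});

// "Talk to sales" wins on expected value (800 vs. 40) despite its lower conversion probability.
console.log('Serve: ' + best.name);
[/javascript]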

Example:  Box’s Homepage Experience

To see what a predictive optimization approach can do, let’s look at a hypothetical example:

Box.com has an above the fold Call to Action (CTA) that takes you to their pricing page.  This is a sensible approach when you do not have a lot of context about the visitor because from the pricing page, you can navigate to the right plan and option that is most relevant.

Of course, they are putting a lot of burden on the visitor to make a decision.  There are a total of 9 plans and 11 CTAs on that pricing page alone, and not every visitor is ready to select one – many still need to be educated on the solution.  We could almost certainly increase conversions if we made that above the fold experience more relevant to a visitor’s motivations.

SMB visitors might be ready to start the free trial once they have seen the demo, the enterprise infosec team might be interested in learning about Box’s security features first, customers who are not ready to speak to sales or sign up might benefit from the online demo, and decision makers at enterprise accounts and who are engaged might be ready to fill out the sales form.

Modifying the homepage sub-headline and CTA to accommodate these experiences could look something like the image below.  Note that they take you down completely different visitor journeys, something you would never do with a traditional A/B test.

If we had context about the visitor and historical data we could predict the highest probability experience that would lead to both onsite conversion as well as down funnel success.  The prediction would be made on a 1:1 basis as the model determines which attributes are relevant signals.

Finally, because we are automating the learning and prediction model, this would be no more difficult than adding variations to an A/B test, and far simpler and more precise than rules-based personalization.  The team would be relieved of the analytical heavy lifting, new variations could be added over time, and changing conditions would automatically be incorporated into the model.

Conclusion

Achieving “10X” improvements in today’s very crowded B2B marketplace requires shifts in approach, process and technology.  Our ability to get closer to customers is going to depend on better experiences that you can deliver to them, which makes the rapid application of validated learnings that much more important.

“Experimentation 1.0” approaches gave human marketers the important ability to test, measure and learn, but the application of these in a B2B context raises some significant obstacles to realizing ROI.

As marketers, we should not settle for secondary indicators of success or delivering subpar experiences.  Optimizing for a download or form fill and just assuming that is going to translate into revenue is not enough anymore.  Understand your complex traffic and customer journey realities to design better experiences that maximize meaningful results, instead of trying to squeeze more out of testing button colors or hero images.

Finally, B2B marketers should no longer wait for B2C oriented experimentation platforms to adopt B2B feature sets.  “Experimentation 2.0” will overcome our human limitations to let us realize radically better results with much lower investment.

New platforms that prioritize relevant data and take advantage of machine learning at scale will alleviate the limitations of A/B testing and rules-based personalization.  Solutions built on these can augment and inform the marketers’ creative ability to engage and convert customers at a scale that manual experimentation cannot approach.

This post originally appeared on LinkedIn.

January 19th, 2018 | Full-Funnel Optimization, A/B Testing

The B2B Marketers Guide to Thinking Beyond A/B Testing

Deliver more conversions with less effort using ‘The B2B Marketers Guide to Thinking Beyond A/B Testing’.

Download the guide and be inspired by optimization strategies that blow A/B testing away.

If you’re striving for experimentation excellence, there’s no better teacher than learning from the best of the best. Whether you are looking for ways to increase on-site conversions, marketing-attributable revenue, or testing velocity, The B2B Marketers Guide to Thinking Beyond A/B Testing will equip you to succeed.

Download this ebook to discover:

  • Why traditional A/B testing is not effective for B2B, and what you can do about it.
  • How to optimize for on-site and down funnel (pipeline & revenue) outcomes.
  • How to re-think your on-site optimization to drive more predictable revenue.

Get The Guide Below

Share a few contact details and we’ll send a download link to your inbox.



December 21st, 2017 | A/B Testing, Full-Funnel Optimization