The Importance of Context with Marketing Experiments

By now most marketers are familiar with the process of experimentation: identify a hypothesis, design a test that splits the population across one or more variants, and select a winning variation based on a success metric. This “winner” has a heavy responsibility – we’re assuming that it confers the improvement in revenue and conversion that we measured during the experiment.

The experiments that you run have to result in better decisions, and ultimately ROI. Further down we’ll look at a situation where an external validity threat in the form of a separate campaign would have invalidated the results of a traditional A/B test. In addition, I’ll show how we were able to adjust for and even exploit this external factor using a predictive optimization approach, which resulted in a Customer Lifetime Value (LTV) increase of almost 70%.

(more…)

Why B2B Marketers Should Stop A/B Testing

B2B marketers face three main challenges with website experimentation as it is currently practiced:

  1.    It does not effectively optimize the KPIs that matter. – Experimentation does not easily accommodate down-funnel outcomes (revenue pipeline, LTV) or the complexity of B2B traffic and customer journeys.
  2.    It is resource-intensive to do right. – Ensuring that you are generating long-term and meaningful business impact from experimentation requires more than just the ability to build and start tests.
  3.    It takes a long time to get results. – Traffic limitations, achieving statistical significance and a linear testing process make getting results from experimentation a long process.

 I.  KPIs That Matter

The most important outcome to optimize for is revenue.  Ideally, that is the goal we are evaluating experiments against.

In practice, many B2B demand generation marketers are not using revenue as their primary KPI (because it is shared with the sales team), so it is often qualified leads, pipeline opportunities or marketing influenced revenue instead.  In a SaaS business it should be recurring revenue (LTV).

If you cannot measure it, then you cannot optimize it.  Most testing tools were built for B2C and have real problems measuring anything that happens after a lead is created and further down the funnel, off-website or over a longer period of time.

Many companies spend a great deal of resources on optimizing onsite conversions but make too many assumptions about what happens down funnel.  Just because you generate 20% more website form fills does not mean that you are going to see 20% more deals, revenue or LTV.

You can get visibility into down funnel impact through attribution, but in my experience, it tends to be cumbersome and the analysis is done post-hoc (once the experiment is completed), as opposed to being integrated into the testing process.

If you cannot optimize for the KPIs that matter, the effort that the team puts into setting up and managing tests will likely not yield your B2B company true ROI.

II.  Achieving Long-term Impact from Experimentation is Hard and Resource-intensive

At a minimum, to be able to simply launch and interpret basic experiments, a testing team should have skills in UX, front-end development and analytics – and as it turns out, that is not even enough.

Testing platforms have greatly increased access for anyone to start experiments.  However, what most people do not realize is that the majority of ‘winning’ experiments are effectively worthless (80% per Qubit Research) and have no sustainable business impact. The minority that do make an impact tend to be relatively small in magnitude.

It is not uncommon for marketers to string together a series of “winning” experiments (positive, statistically significant change reported by the testing tool) and yet see no long-term impact to the overall conversion rate.  This can happen through testing errors or by simply changing business and traffic conditions.

As a result, companies with mature optimization programs will typically also need to invest heavily in statisticians and data scientists to validate and assess the long-term impact of test results.

Rules-based personalization requires even more resources to manage experimentation across multiple segments.  It is quite tedious for marketers to set up and manage audience definitions and ensure they stay relevant as data sources and traffic conditions change.

We have worked with large B2C sites with over 50 members on their optimization team.  In a high volume transactional site with homogeneous traffic, the investment can be justified.  For the B2B CMO, that is a much harder pill to swallow.

III. Experimentation Takes a Long Time

In addition to being resource intensive, getting B2B results (aka revenue) from website testing takes a long time.

In general, B2B websites have less traffic than their B2C counterparts.  Traffic does have a significant impact on the speed of your testing; however, for our purposes that is not something I am going to dwell on, as it is relatively well-travelled ground.

Of course, you can do things to increase traffic, but many of us sell B2B products in specific niches that are not going to have the broad reach of a consumer ecommerce site.

What is more interesting is why we think traffic is important, and the impact that has on the time it takes to get results from testing.

You can wait weeks for significance on an onsite goal (which as I have discussed, has questionable value).  The effect that this has on our ability to generate long term outcomes, however, is profound. By nature, A/B testing is a sequential, iterative process, which should be followed deliberately to drive learnings and results.

The consequence of all of this is that you have to wait for tests to be complete and for results to be analyzed and discussed before you have substantive evidence to inform the next hypothesis.  Of course, tests are often run in parallel, but for any given set of hypotheses it is essentially a sequential effort that requires learnings be applied linearly.

 

This inherently linear nature of testing, combined with the time it takes to produce statistically significant results and the low experiment win rate, makes actually getting meaningful results from a B2B testing program a long process.

It is also worth noting that with audience-based personalization you will be dividing traffic across segments and experiments.  This means that you will have even less traffic for each individual experiment and it will take even longer for those experiments to reach significance.
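The traffic arithmetic here is worth making concrete. Below is a minimal Python sketch (with assumed numbers) of the standard two-proportion sample-size calculation that determines how many visitors each variant needs before a test can conclude; splitting that traffic across audience segments multiplies the calendar time accordingly.

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_per_variant(p_base, rel_lift, alpha=0.05, power=0.8):
    """Visitors needed per variant to detect a relative lift over a
    baseline conversion rate with a two-sided two-proportion z-test."""
    p_var = p_base * (1 + rel_lift)
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # significance threshold
    z_b = NormalDist().inv_cdf(power)           # power threshold
    p_bar = (p_base + p_var) / 2
    n = ((z_a * sqrt(2 * p_bar * (1 - p_bar))
          + z_b * sqrt(p_base * (1 - p_base) + p_var * (1 - p_var))) ** 2
         / (p_base - p_var) ** 2)
    return ceil(n)

# Assumed numbers: 3% baseline conversion, hoping to detect a 20% lift
n = sample_size_per_variant(0.03, 0.20)  # roughly 14,000 visitors per variant
```

At a 3% baseline, detecting a 20% relative lift takes on the order of 14,000 visitors per variant; divide that traffic across four audience segments and the same test takes four times as long.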

Conclusion

Achieving “10X” improvements in today’s very crowded B2B marketplace requires shifts in approach, process and technology.  Our ability to get closer to customers is going to depend on the better experiences we can deliver to them, which makes the rapid application of validated learnings that much more important.

“Experimentation 1.0” approaches gave human marketers the important ability to test, measure and learn, but the application of these in a B2B context raises some significant obstacles to realizing ROI.

As marketers, we should not settle for secondary indicators of success or delivering subpar experiences.  Optimizing for a download or form fill and just assuming that is going to translate into revenue is not enough anymore.  Understand your complex traffic and customer journey realities to design better experiences that maximize meaningful results, instead of trying to squeeze more out of testing button colors or hero images.

Finally, B2B marketers should no longer wait for B2C oriented experimentation platforms to adopt B2B feature sets.  “Experimentation 2.0” will overcome our human limitations to let us realize radically better results with much lower investment.

New platforms that prioritize relevant data and take advantage of machine learning at scale will alleviate the limitations of A/B testing and rules-based personalization.  Solutions built on these can augment and inform the marketers’ creative ability to engage and convert customers at a scale that manual experimentation cannot approach.

January 16th, 2020 | A/B Testing, Full-Funnel Optimization

How Not Picking an Experiment Winner Led to a 227% Increase in Revenue

By now most marketers are familiar with the process of experimentation: identify a hypothesis, design a test that splits the population across one or more variants, and select a winning variation based on a success metric. This “winner” has a heavy responsibility – we’re assuming that it confers the improvement in revenue and conversion that we measured during the experiment.

Is this always the case? As marketers we’re often told to look at the scientific community as the gold standard for rigorous experimental methodology. But it’s informative to take a look at where even medical testing has come up short.

For years women have been chronically underrepresented in medical trials, which disproportionately favors males in the testing population. This selection bias in medical testing extends back to pre-clinical stages – the majority of drug development research being done on male-only lab animals.

And this testing bias has had real-world consequences. A 2001 report found that 80% of the FDA-approved drugs pulled from the market for “unacceptable health risks” were found to be more harmful to women than to men. In 2013 the FDA announced revised dosing recommendations of the sleep aid Ambien, after finding that women were susceptible to risks resulting from slower metabolism of the medication.

This is a specific example of the problem of external validity in experimentation which poses a risk even if a randomized experiment is conducted appropriately and it’s possible to infer cause and effect conclusions (internal validity.) If the sampled population does not represent the broader population, then those conclusions are likely to be compromised.

Although they’re unlikely to pose a life-or-death scenario, external validity threats are very real risks to marketing experimentation. That triple digit improvement you saw within the test likely won’t produce the expected return when implemented. Ensuring test validity can be a challenging and resource intensive process, fortunately however it’s possible to decouple your return from many of these external threats entirely.

The experiments that you run have to result in better decisions, and ultimately ROI. Further down we’ll look at a situation where an external validity threat in the form of a separate campaign would have invalidated the results of a traditional A/B test. In addition, I’ll show how we were able to adjust for and even exploit this external factor using a predictive optimization approach, which resulted in a Customer Lifetime Value (LTV) increase of almost 70%.

(more…)

FunnelEnvy Personalization Without Rules

Dictionary.com defines personalization as “to design or tailor to meet an individual’s specifications, needs, or preferences.”

Yet little about personalization as it’s currently touted in mar-tech is actually tailored to an individual’s needs and preferences. Instead of being served an optimized experience as individuals, customers are merely segmented into audiences based on predefined rules.

Putting visitors into segmented audience groups is not personalization.

While serving one of five eBooks based on industry is likely an improvement over a static site, it’s far from personalized.

At FunnelEnvy, we define this approach to optimization as rules-based personalization. Let’s compare this to FunnelEnvy’s personalization without rules.

FunnelEnvy enables experiences that are personalized without rules.

Instead of having pre-defined audiences, FunnelEnvy evaluates each visitor and interaction on a 1:1 basis to determine the optimal experience. Our AI-based platform is continuously gathering data and self-improving to ensure every touch is optimized for revenue.
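Mechanically, “continuously gathering data and self-improving” can be pictured with a toy epsilon-greedy loop – a deliberately simplified sketch, not FunnelEnvy’s actual algorithm: serve the best-performing experience most of the time, keep exploring the rest, and fold every observed outcome back into the estimates.

```python
import random

class ContinuousOptimizer:
    """Toy epsilon-greedy selector: exploit the best-known experience,
    explore occasionally, and update estimates from every outcome."""

    def __init__(self, experiences, epsilon=0.1):
        self.epsilon = epsilon
        self.stats = {e: {"shows": 0, "revenue": 0.0} for e in experiences}

    def choose(self):
        if random.random() < self.epsilon:
            return random.choice(list(self.stats))      # explore
        return max(self.stats, key=self._mean_revenue)  # exploit

    def record(self, experience, revenue):
        s = self.stats[experience]
        s["shows"] += 1
        s["revenue"] += revenue

    def _mean_revenue(self, e):
        s = self.stats[e]
        # Untried experiences score infinity so each gets tried at least once
        return s["revenue"] / s["shows"] if s["shows"] else float("inf")
```

Unlike a fixed A/B split, nothing here ever “concludes”: each impression and outcome immediately shifts which experience is most likely to be served next.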

The optimized experience that’s served is often completely different from what a rules-based approach, devoid of any relevant customer context, would serve. The reality is that while marketers can do their best to match an audience to an experience, an AI approach that’s not limited by audiences will almost always outperform a human.

An example to illustrate the difference between rules-based personalization and FunnelEnvy:

Let’s say you own a fintech company that currently has two eBooks to offer potential customers:

A: ‘10 tips for reducing costs’
B: ‘How to maximize your website for growth’

Now let’s say there are three people that visit your website.

Steve: Works at a 10-person chatbot startup
John: Works at a 2,000-person regional clothing company
Tom: Works at a 10,000-person multinational shipping company

How Rules-Based Personalization handles the visitors:

With a rules-based approach, you decide to create two predefined audiences. You think that companies under 1,000 employees should see one experience, while those above 1,000 should see another.

With the two segments defined:
Steve would see the eBook “10 tips for reducing costs”
John and Tom would see “How to maximize your website for growth”
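In code, the two audiences above reduce to a single hard-coded threshold – rules-based personalization is literally an if-statement (a minimal sketch using the example’s assumed 1,000-employee cutoff):

```python
def rules_based_ebook(employee_count):
    """Rules-based personalization: one fixed, human-defined threshold
    decides which eBook every visitor in the segment sees."""
    if employee_count < 1000:
        return "10 tips for reducing costs"
    return "How to maximize your website for growth"

rules_based_ebook(10)     # Steve -> "10 tips for reducing costs"
rules_based_ebook(2000)   # John  -> "How to maximize your website for growth"
rules_based_ebook(10000)  # Tom   -> "How to maximize your website for growth"
```

Every visitor on the same side of the threshold gets the identical experience, regardless of any other context about them.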

After running this experiment you find:
Steve – Bought your product.
Tom – Bought your product.
John – Did not buy your product.

How FunnelEnvy Personalization handles the visitors:

FunnelEnvy analyzes many traits about the three visitors, including:

  • Historical data from companies similar to Tom’s, John’s, and Steve’s
  • Salesforce data on how other individuals in a similar funnel position behaved
  • The behavior of the three individuals on your website
  • Much more…

After analyzing all this available data in real time, FunnelEnvy decides that:
Steve would see the eBook “10 tips for reducing costs”
John and Tom would see “How to maximize your website for growth”

There is no predefined audience based on company size; the AI simply picks the experience that is most likely to lead to revenue.

After running this experiment you find:
Steve – Bought your product.
Tom – Bought your product.
John – Bought your product.

As we can see from this example, a fluid system that can bring in data and evaluate optimized experiences on a 1:1 basis outperforms a predefined, static audience approach. If you are serious about optimizing your website for revenue, consider the advantages of FunnelEnvy’s AI-based approach over a rules-based one.
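The contrast is visible in code. Instead of a fixed rule, a 1:1 approach scores every candidate experience for each visitor and serves the highest-scoring one. The toy model and its hand-written weights below are stand-ins for a model trained on historical funnel outcomes:

```python
def pick_experience(visitor, experiences, model):
    """Score each candidate experience for this visitor and serve the
    one with the highest predicted revenue."""
    return max(experiences, key=lambda e: model(visitor, e))

def toy_model(visitor, experience):
    # Hypothetical hand-written scores; a real system would learn these
    # from behavioral, CRM and firmographic data.
    weights = {
        ("startup", "10 tips for reducing costs"): 0.9,
        ("startup", "How to maximize your website for growth"): 0.4,
        ("enterprise", "10 tips for reducing costs"): 0.3,
        ("enterprise", "How to maximize your website for growth"): 0.8,
    }
    return weights.get((visitor["segment"], experience), 0.0)

ebooks = ["10 tips for reducing costs", "How to maximize your website for growth"]
pick_experience({"segment": "startup"}, ebooks, toy_model)
# -> "10 tips for reducing costs"
```

No audience is ever defined; change the model’s data and the decision for the same visitor can change, which is what a fixed rule cannot do.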

April 20th, 2018 | Full-Funnel Optimization

Localytics Optimization

A/B testing suffers from a “winner take all” problem whereby it optimizes a single experience across an entire population and does not take context into account.

A better solution for optimizing experiences across the wide variety of visitor contexts is predictive optimization. It works by bringing together all of the data about the visitor into a Unified Customer Profile (UCP), continuously optimizing with each impression, and using a predictive model to identify the experience that’s most likely to result in onsite and down-funnel outcomes on a 1:1 basis.
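A minimal sketch of the UCP idea in Python – the merge precedence here (behavioral data over first-party CRM data over third-party enrichment) is an assumption made for illustration:

```python
def unified_customer_profile(behavioral, first_party, third_party):
    """Collapse the available data sources into one profile dict;
    later-applied sources win on conflicting keys."""
    profile = {}
    for source in (third_party, first_party, behavioral):
        profile.update(source)
    return profile

ucp = unified_customer_profile(
    behavioral={"pages_viewed": 7, "visited_pricing": True},
    first_party={"industry": "Media", "stage": "MQL"},
    third_party={"industry": "Publishing", "employees": 5000},
)
# The CRM's industry value ("Media") overrides the enrichment vendor's guess
```

The predictive model then sees a single consolidated view of the visitor and can weigh all of these attributes as candidate signals.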

Homepage Personalization

The homepage is often one of the most highly trafficked pages, usually with a high volume of direct and organic (branded) search traffic. As a result, it generally has pretty generic top of the funnel content and often serves as a “traffic cop” – funneling visitors to the sections of the site with more specific content.


Hero with no context about customer intent

What if, instead of the generic headline, copy and CTA, we could show something that better reflects the visitor’s intent?

Intent: Learn about Localytics for iOS

Intent: Try Discover Platform

Intent: Interested in Localytics CRM

 

Content Personalization

 

Localytics also has an extensive resource collection of case studies, ebooks, whitepapers, and webinars. The featured content in the slider is prime real estate to showcase personalized content.

Intent: Looking for Social Proof

 

Intent: Decision Maker Looking for Media Industry Stats

 

The important part to remember here is we are matching the content to the customer context.

Down-Funnel Personalization

After a visitor is identified as in the target market and has shown some commercial intent, marketers must continue personalizing. In B2B that often means that they’ve returned to the site and engaged with more commercially oriented content, and likely filled out a gated content form. That could also mean that multiple visitors have come to the site from the same account.

We want to continue to provide these visitors with relevant content that continues to engage them, but also give them on-ramps to take the next step.

In Localytics’s case, this “next best action” is either starting the free trial or talking to sales. Since we may also have information about the visitor’s account and role we can incorporate that into the experience and call to action. For example, we may want Marketing Decision Makers to Talk to Sales.

Intent: Ready to Engage with Sales

 

If the visitor is an engaged decision maker we can present them with more specific content and a CTA that takes them directly to a Contact Sales form.

 

Intent: Named account which has expressed interest in joining partner program

 

Localytics has an opportunity to showcase partners based on what they know about the account and the specific opportunity being discussed.

Strong Commercial Intent

Visitors here have shown strong commercial intent. This goes beyond filling out a form for a piece of content; they’ve demonstrated an interest in engaging in the sales process. Traditionally, this is where marketing would have taken a “hands-off” approach (it’s a sales problem now!), but that’s no longer sufficient.

For a product like Localytics, the prospect will likely be asking certain questions depending on their role:

  • What support options are available relative to what I need?
  • What have effective implementations at similar companies looked like?
  • How much and what kind of training will our developers require?
  • What professional services or partner resources are available for implementation?
March 14th, 2018 | Full-Funnel Optimization

B2B Marketers Should Stop A/B Testing in 2018

 

Several years ago I was hired to help fix some serious website conversion issues for a B2B SaaS client.

A year earlier the client had redesigned their website pricing page and quickly noticed a significant drop in conversions.

They estimated the redesign cost them approximately $100,000 per month.

The pricing page itself was quite standard; there were several graduating plan tiers with some self-service options and a sales contact form for the enterprise plans.

Pricing Page


 

Before coming to us, they accelerated A/B testing on the page.  After investing a considerable amount in tools and new team members they had several experiments that showed an increase in the tested goals.  Unfortunately, they were not able to find any evidence that these experiments had generated long-term pipeline or revenue impact.

I advised them of our conversion optimization approach – the types of A/B tests that we could run and plan to recapture their lost conversions.  The marketing team asked lots of questions, but the VP of Marketing stayed silent.  He finally turned to me and said:

“I’m on the hook to increase enterprise pipeline and self-sign-up customers by 30% this quarter.  What I really want is a website that speaks to each customer and only shows the unique features, benefits and plans that are best for them.  How do we do that?”

He wanted an order of magnitude better solution.  He recognized that his company had to get closer to their customers to win and he wanted a web experience that helped them get there.

I did not think they were ready for that.  I told him that we should start with better A/B testing because what he wanted to do would be very complicated, risky and expensive.

Though that felt like the right answer at the time, it is definitely not the right answer today.

Why Website Experimentation Isn’t Enough for B2B

At a time when the web is vital to almost all businesses, rigorous online experiments should be standard operating procedure.

The often-cited Harvard Business Review article “The Surprising Power of Online Experiments” supports years of evidence from digital leaders suggesting that high-velocity testing is one of the keys to business growth.

The traditional process of website experimentation involves:

  1.    Gathering Evidence. – “Let’s look at the data to see why we’re losing conversions on this page.”
  2.    Forming Hypotheses. – “If we moved the plans higher up on the page we would see more conversions because visitors are not scrolling down.”
  3.    Building and Running Experiments. – “Let’s test a version with the plans higher up on the page.”
  4.    Evaluating Results to Inform the Hypothesis. – “Moving the plans up raised sign-ups by 5% but didn’t increase enterprise leads.  What if we reworded the benefits of that plan?”

Every conversion optimization practitioner follows some flavor of this methodology.

Typically, within these experiments, traffic is randomly allocated to one or more variations, as well as the control experience.  Tests conclude when there is either a statistically significant change in an onsite conversion goal or the test is deemed inconclusive (which happens frequently).
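Concretely, the “statistically significant change” a testing tool reports usually comes from something like the two-proportion z-test sketched below (plain Python, with assumed traffic numbers):

```python
from math import sqrt
from statistics import NormalDist

def ab_significant(conv_a, n_a, conv_b, n_b, alpha=0.05):
    """Two-proportion z-test: is variant B's conversion rate
    significantly different from control A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_value, p_value < alpha

# 3.0% vs 3.75% conversion on 4,000 visitors per arm: a 25% observed
# lift that still fails to clear the conventional alpha = 0.05 bar
p_value, significant = ab_significant(120, 4000, 150, 4000)
```

This is the mechanism behind the waiting: a lift that would delight any marketer can still be statistically indistinguishable from noise at realistic B2B traffic levels.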

If you have strong, evidence-based hypotheses and are able to experiment quickly, this will work well enough in some cases.  Over the years we have applied this approach over thousands of experiments and many clients to generate millions of dollars in return.

However, this is not enough for B2B.  Seeking statistically significant outcomes on onsite metrics often means that traditional website experimentation becomes a traffic-based exercise, not necessarily a value-based one.  While it may still be good enough for B2C sites (e.g. retail ecommerce, travel), where traffic and revenue are highly correlated, it falls apart in many B2B scenarios.

The Biggest Challenges with B2B Website Experimentation

B2B marketers face three main challenges with website experimentation as it is currently practiced:

  1.    It does not effectively optimize the KPIs that matter. – Experimentation does not easily accommodate down-funnel outcomes (revenue pipeline, LTV) or the complexity of B2B traffic and customer journeys.
  2.    It is resource-intensive to do right. – Ensuring that you are generating long-term and meaningful business impact from experimentation requires more than just the ability to build and start tests.
  3.    It takes a long time to get results. – Traffic limitations, achieving statistical significance and a linear testing process make getting results from experimentation a long process.

 

I.  KPIs That Matter

The most important outcome to optimize for is revenue.  Ideally, that is the goal we are evaluating experiments against.

In practice, many B2B demand generation marketers are not using revenue as their primary KPI (because it is shared with the sales team), so it is often qualified leads, pipeline opportunities or marketing influenced revenue instead.  In a SaaS business it should be recurring revenue (LTV).

If you cannot measure it, then you cannot optimize it.  Most testing tools were built for B2C and have real problems measuring anything that happens after a lead is created and further down the funnel, off-website or over a longer period of time.

Many companies spend a great deal of resources on optimizing onsite conversions but make too many assumptions about what happens down funnel.  Just because you generate 20% more website form fills does not mean that you are going to see 20% more deals, revenue or LTV.

You can get visibility into down funnel impact through attribution, but in my experience, it tends to be cumbersome and the analysis is done post-hoc (once the experiment is completed), as opposed to being integrated into the testing process.

If you cannot optimize for the KPIs that matter, the effort that the team puts into setting up and managing tests will likely not yield your B2B company true ROI.

Traffic Complexity and Visitor Context

Unlike most B2C sites, B2B websites have to contend with all sorts of different visitors across multiple dimensions, often with a long and varied customer journey.  This customer differentiation results in significantly different motivations, expectations and approaches.  Small business end-users might expect a free trial and a low-priced plan.  Enterprise customers often want security and support and expect to speak to sales.  Existing customers or free trial users want to know why they should upgrade or purchase a complementary product.

An added source of complexity (especially if you are targeting enterprise) is the need to market and deliver experiences to both accounts and individuals.  With over six decision makers involved in a typical enterprise deal, you must be able to speak to the motivations of both the persona/role and the account.

One of the easiest ways to come face-to-face with these challenges is to look at the common SaaS pricing page.

Despite my assertions several years ago, the benefits of A/B testing are going to be limited here.  You can change the names or colors of the plans or move them up the page, but ultimately, you are going to be stuck optimizing at the margin – testing hypotheses with low potential impact.

As the VP of Marketing wanted to do with us years ago, we would be better off showing the best plan, benefits and next steps to individual visitors based on their role, company and prior history.  That requires optimization based on visitor context, commonly known as website personalization.

Rules-based Website Personalization

The current standard for personalization is “rules-based” – marketers define fixed criteria (rules) for audiences and create targeted experiences for them.  B2B audiences are often account- or individual-based, such as target industries, accounts, existing customers or job functions.

Unfortunately, website personalization suffers from a lack of adoption and success in the B2B market.  67% of B2B marketers do not use website personalization technology, and only 21% of those that do are satisfied with results (vs 53% for B2C).

Looking at websites that have a major marketing automation platform and reasonably high traffic, you can see the discrepancy between those using commercial A/B testing vs Personalization:

The much higher percentage of sites that use A/B testing vs personalization suggests that although the value of experimentation is relatively well understood, marketers have not been able to see the same value from personalization.

What accounts for this?  

Marketers who support experimentation subscribe to the idea of gathering evidence to establish causality between website experiences and business improvement.  Unfortunately, rules-based personalization makes the resource-investment and time to value challenges involved with doing this even harder.

II.  Achieving Long-term Impact from Experimentation is Hard and Resource-intensive

At a minimum, to be able to simply launch and interpret basic experiments, a testing team should have skills in UX, front-end development and analytics – and as it turns out, that is not even enough.

Testing platforms have greatly increased access for anyone to start experiments.  However, what most people do not realize is that the majority of ‘winning’ experiments are effectively worthless (80% per Qubit Research) and have no sustainable business impact. The minority that do make an impact tend to be relatively small in magnitude.

It is not uncommon for marketers to string together a series of “winning” experiments (positive, statistically significant change reported by the testing tool) and yet see no long-term impact to the overall conversion rate.  This can happen through testing errors or by simply changing business and traffic conditions.

As a result, companies with mature optimization programs will typically also need to invest heavily in statisticians and data scientists to validate and assess the long-term impact of test results.

Rules-based personalization requires even more resources to manage experimentation across multiple segments.  It is quite tedious for marketers to set up and manage audience definitions and ensure they stay relevant as data sources and traffic conditions change.

We have worked with large B2C sites with over 50 members on their optimization team.  In a high volume transactional site with homogeneous traffic, the investment can be justified.  For the B2B CMO, that is a much harder pill to swallow.

III. Experimentation Takes a Long Time

In addition to being resource intensive, getting B2B results (aka revenue) from website testing takes a long time.

In general, B2B websites have less traffic than their B2C counterparts.  Traffic does have a significant impact on the speed of your testing; however, for our purposes that is not something I am going to dwell on, as it is relatively well-travelled ground.

Of course, you can do things to increase traffic, but many of us sell B2B products in specific niches that are not going to have the broad reach of a consumer ecommerce site.

What is more interesting is why we think traffic is important, and the impact that has on the time it takes to get results from testing.

You can wait weeks for significance on an onsite goal (which as I have discussed, has questionable value).  The effect that this has on our ability to generate long term outcomes, however, is profound.  By nature, A/B testing is a sequential, iterative process, which should be followed deliberately to drive learnings and results.

The consequence of all of this is that you have to wait for tests to be complete and for results to be analyzed and discussed before you have substantive evidence to inform the next hypothesis.  Of course, tests are often run in parallel, but for any given set of hypotheses it is essentially a sequential effort that requires learnings be applied linearly.

 

This inherently linear nature of testing, combined with the time it takes to produce statistically significant results and the low experiment win rate, makes actually getting meaningful results from a B2B testing program a long process.

It is also worth noting that with audience-based personalization you will be dividing traffic across segments and experiments.  This means that you will have even less traffic for each individual experiment and it will take even longer for those experiments to reach significance.

Is There a Better Way to Improve B2B Website Conversions?

The short answer?  Yes.

At FunnelEnvy, we believe that with context about the visitor and an understanding of prior outcomes, we can make better decisions than the randomized testing that websites use today.  We can use algorithms that learn and improve with every decision, continuously, to achieve better results with less manual effort from our clients.

Our “experimentation 2.0” solution leverages a real-time prediction model.  Predictive models use the past to predict future outcomes based on available signals.  If you have ever used predictive lead scoring or been on a travel site and seen “there is an 80% chance this fare will increase in the next 7 days,” then you have seen prediction models in action.

In this case, what we are predicting, is the best website visitor experience that will lead to an optimal outcome.  Rather than testing populations in aggregate, we are making experience predictions on a 1:1 basis based on all of the available context and historical outcomes.  Our variation scores take into account expected conversion value as well as conversion probability, and we continuously learn from actual outcomes to improve our next predictions.

Ultimately, the quality of these predictions depends on the quality of the signals we provide the model and the outcomes we are tracking.  By bringing together behavioral, first-party, and third-party data, we build a Unified Customer Profile (UCP) for each visitor and let the algorithm determine which attributes are relevant signals.  To ensure that our predictive model is optimizing for the most important outcomes, we incorporate Full Funnel Goal Tracking for individual (MQL, SQL) and account (opportunities, revenue, LTV) outcomes.

Example:  Box’s Homepage Experience

To see what a predictive optimization approach can do, let’s look at a hypothetical example:

Box.com has an above-the-fold Call to Action (CTA) that takes you to their pricing page.  This is a sensible approach when you do not have much context about the visitor, because from the pricing page you can navigate to the plan and option most relevant to you.

Of course, they are putting a lot of burden on the visitor to make a decision.  There are a total of 9 plans and 11 CTAs on that pricing page alone, and not every visitor is ready to select one – many still need to be educated on the solution.  We could almost certainly increase conversions if we made that above the fold experience more relevant to a visitor’s motivations.

SMB visitors might be ready to start the free trial once they have seen the demo, the enterprise infosec team might want to learn about Box's security features first, customers who are not ready to speak to sales or sign up might benefit from the online demo, and engaged decision makers at enterprise accounts might be ready to fill out the sales form.

Modifying the homepage sub-headline and CTA to accommodate these experiences could look something like the image below.  Note that they take you down completely different visitor journeys, something you would never do with a traditional A/B test.

If we had context about the visitor and historical data we could predict the highest probability experience that would lead to both onsite conversion as well as down funnel success.  The prediction would be made on a 1:1 basis as the model determines which attributes are relevant signals.

Finally, because we are automating the learning and prediction model, this would be no more difficult than adding variations to an A/B test, and far simpler and more precise than rules-based personalization.  The team would be relieved of the analytical heavy lifting, new variations could be added over time, and changing conditions would automatically be incorporated into the model.

Conclusion

Achieving “10X” improvements in today’s very crowded B2B marketplace requires shifts in approach, process and technology.  Our ability to get closer to customers depends on the better experiences we can deliver to them, which makes the rapid application of validated learnings that much more important.

“Experimentation 1.0” approaches gave human marketers the important ability to test, measure and learn, but the application of these in a B2B context raises some significant obstacles to realizing ROI.

As marketers, we should not settle for secondary indicators of success or delivering subpar experiences.  Optimizing for a download or form fill and just assuming that is going to translate into revenue is not enough anymore.  Understand your complex traffic and customer journey realities to design better experiences that maximize meaningful results, instead of trying to squeeze more out of testing button colors or hero images.

Finally, B2B marketers should no longer wait for B2C oriented experimentation platforms to adopt B2B feature sets.  “Experimentation 2.0” will overcome our human limitations to let us realize radically better results with much lower investment.

New platforms that prioritize relevant data and take advantage of machine learning at scale will alleviate the limitations of A/B testing and rules-based personalization.  Solutions built on these can augment and inform the marketers’ creative ability to engage and convert customers at a scale that manual experimentation cannot approach.

This post originally appeared on LinkedIn.

By | January 19th, 2018 | A/B Testing, Full-Funnel Optimization | 0 Comments

The B2B Marketers Guide to Thinking Beyond A/B Testing

Deliver more conversions with less effort with ‘The B2B Marketers Guide to Thinking Beyond A/B Testing’.

Download the guide and be inspired by optimization strategies that blow A/B testing away.

If you’re striving for experimentation excellence, there’s no better teacher than the best of the best. Whether you are looking for ways to increase on-site conversions, marketing-attributable revenue, or testing velocity, The B2B Marketers Guide to Thinking Beyond A/B Testing will equip you to succeed.

Download this ebook to discover:

  • Why traditional A/B testing is not effective for B2B, and what you can do about it.
  • How to optimize for on-site and down funnel (pipeline & revenue) outcomes.
  • How to re-think your on-site optimization to drive more predictable revenue.

Get The Guide Below

Share a few contact details and we’ll send a download link to your inbox.



By | December 21st, 2017 | A/B Testing, Full-Funnel Optimization | Comments Off on The B2B Marketers Guide to Thinking Beyond A/B Testing

Benefits of Personalization in B2B Marketing

Are you unsure about the benefits of B2B personalization for your bottom line? The example below shows exactly how a few web changes in favor of personalization can dramatically increase revenue.

Personalization impact on Lead Value

In the case above, we’ve got an enterprise B2B SaaS business with two distinct, defined groups: non-target accounts and target accounts.

Obviously, in a full account-based marketing scenario you’re going to have multiple tiers, but for this example we will use two.

What we’re assuming is a customer lifetime value of around $200,000 for your non-target accounts and $2 million for your target accounts. That’s roughly in line with enterprise SaaS businesses.

Let’s assume that both of those cohorts have a lead-to-close conversion rate of 1%.

That gives you a lead value on an LTV basis of $2,000 for your non-target accounts, and $20,000 for your target accounts. There is an obvious and significant difference there in lead value. Your target accounts are always going to represent a much higher lead value than non-target accounts.

If you’re not focused on personalizing the experience for target accounts, you can assume a higher percentage of incoming leads come from non-target accounts; let’s call that 80%, which means the other 20% are from your target accounts.

From there, you can arrive at a weighted lead value, again on an LTV basis, of around $5,600 for every lead that comes in.

Now if you are able to personalize the website experience for your most important accounts, you should be able to alter that mix and maybe even get it to 50/50, where 50% of the leads coming in are from your target accounts, and 50% are from your non-target accounts.

That’s obviously going to have a dramatic impact on the weighted lead value. As you can see, that’s $11,000, significantly more than when you’re not doing account-based marketing and personalization.
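The lead value arithmetic above can be reproduced in a few lines, using the figures stated in this example:

```python
# Weighted lead value on an LTV basis, using the figures from this example.
ltv = {"non_target": 200_000, "target": 2_000_000}
close_rate = 0.01  # 1% lead-to-close conversion for both cohorts

# Lead value per cohort: LTV * close rate
lead_value = {k: v * close_rate for k, v in ltv.items()}
# -> non_target: $2,000 per lead, target: $20,000 per lead

def weighted_lead_value(mix):
    """Blend the per-cohort lead values by the share of leads each contributes."""
    return sum(lead_value[k] * share for k, share in mix.items())

print(weighted_lead_value({"non_target": 0.8, "target": 0.2}))  # -> 5600.0
print(weighted_lead_value({"non_target": 0.5, "target": 0.5}))  # -> 11000.0
```

The entire gain comes from shifting the lead mix toward the high-LTV cohort; neither the close rate nor the per-cohort LTV changes.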

Personalization impact on your Website

So what does that actually mean for your website? Well, again let’s assume some numbers here.

  • 200,000 monthly unique visitors on average
  • 1,000 of those convert into leads
  • That gives you a conversion rate of 0.5%

As a marketing organization, you think, “Well we can do better than that. We want to invest in conversion rate improvements and we’re going to spend $200,000 to try and increase that conversion rate.” Let’s say with that investment, you’re successful.

You’ve invested in the team, in the tools, the technology to get you there, and as a result of that program, you see a 10% improvement in conversion rates.

Results of Not Personalizing

What does that actually mean for your business? In the scenario where you’re not doing personalization, you still see a pretty significant increase in the new LTV you generate. On a present-value basis, with some assumptions about a 3-year subscription length, you will add about $490,000 to the business. The ROI on that effort is 145%, which is great.

Results of Personalizing

In the case where you’re personalizing for target accounts, you can drive significantly more LTV into the business; well over $900,000 in this model, with an ROI of 380%.

Investing in ABM increases ROI from 145% to 380%!
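The ROI comparison works out as follows. Note that the ~$960K added-LTV figure for the personalized case is an inference implied by the stated 380% ROI on a $200,000 investment, not a number given in the post:

```python
# ROI as (added LTV - investment) / investment, using the post's figures.
investment = 200_000  # conversion optimization program cost

def roi(added_ltv):
    return (added_ltv - investment) / investment

print(f"{roi(490_000):.0%}")  # without personalization -> 145%
print(f"{roi(960_000):.0%}")  # with ABM personalization -> 380% (implied added LTV)
```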

That’s because fundamentally you’re taking advantage of the fact that you’re able to drive more leads from those target accounts because you’re personalizing for them and you’re driving deeper engagement.

As you can see, personalization more than doubles your ROI. This is really why companies with significantly more potential in certain accounts should be creating more personalized experiences on their website.

By | June 2nd, 2017 | Full-Funnel Optimization | 0 Comments

Pricing Page Conversion Tips

If you are a B2B SaaS business, you should be spending a lot of time on your pricing page, focused on iterative testing.

Pricing pages are vital for your bottom line. Those who end up there are late in the conversion funnel, and likely gathering the information to make a purchase decision, or at least consider that purchase.

Given its importance, you should be constantly A/B testing pricing pages, taking every incremental improvement as a win for your marketing department.

At a high level, pricing pages should quickly give users the information they need to make a purchase decision without introducing additional points of friction. While simple in theory, we at FunnelEnvy see many of these fundamental rules go unfollowed in practice.

As such, we have created a simple list of best practices you should follow to maximize conversions with your pricing pages.

Clear and Simple Pricing Tier Design

The specific pricing tiers on a pricing page are often an acute area of focus for testing, and somewhere you might want to spend a lot of time.

Here are 3 helpful steps to follow with your pricing page design.

  1. Focus on clarity and simplicity.
  2. Ensure the design conveys the most critical information about those pricing tiers in order for the user to make a purchase decision.
  3. Keep the calls to action clear and differentiated enough to stand out.  

I’m going to use two of the classic B2B examples for reference points here. The first one is from HubSpot, the second one is from Salesforce. Both do a great job of designing high-converting pricing tiers.

HubSpot pricing page

Salesforce pricing page

In both of the examples above, it’s very clear what you are getting with the different pricing tiers. The CTAs contrast in color with the rest of the page, and the prices are clearly stated.

If you’re looking for test ideas and you don’t think your design hits the mark there, you might want to test alternative designs.

Align your pricing with buyer personas.

The idea here is to articulate your pricing tiers in a way that resonates with the customer’s perception of their own business. For example, ‘professional’ and ‘enterprise’ are terms both HubSpot and Salesforce use, and people rarely arrive on either pricing page confused about which one to pick.

The pricing tiers are articulated and match how the user perceives their business. This helps the user not get stuck trying to choose between options.

Use a single core value metric

I recommend scaling your pricing tiers to a ‘single core value metric’. This is a single metric that the user understands and that reflects the value they get from the platform as they scale.

To take a HubSpot example, the pricing is based on the number of contacts, and the user intuitively understands that as they add more contacts in HubSpot, the pricing increases because the value to them increases as well.

So if you’re looking at your pricing page and want to improve conversions, and you find that your tiers aren’t consistent with your buyer personas or don’t clearly articulate a single core value metric, test reframing that pricing against your current baseline.

Eliminate multiple audiences for a single page

Let’s take a look at the example above. Can you spot the problem?

Right here we’re looking at two different audiences being targeted, and two completely different buyer journeys crammed into a single pricing page; a single web experience.

The basic and premium options are targeted towards small or medium-sized businesses and organizations, and the signup flow is self-registration; it occurs completely through the website.

But as you can see, they’re also trying to target enterprise customers through ‘request a demo’ and a conversation with the inside sales team, a completely different experience for a completely different audience.

That should set off some alarms. If you’re in the business of driving conversions, competing calls to action, audiences, and buyer journeys on a single page have a really negative impact on your conversion rate.

Solution: Optimize through personalization

How can we improve the experience of a pricing page with multiple buyer personas?

If you are able to identify the nature of that account or traffic coming into the site, you can show a unique experience for that specific audience.

For example, target SMBs based on previous browsing behavior and highlight the self-signup flow. Alternatively, through firmographic third-party integrations, identify whether a visitor is coming from a company like Apple or Microsoft, and highlight the enterprise plan and its benefits for that customer.

By personalizing the experience, you simplify the decision-making process for the visitor, and you’ll see an increase in conversions.

Clear Calls to Action

It’s really important on pricing pages to set clear expectations on the call to action as to what happens when the user clicks a button. In the HubSpot example, somewhat non-traditionally, they use two calls to action.

But in this case, they are differentiated both in design and in expectations. One is about trying it, engaging in the free trial; the other is about signing up.

Salesforce is very clear that when you click on that button, you start the free trial process.

So if the expectation isn’t clear in your call to action language, that might be something that you want to test.

Offer human help

Even if you get all of this right, there will be times when a user is still stuck staring at the pricing page, unable to make a decision. When that happens, that is really high-value traffic that you don’t want to lose.

If it’s worth it to you, you might want to test intervening with human help.

Offer to intervene, give them whatever information they need to potentially get that conversion, get them across the line!

There are a couple of ways you can go about this. Salesforce, after a couple of seconds, pops up a modal window offering to put you in front of a live agent.

Alternatively, if you have chat already on the site, certainly you could pop that chat window when the user’s stuck.

Try a chat window by Olark.

Keep things simple

The reality is that people don’t read content; at best, they skim. Furthermore, no one wants to read a complicated piece of content.

The same concept goes for your pricing page. Keep it as simple and easy to understand as possible. You should always be asking yourself, ‘What can we do to simplify this page?’

Help Scout has one of the simplest pricing pages

Look at the Help Scout pricing page above. It’s a simple price and sign-up box. There is no confusion for the customer; the action to take is right in front of them.

Some companies need tiered pricing given the complexity of their offerings. Yet if A/B tests show increased conversions and your bottom line is unaffected, consider dropping the multiple price points.

Highlight your unique values

Many times users are comparing various vendors for a specific product across specific criteria. Once you know those criteria for your product, make them very obvious.

If you know price is a big factor, make it the first thing people see on the page. If you are a storage company like Box, make the storage limits one of the first things people see.

Box pricing page

In many instances, people don’t even know what they want. In the classic marketing book ‘Predictably Irrational’, Dan Ariely states, “Most people don’t know what they want unless they see it in context”. For example, people don’t know which basketball shoe to buy until they see it on a professional.

The same goes for a pricing page. Many people won’t know which factors are important to consider. It’s up to you to spell it out for your customers and make the most important features front and center.

Suggest Plans to Users

It can often be beneficial to push users towards a certain plan. By doing so, you reduce purchase friction and make it easy for users to make a decision.

Spotify pricing page

Now the question becomes which plan to recommend. There is no clear cut answer to this question, but here are a few things to consider.

  • Consider known data about a user. Using the FunnelEnvy platform, you can bring Marketo or Salesforce data into your pricing page tests. This gives you the ability to personalize the pricing page depending on the user or account. As a basic test, try changing the recommendation based on company size!
  • Recommend the most expensive option. While it is unrealistic to expect the majority of people to choose that option, you can expect people to choose the second most expensive. This is advantageous if people often opt for your least expensive option and you want to incentivize them to upgrade.
  • Free trial. While the free trial isn’t going to bring you the most money up front, it does have the least amount of friction. If you can work out the numbers, pushing people to a free trial might be your best option.
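As a concrete sketch of the first idea, a company-size-based recommendation might look like the following. The thresholds, plan names, and the idea of an enrichment lookup are all illustrative assumptions, not a specific platform's API:

```python
# Illustrative rule for which plan to highlight, keyed off known account data
# (e.g., an employee count pulled from a CRM or enrichment source).
# Thresholds and plan names are assumptions for the example.
def recommended_plan(employee_count):
    if employee_count is None:
        return "professional"  # reasonable default when nothing is known
    if employee_count < 50:
        return "basic"
    if employee_count < 1000:
        return "professional"
    return "enterprise"

print(recommended_plan(25))    # -> basic
print(recommended_plan(5000))  # -> enterprise
```

In practice you would feed whatever firmographic signal you trust most into a rule like this (or let a model pick the thresholds), then test the recommendation against a no-recommendation baseline.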

Address Fears

In the world of sales there is the concept of FUD: fear, uncertainty, and doubt. FUD comes out when a customer is making a purchase; the bigger the purchase, the greater the opportunity for it to arise.

The best way to handle FUD is to address it up front on the pricing page. To get started, write down all the possible fears, uncertainties, and doubts people might have, then write down your response to each.

Here are some examples.

  1. Customer: “I won’t get the ROI from this software.” Response: “Average customers increase revenue by 34% with the software.”
  2. Customer: “The software will be difficult to implement.” Response: “We offer free implementation support to guarantee you get up and running.”

See how much more at ease the customer will feel after their fears are addressed.

SumoLogic Pricing Table

See how SumoLogic offers a free trial to reduce fears that users won’t get any value back from signing up.

Show Validation

It’s important that customers know other companies use and have success with your software. This validation helps to reduce fear for the customer about spending their money.

The easiest way to show validation is through quotes. See below how SumoLogic reduces fear that users won’t be able to get the software into production by showing a quote.

Customer Quote from SumoLogic

If you want to show the most relevant quote for a particular visitor, we recommend using personalization to show a quote based on the user’s industry. For example, if you know a visitor works in the travel industry, you could show a quote from someone at Airbnb.

Incentivize Longer Commitments

It’s always beneficial to have customers pay for year-long commitments up front. This keeps them locked in longer and makes it impossible to stop after only a month. While there might be some conversion drop-off from those who don’t want to pay all at once, the overall benefit to your business can be tremendous.

Let’s take a look at some examples.

In this example from Dropbox, you can see that the pricing defaults to annual billing.

Dropbox Annual Billing

However, you can also switch to monthly billing, although it costs more per month.

Dropbox Monthly Billing

The goal here is to push people into the yearly billing and longer commitment.

Below is a similar example from AppSumo. On this pricing page, they subtly say “paid annually” under the monthly price. While it can be construed as a ‘dirty trick’ to make someone think they are paying a monthly rate, this practice is quite common on pricing pages.

AppSumo Pricing Page

 

Conclusion

I hope you can use some of these tips in your own pricing pages. If you have any questions about how to best optimize your pricing page, please let us know in the comments, and we’ll be sure to give you some tailored suggestions.

By | May 11th, 2017 | Full-Funnel Optimization | 0 Comments