B2B marketers face three main challenges with website experimentation as it is currently practiced:
- It does not optimize well for the KPIs that matter. – Experimentation does not easily accommodate down-funnel outcomes (revenue pipeline, LTV) or the complexity of B2B traffic and customer journeys.
- It is resource-intensive to do right. – Ensuring that you are generating long-term and meaningful business impact from experimentation requires more than just the ability to build and start tests.
- It takes a long time to get results. – Traffic limitations, achieving statistical significance and a linear testing process make getting results from experimentation a long process.
I. KPIs That Matter
The most important outcome to optimize for is revenue. Ideally, that is the goal we are evaluating experiments against.
In practice, many B2B demand generation marketers are not using revenue as their primary KPI (because it is shared with the sales team), so it is often qualified leads, pipeline opportunities or marketing-influenced revenue instead. In a SaaS business it should be recurring revenue (LTV).
If you cannot measure it, then you cannot optimize it. Most testing tools were built for B2C and have real problems measuring anything that happens after a lead is created and further down the funnel, off-website or over a longer period of time.
Many companies spend a great deal of resources on optimizing onsite conversions but make too many assumptions about what happens down funnel. Just because you generate 20% more website form fills does not mean that you are going to see 20% more deals, revenue or LTV.
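The gap between on-site lift and down-funnel impact is simple funnel arithmetic. A minimal sketch (all conversion rates here are hypothetical) of how a 20% lift in form fills can still yield fewer deals if the extra leads convert worse:

```python
# Illustrative funnel math with made-up numbers: a 20% lift in form fills
# only becomes 20% more revenue if down-funnel conversion rates hold.

def deals(form_fills, lead_to_opp_rate, opp_to_deal_rate):
    """Deals produced by a cohort of form fills."""
    return form_fills * lead_to_opp_rate * opp_to_deal_rate

# Baseline: 1,000 form fills, 30% become opportunities, 25% close.
baseline = deals(1000, 0.30, 0.25)   # ~75 deals

# Variant "wins" on-site with +20% form fills, but the incremental leads
# are lower quality, so lead-to-opportunity drops from 30% to 24%.
variant = deals(1200, 0.24, 0.25)    # ~72 deals: more fills, fewer deals

print(baseline, variant)
```

The specific rates are invented for illustration; the point is that an on-site metric can move up while the metric that matters moves down.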
You can get visibility into down funnel impact through attribution, but in my experience, it tends to be cumbersome and the analysis is done post-hoc (once the experiment is completed), as opposed to being integrated into the testing process.
If you cannot optimize for the KPIs that matter, the effort that the team puts into setting up and managing tests will likely not yield your B2B company true ROI.
II. Achieving Long-term Impact from Experimentation is Hard and Resource-intensive
At a minimum, to be able to simply launch and interpret basic experiments, a testing team should have skills in UX, front-end development and analytics – and as it turns out, that is not even enough.
Testing platforms have greatly increased access for anyone to start experiments. However, what most people do not realize is that the majority of ‘winning’ experiments are effectively worthless (80% per Qubit Research) and have no sustainable business impact. The minority that do make an impact tend to be relatively small in magnitude.
It is not uncommon for marketers to string together a series of “winning” experiments (positive, statistically significant change reported by the testing tool) and yet see no long-term impact to the overall conversion rate. This can happen through testing errors or by simply changing business and traffic conditions.
As a result, companies with mature optimization programs will typically also need to invest heavily in statisticians and data scientists to validate and assess the long-term impact of test results.
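One reason spurious "winners" accumulate is basic false-positive arithmetic: even if every variant truly has no effect, the chance that at least one test reports a statistically significant win grows quickly with the number of tests run. An illustrative sketch at the conventional 5% significance level:

```python
# Chance of at least one false-positive "winner" across k independent
# tests of truly no-effect variants, at alpha = 0.05.

alpha = 0.05
for k in (1, 5, 10, 20):
    p_any_false_win = 1 - (1 - alpha) ** k
    print(k, round(p_any_false_win, 2))
# 20 no-effect tests give roughly a 64% chance of a spurious "winner"
```

This is exactly the kind of multiple-testing effect that statisticians on mature optimization teams are there to catch.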
Rules-based personalization requires even more resources to manage experimentation across multiple segments. It is quite tedious for marketers to set up and manage audience definitions and ensure they stay relevant as data sources and traffic conditions change.
We have worked with large B2C sites with over 50 members on their optimization team. In a high volume transactional site with homogeneous traffic, the investment can be justified. For the B2B CMO, that is a much harder pill to swallow.
III. Experimentation Takes a Long Time
In addition to being resource intensive, getting B2B results (aka revenue) from website testing takes a long time.
In general, B2B websites have less traffic than their B2C counterparts. Traffic does have a significant impact on the speed of your testing; however, for our purposes I am not going to dwell on it, as it is relatively well-travelled ground.
Of course, you can do things to increase traffic, but many of us sell B2B products in specific niches that are never going to have the broad reach of a consumer ecommerce site.
What is more interesting is why we think traffic is important and the impact that has on the time it takes to get results from testing.
You can wait weeks for significance on an onsite goal (which, as I have discussed, has questionable value). The effect this has on our ability to generate long-term outcomes, however, is profound. By nature, A/B testing is a sequential, iterative process, which should be followed deliberately to drive learnings and results.
The consequence of all of this is that you have to wait for tests to be complete and for results to be analyzed and discussed before you have substantive evidence to inform the next hypothesis. Of course, tests are often run in parallel, but for any given set of hypotheses it is essentially a sequential effort that requires that learnings be applied linearly.
This inherently linear nature of testing, combined with the time it takes to produce statistically significant results and the low experiment win rate, makes actually getting meaningful results from a B2B testing program a long process.
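To make the timeline concrete, the standard normal-approximation sample-size formula for comparing two conversion rates shows why lower-traffic sites wait weeks. A sketch with hypothetical numbers (a 3% baseline form-fill rate, a hoped-for 20% relative lift, 5,000 weekly visitors) at the conventional 95% confidence and 80% power:

```python
import math

def visitors_per_variant(p_base, rel_lift, alpha_z=1.96, power_z=0.84):
    """Approximate visitors needed per variant for a two-proportion
    z-test (normal approximation; 95% confidence, 80% power default)."""
    p_var = p_base * (1 + rel_lift)
    delta = p_var - p_base
    p_bar = (p_base + p_var) / 2
    return math.ceil(
        2 * (alpha_z + power_z) ** 2 * p_bar * (1 - p_bar) / delta ** 2
    )

# Hypothetical B2B page: 3% form-fill rate, detecting a +20% lift.
n = visitors_per_variant(0.03, 0.20)
weekly_traffic = 5000                 # total visitors/week, split 50/50
weeks = 2 * n / weekly_traffic
print(n, round(weeks, 1))   # roughly 14,000 per arm -> about 5-6 weeks
```

Note that halving the traffic, or chasing a smaller lift, stretches that timeline further; a 10% relative lift at the same traffic would take roughly four times as long.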
It is also worth noting that with audience-based personalization you will be dividing traffic across segments and experiments. This means that you will have even less traffic for each individual experiment and it will take even longer for those experiments to reach significance.
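The arithmetic of segmentation is unforgiving: if each segment must reach significance on its own share of traffic, test duration scales roughly linearly with the number of segments. A small sketch using hypothetical figures:

```python
# If each audience segment must independently reach significance, each
# one sees only a fraction of the traffic, so duration scales with the
# number of segments (all figures hypothetical).

required = 14_000    # visitors per variant to reach significance
weekly = 5_000       # total weekly traffic, split across 2 variants

def weeks_to_significance(n_segments):
    # two variants per segment; each segment gets weekly / n_segments
    return 2 * required * n_segments / weekly

for segs in (1, 3, 5):
    print(segs, weeks_to_significance(segs))
# duration grows linearly: ~5.6, ~16.8, ~28 weeks
```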
Achieving “10X” improvements in today’s very crowded B2B marketplace requires shifts in approach, process and technology. Our ability to get closer to customers is going to depend on the better experiences we can deliver to them, which makes the rapid application of validated learnings that much more important.
“Experimentation 1.0” approaches gave human marketers the important ability to test, measure and learn, but the application of these in a B2B context raises some significant obstacles to realizing ROI.
As marketers, we should not settle for secondary indicators of success or delivering subpar experiences. Optimizing for a download or form fill and just assuming that is going to translate into revenue is not enough anymore. Understand your complex traffic and customer journey realities to design better experiences that maximize meaningful results, instead of trying to squeeze more out of testing button colors or hero images.
Finally, B2B marketers should no longer wait for B2C oriented experimentation platforms to adopt B2B feature sets. “Experimentation 2.0” will overcome our human limitations to let us realize radically better results with much lower investment.
New platforms that prioritize relevant data and take advantage of machine learning at scale will alleviate the limitations of A/B testing and rules-based personalization. Solutions built on these can augment and inform the marketer’s creative ability to engage and convert customers at a scale that manual experimentation cannot approach.