Last week leading A/B testing company Optimizely held its first conference, OptiCon 2014, in San Francisco. In addition to several exciting announcements from the company, OptiCon brought together leading Conversion Rate Optimization and digital marketing professionals to share their experiences and learn from one another. As tools like Optimizely make the tactics of testing easier and more accessible, the challenge shifts to the process and organizational hurdles that must be overcome to build successful optimization programs. These seemed to be the dominant themes in the sessions I attended, which I’ve summarized along with some key takeaways here.

Session 1: Be Relevant: How Optimization Helps You Know Your Audience

Dave Nuffer (Liftopia), Alissa Polucha (Microsoft Stores)

If paid search (SEM) is a significant component of your acquisition strategy, a high degree of relevance in the user experience, from ad impression all the way to conversion, is critical to your ability to convert. Dave spoke about reducing distractions and focusing on deals on landing pages (a 20% lift), but also about a more complex testing scenario where they wanted to test multiple funnels and keep users within those funnels based on their ad keyword (a 24% lift).
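
Here is a quick sketch of how that kind of keyword-based funnel “stickiness” might work. This is my own Python illustration, not Liftopia’s actual implementation; the keyword-to-funnel mapping and the cookie handling are assumptions:

```python
# Hypothetical sketch: route visitors into a funnel based on their ad
# keyword, and keep them in that funnel on subsequent pageviews.

FUNNELS = {
    "ski deals": "/funnel/deals",       # illustrative mappings, not real ones
    "lift tickets": "/funnel/tickets",
}
DEFAULT_FUNNEL = "/funnel/general"

def assign_funnel(ad_keyword, cookies):
    """Return the funnel for this visitor, creating it on first visit."""
    if "funnel" in cookies:
        return cookies["funnel"]        # already in a funnel: keep them there
    funnel = FUNNELS.get(ad_keyword.lower(), DEFAULT_FUNNEL)
    cookies["funnel"] = funnel          # persist so later pageviews stay consistent
    return funnel

cookies = {}
print(assign_funnel("Ski Deals", cookies))     # /funnel/deals
print(assign_funnel("lift tickets", cookies))  # still /funnel/deals: sticky
```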

Alissa added four points for bringing more relevance to optimization:

  1. Focus on the problem, not the solution – Use insights from customers to develop hypotheses.
  2. Experimentation never ends – Continue to iterate and retest.
  3. Know your audience – Segment and target key markets.
  4. Start at the action – Testing at the end of the funnel can lead to more significant business value.

Key takeaways: It’s essential to understand your audience (and sub-audiences) and structure your optimization and testing efforts to provide a more relevant experience for them.

Session 2: Is This Normal? Common Challenges Testing Programs Face

Ryan Garner (Clearhead), Jessica Vasbinder (Warner Music Group)

Many testing and optimization groups face similar challenges: generating and prioritizing ideas, identifying appropriate resources, and communicating results. For idea generation, Ryan and Jessica discussed looking at multiple channels, both internal (e.g. products and customers) and external (user testing, competitors and vendors). Appropriately prioritizing these testing ideas is incredibly important if you want to maximize the returns from your testing program, and the speakers recommended a simple scoring system that balances the level of effort of testing and implementation, the perceived accuracy of the hypothesis, and the business impact.
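
They didn’t share an exact formula, but a minimal sketch of what such a scoring system could look like is below. The 1–5 scales, the weights, and the example ideas are my own assumptions, not Clearhead’s actual model:

```python
# A minimal sketch of a test-prioritization score, assuming each idea is
# rated 1-5 on three dimensions. Weights and scales are illustrative only.

def priority_score(effort, confidence, impact,
                   w_effort=1.0, w_confidence=1.0, w_impact=1.0):
    """Score a testing idea; higher scores should be tested first.

    effort     -- 1 (trivial to build and implement) to 5 (very expensive)
    confidence -- 1 (wild guess) to 5 (strong evidence for the hypothesis)
    impact     -- 1 (minor metric) to 5 (core business metric)
    """
    # Reward confidence and impact; penalize effort.
    return w_confidence * confidence + w_impact * impact - w_effort * effort

ideas = {
    "Simplify checkout form": priority_score(effort=2, confidence=4, impact=5),
    "New homepage hero image": priority_score(effort=1, confidence=2, impact=2),
    "Rebuild pricing page": priority_score(effort=5, confidence=3, impact=4),
}

# Highest score first: the checkout test wins despite moderate effort.
for name, score in sorted(ideas.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{score:5.1f}  {name}")
```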

Tests have the potential to break the site. Although it’s important to diligently QA tests before launching, Ryan and Jessica rightly noted that the testing process should iterate much faster than product development, and that in testing, risk and reward are often correlated. They therefore recommend having talented, experienced developers and designers build your tests. Finally, they addressed the challenges of reporting results back to management, and cautioned against creating future growth projections purely from experimentation results.

Key takeaways: Improving your testing process to incorporate and accurately prioritize a broad base of ideas, iterating rapidly with experienced resources and managing expectations on test results will mitigate many of the common challenges A/B testing programs face.

Session 3: How to Increase Your Testing Success by Combining Qualitative & Quantitative Data

Hiten Shah (KISSMetrics)

Conversion Rate Optimization is inherently analytical and quantitative. Getting completely absorbed in the numbers, however, is a common trap, and it’s often the qualitative insights that either help you come up with great testing ideas or uncover why the observed test results occurred. Hiten Shah’s talk focused on the idea that “the hardest thing to do is to come up with a good test” and on using qualitative sources such as UserTesting, CrazyEgg, Qualaroo, and SurveyMonkey in conjunction with quantitative data. He also offered five tactical suggestions to improve your testing:

  1. Find out why people are not converting – Ask the right questions and try to get feedback at the point of non-conversion or cancellation.
  2. Use the exact words that your customers do.
  3. Start with a hypothesis and then prioritize – Have key questions for each experiment and a framework for prioritization.
  4. Create experiments that are small – Rather than expending huge amounts of developer and testing resources on a single test, see if you can run smaller experiments to learn incrementally.
  5. Continuously experiment – Keep iterating in small steps and “pass the baton”.

Key takeaways: Coming up with good tests is essential but hard. Combine numerous sources of evidence (qualitative and quantitative) to design your next test. Test incrementally and continuously to minimize waste.

Session 4: Fail and Win: Why a Failed Test Isn’t a Bad Thing

Caleb Whitmore (Analytics Pros)

Everyone who has done even a minimal amount of A/B testing has had a “failed” test. It’s actually not uncommon to see statistically significant improvements from only 1 out of every 8 tests. In my opinion, if you’re learning something from those other 7 tests it’s a disservice to label them ‘failed’, and in this session Caleb Whitmore discussed some other reasons why failed tests are beneficial. First is the idea that actually launching and running a test is a victory in and of itself, and represents the organization’s commitment to improvement through measurement and experimentation. If an outcome is measured, the organization has learned something, especially if you have a complete picture (via analytics) of what happened during the course of the test. Often, if you dig deep enough into the data you’ll find some interesting results that can inform your next test.
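
For readers who want to see what “statistically significant” means in practice, here is a minimal sketch of the standard two-proportion z-test commonly used to evaluate an A/B test. The conversion numbers are invented, and this is not Optimizely’s stats engine:

```python
# A sketch of a two-proportion z-test on made-up A/B numbers. This
# illustrates the significance check, not any vendor's internal engine.
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return (observed lift, two-sided p-value) for variant B vs. A."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    lift = (p_b - p_a) / p_a
    return lift, p_value

# Hypothetical test: 10,000 visitors per arm.
lift, p = two_proportion_z_test(conv_a=500, n_a=10_000, conv_b=560, n_b=10_000)
print(f"lift = {lift:+.1%}, p = {p:.3f}")  # +12% lift, p ≈ 0.06: not yet significant at 0.05
```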

Key takeaways: Just because a test doesn’t result in the desired conversion lift doesn’t mean it is without value. Designing, launching and running the test reaffirms the process, and often by digging into the data after the test you’ll find great insights that will ultimately result in a successful test.

Session 5: Conversion Optimization: The World Beyond Headlines and Button Color

Patrick McKenzie (Kalzumeus Software)

Often when we see organizations try to build a testing-based culture, they start with small-scale experiments (to understand and integrate the process) and move on to bigger tests as they get more comfortable. Based on years of experience as a CRO consultant for Software-as-a-Service businesses, Patrick McKenzie explored some of the more impactful A/B tests that he has run (including one for Khan Academy that will give millions more disadvantaged students access to education). Some of the testing areas he explored were:

  • Product / Pricing Grid – Almost every subscription / SaaS business has one, and testing price points and tier cutoff levels can have a dramatic bottom-line impact.
  • Payment Methods / Checkout Flow – This is where the rubber meets the road for ecommerce, and consequently deserves a lot of optimization effort. Just be sure to go through it as a real customer and don’t break it.
  • Offline Business Processes – Patrick offered the example of Atlassian, which continuously measures offline vs online sales processes (and optimizes for online). Lead conversion by the sales team is also something that can be tested and optimized.
  • Onboarding – Users who just signed up are enthusiastic and love great onboarding experiences. Get them to success as soon as possible, and make it easier for them to engage their team members.
  • Core Product Interactions – Test the user experience of the areas where your users spend the majority of their energy and time.

Key takeaways: Getting to the point where you can test the low-hanging fruit is necessary. To go beyond that, seek out testing opportunities that can have the most dramatic business impact.

Final Keynote: Why Great Marketers Must be Great Skeptics

Rand Fishkin (Moz)

Conversion Rate Optimization has its roots in the scientific method, with significant emphasis on experimentation and evidence. Rand Fishkin explored this through three marketing archetypes: the crap skeptic, the good skeptic, and the great skeptic. Leaving aside the crap skeptic (who blindly adopts changes without testing), the most important distinction he made is between the good and the great skeptic. The good skeptic (and marketer) will always try to test assumptions and build evidence, but it’s the great skeptic (marketer) who understands that correlation does not imply causation and therefore seeks to understand the reasons behind the results. In trying to find those reasons, the great skeptic asks the right questions and develops a better understanding of the target customers. Fishkin also reiterated the importance of prioritization: great marketers will always be asking, is this the most important thing we could be testing right now?

Key takeaway: Be a great skeptic. Look beyond the superficial test results and push to find causal reasons behind the data. Prioritize all of these efforts in a broader business context.

OptiCon 2014 was an educational and inspiring day full of great discussions about effective testing and optimization. As more organizations build optimization programs and experience success, it will be interesting to watch the discussion evolve at future OptiCons. See you next year!