Facebook and OKCupid Experiments: Ethics, Conversion Testing, and UX Design

The headlines have recently been flooded with the news that Facebook and OKCupid have been conducting experiments on their users in an effort to collect data and improve the overall user experience.

Analysts and commentators from the public at large have raised the same concerns: is this ethical? Does running certain types of testing violate user trust, especially when the impact is carried off your website and into the “real world?” Where do companies draw the line?

While you shouldn’t let fears dominate your testing agenda, it’s helpful to have a clear idea of where your company stands on these issues and how that impacts what you disclose to customers and website visitors. Here’s a closer look at some of the ethics of testing and what these recent case studies can teach us.


What can conversion testing teach us about dating? Image credit: Flickr user Sergey Sus

Privacy concerns and their impact on conversion testing

It’s impossible to have a discussion about data collection and testing without addressing the topic of user privacy.

Testing and data tracking, to one extent or another, raise ethical and privacy concerns about your customers’ information. The EU has already taken a strong stance by requiring websites that use tracking cookies to display a prominent notice for users.

In traditional market research, you’re asking a survey respondent to opt into your research. You might be trying to influence their response on some level, by using an incentive or a script designed to encourage their participation.

But today’s analytics and tracking programs usually measure everything from demographic data to behavioral information in an under-the-surface, unobtrusive way. Largely, this information is analyzed in the aggregate. Even if you’re looking at specific consumer segments, an individual user’s information is protected.
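To make the “aggregate” point concrete, here’s a minimal sketch (the event structure and segment names are invented for illustration) of how raw analytics events get rolled up into segment-level rates, so no individual row ever appears in a report:

```python
from collections import Counter

# Hypothetical raw analytics events: (user_id, segment, converted)
events = [
    ("u1", "mobile", True),
    ("u2", "mobile", False),
    ("u3", "desktop", True),
    ("u4", "desktop", True),
]

# Report only per-segment aggregates -- individual rows never leave this step.
visits = Counter(seg for _, seg, _ in events)
conversions = Counter(seg for _, seg, conv in events if conv)
rates = {seg: conversions[seg] / visits[seg] for seg in visits}
print(rates)  # {'mobile': 0.5, 'desktop': 1.0}
```

Only the `rates` dictionary would be surfaced in a dashboard; the per-user rows stay behind the scenes.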

While a test for a different website layout or new copy might change smaller details or encourage a user to take action, an individual visitor’s experience isn’t drastically affected nor is her personal agency removed in any decision being made. But that doesn’t mean that you shouldn’t consider the ethics of conversion testing, and determine where your company’s policy stands on key issues. As we’ll explore below, the public backlash to testing can have an impact on your brand.

Deliberately changing the user experience

Facebook publicly announced in mid-June that it had been testing outside the usual scope of its conversion and user testing agenda. The social media giant is notorious for regularly testing new models for its news feed, including which ads users see and which of their friends’ posts get displayed with regularity.

A certain amount of testing is, I think it’s fair to say, implied when you opt into the social media experience. Studies show that social media sites are designed to be increasingly addictive. In a sense, this demonstrates a highly effective testing program that is rapidly formulating hypotheses, designing tests, rolling them out, and incorporating findings into UX design.

But when news broke that Facebook had manipulated the emotional content shown to almost 700,000 users to see whether emotions or moods were contagious, reactions were mixed.

The study question was simple: if the posts you saw were manipulated to all be positive or negative, would that impact the tone of the updates that you wrote? In other words, would depressing content make you depressed or joyful content make you happy? The study found that yes, there is a correlation between the tone of what users see and their own subsequent moods.

The study’s lead researcher published a public apology, and explained that the study was designed to answer questions about whether positive content made users feel bad — as we so often see portrayed in the media — or whether negative updates caused users to avoid Facebook. Many felt that the study crossed an ethical boundary, and both the FTC and the EU have launched investigations. The New York Times pointed out an important factor: typically academic research of this nature would require getting study participants’ consent. Instead, Facebook relied on users’ agreement to their terms and conditions when signing up for the site. Was that enough? Public and legislative reactions have been conflicted.

It raises important questions for sites and testers: do you have the minimum disclosures in place on your site? And does any of the testing that you’re conducting suggest that you need to disclose anything more clearly to users?

In large part, standard conversion testing doesn’t require you to take special steps. But if you’re running more in-depth projects, it’s worth at least examining the question.

Testing impacts brand perceptions

Not long ago, I was following a well-orchestrated launch of a new workout program from a major name in the fitness industry.

Part way through the launch, he published an interesting newsletter to his email list. He addressed the fact that customers were sharing feedback on social media and with his customer service team, expressing frustration with pricing: one customer had bought the product for $49, while another paid $69. People wondered if it was a technical glitch or some other error. It turned out to be part of a price testing program, designed to see which price point converted best and to ultimately determine the final price for the product.

Unfortunately, because of the way it was handled, long-time customers felt deceived. People who had paid $20 more than others for the same product on the same day were left with a sour taste. The fitness company offered refunds to those who paid the higher price, but stood behind its statement that this was a standard marketing practice.

If you’re conducting tests, particularly price testing, it’s important to structure your deployment carefully. If you don’t, damage to your brand and to customer trust can be a very real byproduct.
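One practical safeguard, if you do show different prices in parallel, is to assign the variant deterministically from a stable user ID rather than randomly per page view, so a returning visitor never sees two different prices. A minimal sketch (the price points echo the example above; hashing an account ID is just one common approach, and it only helps for logged-in users):

```python
import hashlib

def price_variant(user_id: str, prices=(49, 69)) -> int:
    """Deterministically assign a price variant from a stable user ID.

    Hashing the ID, instead of picking randomly on each visit, means the
    same user always lands in the same bucket across sessions.
    """
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % len(prices)
    return prices[bucket]

# The assignment is stable: repeated calls for one user always agree.
assert price_variant("user-42") == price_variant("user-42")
```

This doesn’t make parallel price testing risk-free (friends can still compare receipts), but it removes the most visible glitch: one person seeing two prices.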

Off the screen, into your “real” life

OKCupid and other dating sites rely on proprietary algorithms that use the data you supply in your profile, your answers to screening questions, and your behavioral history to help determine who you are likely to click with. Do you routinely go for blondes or dates over six feet tall? Whatever patterns emerge from your collective data, the idea is that the results a site displays to you will become increasingly focused and on target over time.
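As a toy illustration of how answer data can drive such a score — this is not OKCupid’s actual formula, just a weighted-agreement sketch with invented questions and weights:

```python
def compatibility(answers_a: dict, answers_b: dict, importance: dict) -> int:
    """Toy match score: percentage of importance-weighted questions
    that two users answered the same way."""
    total = sum(importance.values())
    agree = sum(weight for question, weight in importance.items()
                if answers_a.get(question) == answers_b.get(question))
    return round(100 * agree / total)

a = {"pets": "dogs", "politics": "left", "smoking": "no"}
b = {"pets": "dogs", "politics": "right", "smoking": "no"}
weights = {"pets": 1, "politics": 3, "smoking": 2}
print(compatibility(a, b, weights))  # 50
```

Here the pair agrees on questions worth 3 of 6 total importance points, so the score is 50%. A real matching engine weighs far more signals, but the principle — pattern agreement weighted by how much each answer matters — is the same.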

Recently, OKCupid released the results of three user tests that it conducted. In the first, user profile images were hidden.

The second test hid selected profile details to see how that would impact user perceptions and personality ratings. In a third test, users were told that certain people were a better or worse potential match than the site’s software had actually scored them.

In a blog post, the company’s CEO was unapologetic about the testing: “If you use the Internet, you’re the subject of hundreds of experiments at any given time, on every site. That’s how websites work.”

He goes on to explain the role of testing in building a great product, especially online:

“But OkCupid doesn’t really know what it’s doing. Neither does any other website. It’s not like people have been building these things for very long, or you can go look up a blueprint or something. Most ideas are bad. Even good ideas could be better. Experiments are how you sort all this out.”

Largely, of course, he’s right. And interestingly, the OKCupid news didn’t engender the same fury that the Facebook revelations did. Yet one of the research questions driving the OKCupid experiment was to understand how effective its software really is. Typically, OKCupid measures effectiveness based on how often people respond to messages, how long a conversation goes, and the exchange of contact information. The company decided to test its effectiveness more systematically.

The company told “bad matches” (with about 30% compatibility) that they were actually 90% compatible.

Messaging increased and the results showed that the simple suggestion of compatibility moved the conversation forward. OKCupid based this on measures like minimum number of messages exchanged and whether contact information was given.
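Whether a lift like that is more than noise can be checked with a standard two-proportion z-test on, say, reply rates in the control group versus the “told you’re 90% compatible” group. A sketch with made-up counts, not OKCupid’s actual data:

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Z-statistic for the difference between two conversion rates,
    using the pooled standard error."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical reply counts: 120/1000 control vs. 170/1000 "suggested match".
z = two_proportion_z(conv_a=120, n_a=1000, conv_b=170, n_b=1000)
print(z)  # |z| > 1.96 would be significant at the 5% level
```

With these invented numbers the z-statistic comes out well above 1.96, which is the kind of check a team would run before concluding that the suggestion itself moved behavior.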

But what would happen if these meetings got taken offline?

I’m imagining the worst-case scenarios: a fervent vegan inadvertently goes to dinner with a trophy hunter; a Democrat on the market for another Democrat ends up in an awkward political debate with a diehard Libertarian. It’s just one date and, no doubt, some people were surprised to connect successfully with people they’d never have gone for otherwise. It was their time, expectations, and trust of the brand that ultimately may have been impacted.


Each of these case studies illustrates one major point: it’s possible that the testing you’re conducting could publicly impact your brand.

Your customers, the media, or the public at large could one day be scrutinizing the way that you collect and use data. It’s a useful thing to remember, when you’re developing and implementing tests.

One of the biggest takeaways is that while it’s imperative that your privacy policy and user terms clearly state that you may be tracking data or conducting testing, it may not be sufficient for the testing that you have in mind. Particularly in contexts where you plan to take your testing beyond basic analytics or into the realms of customers’ wallets and offline lives, it’s important to think about your disclosure policy.

Could your actions have implications for your customers’ lives or their ongoing relationship with your brand that you need to consider? Are you comfortable that you could reasonably tell a customer you took their privacy concerns seriously and sufficiently disclosed your intentions?

Ultimately, you’re conducting testing to develop a better product and to connect with the people who most urgently need your information, products, or services. Still, having a clear ethical stance on testing, and on how the resulting data will be handled, is an important step for any company considering conversion testing.

Taking the time to clarify your position and thinking through the implications of a testing plan will help save you from embarrassing brand gaffes and customer concerns down the road.

By | 2014-08-16T15:30:04+00:00 August 12th, 2014|Testing|5 Comments

About the Author:

I'm a tech entrepreneur who has been building, measuring and selling consumer and enterprise websites for years. Over the course of my career I've helped companies large and small increase revenue and engage customers as a manager, advisor and consultant.
  • Aline Johnson

    Wow. Thanks for tackling this tough topic. Dealing with the ethics of testing is a necessary, but somewhat sticky subject to manage. I haven’t seen this tackled from this angle before. It took guts to go there, and I’m glad you did. I’ll be asking my team to read this piece.

    • Thanks Aline, I appreciate the kind words. It would be great to hear their feedback!

  • Robert

Very thought provoking post. When I read the news coverage on Facebook, it definitely got me to stop and think. While most of the testing that we have done is in the realm of what you would probably say is covered by a basic user agreement, some of our pricing experiments have probably crossed the line. We actually had the issue you talk about with the fitness guy, where some of our customers saw different prices. It was embarrassing and took a lot of time to deal with and correct. Since then we have managed our pricing by just reducing prices or doing scheduled price increases. Do you have any other suggestions on how to manage price testing without hitting any nerves?

    • Thanks for reading and for your comment Robert!

In terms of price testing I almost always recommend *not* doing it in parallel. Especially in these days of multiple devices (where the same person could come back on another device and see a different price) it just raises too many issues. That being said, testing the effect of different prices can have a massive revenue impact, so it’s a really important lever to test. You’re probably best off doing it sequentially. I’d also recommend being deliberate about it: use historical data to guide you and make sure you’re measuring the effect across the entire funnel (including churn if you have a subscription business).

      Even if you’re cautious about it you should still be sensitive to your customers. If it’s a recurring business model you’ll probably have to grandfather in existing customers (for a price increase) or reduce it (decrease). And you may also have to accommodate some refunds if your margins allow it.

      Hope this helps!

