The debate was about the best color for a toolbar on a webpage. The design team favored a particular shade of blue, while the product manager advocated for a greener hue. Both parties held strong opinions. Who gets to decide? Was the choice right? And does it really matter anyway?
Decisions like this are often made based on diplomacy, authority, or opinion. The debate recounted above is an oft-retold tale from Google, and the story has endured because the team eventually tested 41 gradations of blue to see which users preferred. Why? It's about more than usability or user experience. Whether or not a design choice leads to clicks can have a direct impact on a revenue stream. Companies like Google know the importance of running experiments like A/B tests to determine the right approach with data, not an opinion or a guess.
Whether the goal is to improve a landing page or a call-to-action button, A/B testing is one of the best ways for UX teams and marketers to make incremental improvements over time. A well-designed A/B test can help the team decide between two buttons, two fonts, or even two microsites. A/B tests tell us what is actually working and what isn't, rather than merely what has the potential to work. In short, the results of A/B tests lead to informed decisions based on data, not just opinions.
So what holds some people back from doing it? Misconceptions about its complexity, fear of number crunching, or simply not knowing what can be tested are common roadblocks to informative experimentation. In this article, I'll demystify A/B testing and walk through the basic steps for getting started with simple tests.