Checkout Usability Study Methodology

This study examines the user’s checkout experience. It specifically looks at how sites can improve the shopping cart and checkout flow to ensure fewer users abandon their purchase. The study covers areas such as the shopping cart, users’ privacy concerns, form field usability, gifting features, the flow and layout of checkout pages, third-party payments, validation errors, etc.

This study contains the research results from 7 man-years’ worth of large-scale e-commerce testing, dedicated solely to checkout flows, by the Baymard Institute. More specifically, it’s based on:

  • 2 rounds of 1:1 moderated qualitative usability testing (in lab) with a total of 272 test subject/site sessions following the “Think Aloud” protocol.
  • 1 large-scale eye-tracking study with 32 participants.
  • 2 rounds of checkout benchmarking of 850+ checkout steps.
  • 4 quantitative studies with a total of 6,052 participants.

Below, the methodology for each of the research methods is described in detail.

Usability Testing Methodology

One part of this research is based on two rounds of large-scale usability testing of 25 major e-commerce sites, with a total of 272 test subject/site sessions following the “Think Aloud” protocol. The usability study tasked real users with completing purchases for multiple different types of products, all the way from “shopping cart” to “completed sale”.

The sites tested in the two rounds of 1:1 qualitative think aloud test sessions were:

  • Round 1: Wayfair, ASOS, Walmart, Amazon, American Eagle Outfitters, Crate&Barrel, Overstock, and Home Depot.
  • Round 2: 1-800-Flowers, AllPosters, American Apparel, Amnesty Shop, Apple, HobbyTron, Levi’s, NewEgg, Nordstrom, Oakley, PetSmart, Thomann, and Zappos.

Each test subject tested 5-8 checkouts, depending on the type of task and how fast they were. Each subject’s test session lasted approximately 1 hour, and the subjects were allowed breaks between each site tested.

During the test sessions the subjects experienced 2,700+ usability issues specifically related to the checkout flow and design. All of these findings have been analyzed and distilled into 134 specific usability guidelines on how to design and structure the best-performing checkout flow possible.

Since there will always be contextual site differences, the aim of this study is not to arrive at statistical conclusions about whether 61% or 62.3% of your users will encounter a specific issue. The aim is rather to examine the full breadth of users’ checkout behavior and present the issues that are most likely to cause checkout abandonment. Just as importantly, it presents the solutions and checkout design patterns that were verified during testing to produce a high-performing checkout flow.

Eye-Tracking Testing

The eye-tracking study included 32 participants using a Tobii eye-tracker, with a moderator present in the lab during the test sessions (for task- and technical-related questions only). Each session took approximately 20-30 minutes. All eye-tracking test subjects tested 4 sites: Cabela’s, REI, L.L. Bean, and AllPosters. The eye-tracking test sessions started by placing the test subjects on a product listing page and asking them to, for example, “find a pair of shoes you like in this list and buy it”.

The eye-tracking subjects were given the option to use either their personal information or a made-up identity handed to them on a slip of paper. Most opted for the made-up identity. Any personal information has been edited out of the screenshots used in this report or replaced with dummy data. The compensation given was up to $50 in cash.

Benchmarking Methodology

The other part of this research study is a comprehensive usability benchmark. Using the 134 checkout usability guidelines from the large-scale usability tests as the review heuristics and scoring parameters, we’ve subsequently benchmarked the checkout flows of 50 top-grossing US e-commerce sites. This has resulted in a benchmark database with more than 6,400 checkout usability parameters manually reviewed, 5,100+ additional examples for the 134 guidelines, and 380 checkout page examples from top retailers, each annotated with review notes.

The total UX performance score assigned to each benchmarked site is essentially an expression of how good (or bad) a checkout user experience a first-time user will have at the e-commerce site, based on the 134 guidelines documented in the Checkout Usability report.

The specific score is calculated using a weighted multi-parameter algorithm. Below is a brief description of the main elements in the algorithm (an illustrative sketch of the calculation follows the list):

  • An individual guideline weight: A combination of the Severity of violating a specific guideline (either Harmful (worst), Disruptive or Interruption, as defined in the usability report), and the Frequency of occurrence of the specific guideline (i.e. how often the test subjects experienced it during the usability study).
  • A Rating describing to which degree a specific site adheres to each guideline (Adhered High, Adhered Low, Neutral, Issue resolved, Violated Low, Violated High, N/A).
  • The scores are summed across all guidelines, and then divided by the total number of applicable guidelines (to ensure “N/A” ratings do not influence the score).
  • The Highlights marked on the site screenshots are specific examples that the reviewer judged to be of interest to the reader. It’s the site’s overall adherence to or violation of a guideline that is used to calculate the site’s usability score. Thus, you may find a specific Highlight showing an example of how a site adheres to a guideline even though that same site is scored as violating the guideline (typically because the site violates the guideline on another page), and vice versa.
  • Lastly, the score is normalized with a fixed multiplier for the benchmark study. The normalization is based on a “state of the art” implementation equaling a total UX score of 1,000. This normalization enables year-over-year and cross-study comparison.
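To make the scoring mechanics concrete, here is a minimal sketch of how a weighted multi-parameter score of this kind could be computed. The severity weights, rating multipliers, and scoring function below are illustrative assumptions, not Baymard’s actual values or implementation; only the structure (per-guideline weight from severity and frequency, multiplied by an adherence rating, averaged over applicable guidelines, then normalized against a “state of the art” maximum of 1,000) follows the description above.

```python
# Illustrative sketch only: all weights, multipliers, and names are hypothetical.
from dataclasses import dataclass

# Hypothetical severity weights (higher = more impact on the score).
SEVERITY_WEIGHT = {"Harmful": 3.0, "Disruptive": 2.0, "Interruption": 1.0}

# Hypothetical multipliers for each adherence rating; "N/A" is excluded entirely.
RATING_MULTIPLIER = {
    "Adhered High": 1.0,
    "Adhered Low": 0.5,
    "Neutral": 0.0,
    "Issue resolved": 0.25,
    "Violated Low": -0.5,
    "Violated High": -1.0,
}

@dataclass
class GuidelineReview:
    severity: str      # "Harmful", "Disruptive", or "Interruption"
    frequency: float   # 0..1, how often test subjects encountered the issue
    rating: str        # one of RATING_MULTIPLIER's keys, or "N/A"

def site_ux_score(reviews: list[GuidelineReview], normalization: float = 1000.0) -> float:
    """Weighted average over applicable guidelines, scaled so a hypothetical
    'state of the art' site (all guidelines 'Adhered High') scores 1,000."""
    weighted_sum = 0.0
    best_possible = 0.0
    applicable = 0
    for r in reviews:
        if r.rating == "N/A":  # "N/A" guidelines do not influence the score
            continue
        weight = SEVERITY_WEIGHT[r.severity] * r.frequency  # per-guideline weight
        weighted_sum += weight * RATING_MULTIPLIER[r.rating]
        best_possible += weight * RATING_MULTIPLIER["Adhered High"]
        applicable += 1
    if applicable == 0:
        return 0.0
    # Average over applicable guidelines, then normalize against the
    # "state of the art" average so scores are comparable across studies.
    return (weighted_sum / applicable) / (best_possible / applicable) * normalization

# Example: one adhered-to guideline and one minor violation.
reviews = [
    GuidelineReview("Harmful", 0.4, "Adhered High"),
    GuidelineReview("Interruption", 0.1, "Violated Low"),
]
print(round(site_ux_score(reviews)))  # ~885 under these assumed weights
```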

All site reviews were conducted by Baymard employees in Q2 2016, using a US-based IP address. In cases where multiple local or language versions of a site existed, the US version was used for the benchmark.

All reviews were conducted as a new customer would experience them; hence, no existing accounts or browsing history were used. The shortest path through the checkout (e.g., the “guest checkout” option) was always the one benchmarked. The documented and benchmarked designs at each site were: the shopping cart, account selection, shipping address, shipping methods, billing address, payment methods, order review, and order confirmation steps, along with the order confirmation email, gifting flows, and validation error experiences.

Quantitative Studies

The quantitative study component takes the form of 4 quantitative studies. The studies sought answers on:

  • Reasons for checkout abandonment, privacy concerns, and CAPTCHA error rates (2 studies, 2,549 participants; US participants recruited to match approximate US internet demographics, recruited and incentivized through Google Consumer Insights and SurveyMonkey Audience).
  • Testing site seal and SSL logo trust levels (2,510 participants; US participants recruited to match approximate US internet demographics, recruited and incentivized through Google Consumer Insights).
  • A/B Testing two different versions of ‘free shipping tiers’ designs (993 participants split into two groups, US participants recruited to match approximate US internet demographics, recruited and incentivized through Google Consumer Insights).

Besides these 4 main sources, select test observations and sessions from our other usability studies are included, primarily from Baymard’s Mobile E-Commerce Usability study.

Baymard Institute provides this information “as is”. It is based on the reviewers’ subjective judgment of each site at the time of testing and in relation to the documented guidelines. Baymard Institute cannot be held responsible for any kind of usage or correctness of the provided information.

The screenshots used may contain images and artwork that are copyright- and trademark-protected by their respective owners. Baymard Institute does not claim ownership of the artwork that might be featured within these screenshots, and solely captures and stores the website screenshots in order to provide constructive review and feedback on the topic of web design and web usability.

Citations, images, and paraphrasing may only be published elsewhere to a limited extent, and only if crediting the “Checkout Usability study by Baymard Institute”.
