This study examines the user’s checkout experience. It specifically looks at how sites can improve the shopping cart and checkout flow to ensure fewer users abandon their purchase. The study covers areas such as the shopping cart, users’ privacy concerns, form field usability, gifting features, the flow and layout of checkout pages, third-party payments, and validation errors.
This study contains the research results from 7 man-years’ worth of large-scale e-commerce testing of checkout flows only, conducted by the Baymard Institute. More specifically, it’s based on:
Below, the methodology for each of the research methods is described in detail.
To purchase access to the Checkout Usability Report & Benchmark go to: baymard.com/checkout-usability
One part of this research is based on two rounds of large-scale usability testing of 25 major e-commerce sites, with a total of 272 test subject/site sessions following the “Think Aloud” protocol. The usability study tasked real users with completing purchases of multiple different types of products – all the way from “shopping cart” to “completed sale”.
The sites tested in the two rounds of 1:1 qualitative think aloud test sessions were:
Each test subject tested 5–8 checkouts, depending on the type of task and how quickly they worked. Each subject’s test session lasted approximately 1 hour, and the subjects were allowed breaks between each site tested.
During the test sessions the subjects experienced 2,700+ usability issues specifically related to the checkout flow and design. All of these findings have been analyzed and distilled into 134 specific usability guidelines on how to design and structure the best-performing checkout flow possible.
Since there will always be contextual site differences, the aim of this study is not to arrive at statistical conclusions on whether 61% or 62.3% of your users will encounter a specific issue. The aim is rather to examine the full breadth of the user’s checkout behavior and present the issues which are most likely to cause checkout abandonments. Equally important, it presents the solutions and checkout design patterns that, during testing, were verified to produce a high-performing checkout flow.
The eye-tracking study included 32 participants using a Tobii eye-tracker, with a moderator present in the lab during the test sessions (for task- and technical-related questions only), which took approximately 20–30 minutes each. All eye-tracking test subjects tested 4 sites: Cabelas, REI, L.L. Bean, and AllPosters. The eye-tracking test sessions started by placing the test subjects on a product listing page and asking them to, for example, “find a pair of shoes you like in this list and buy it”.
The eye-tracking subjects were given the option to use either their personal information or a made-up ID handed to them on a slip of paper. Most opted for the made-up ID. Any personal information has been edited out of the screenshots used in this report or replaced with dummy data. The compensation given was up to $50 in cash.
The other part of this research study is a comprehensive usability benchmark. Using the 134 checkout usability guidelines from the large-scale usability tests as the review heuristics and scoring parameters, we’ve subsequently benchmarked the checkout flow at 50 top grossing US e-commerce sites. This has resulted in a benchmark database with more than 6,400 checkout usability parameters manually reviewed, 5,100+ additional examples for the 134 guidelines, and 380 checkout page examples from top retailers, each annotated with review notes.
The total UX performance score assigned to each benchmarked site is essentially an expression of how good (or bad) a checkout user experience a first-time user will have at the e-commerce site – based on the 134 guidelines documented in the Checkout Usability report.
The specific score is calculated using a weighted multi-parameter algorithm:
Below is a brief description of the main elements in the algorithm:
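To illustrate the general idea of a weighted multi-parameter score, the sketch below shows one common way such a score can be computed: each guideline receives a compliance rating, each rating is multiplied by an importance weight, and the weighted sum is normalized to a 0–100 scale. The guideline names, rating scale, and weights here are purely illustrative assumptions for explanation, not Baymard’s actual algorithm or parameters.

```python
# Hypothetical sketch of a weighted multi-parameter UX score.
# Guideline names, the [-1, 1] rating scale, and the weights below are
# illustrative assumptions -- not Baymard's actual scoring algorithm.

def checkout_ux_score(ratings, weights):
    """Compute a weighted score on a 0-100 scale.

    ratings: dict mapping guideline id -> compliance rating in [-1.0, 1.0]
             (-1 = severe violation, 0 = neutral, +1 = full adherence)
    weights: dict mapping guideline id -> importance weight (> 0)
    """
    total_weight = sum(weights[g] for g in ratings)
    if total_weight == 0:
        return 0.0
    weighted_sum = sum(ratings[g] * weights[g] for g in ratings)
    # Map the weighted mean from [-1, 1] onto [0, 100]
    return round(50 * (1 + weighted_sum / total_weight), 1)

# Example: three hypothetical guidelines with differing importance
ratings = {"guest_checkout": 1.0, "inline_validation": -1.0, "address_autocomplete": 0.0}
weights = {"guest_checkout": 3.0, "inline_validation": 2.0, "address_autocomplete": 1.0}
print(checkout_ux_score(ratings, weights))  # prints 58.3
```

In a scheme like this, violating a heavily weighted guideline (e.g. forcing account creation) drags the total score down far more than violating a minor one, which matches the intuition that not all usability issues are equally likely to cause abandonment.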
All site reviews were conducted by Baymard employees in Q2 2016, using a US-based IP address. In cases where multiple local or language versions of a site existed, the US site version was used for the benchmark.
All reviews were conducted as a new customer would experience them; hence no existing accounts or browsing history were used. The shortest path through the checkout was always the one benchmarked (e.g. the “guest checkout” option). The designs documented and benchmarked at each site were: shopping cart, account selection, shipping address, shipping methods, billing address, payment methods, order review, and order confirmation steps, along with the order confirmation email, gifting flows, and validation error experiences.
The quantitative study component takes the form of 4 quantitative studies or tests. The studies each sought answers to:
Besides these 4 main sources, select test observations and sessions from our other usability studies are included, primarily from Baymard’s Mobile E-Commerce Usability study.
Baymard Institute provides this information “as is”. It is based on the reviewers’ subjective judgment of each site at the time of testing and in relation to the documented guidelines. Baymard Institute cannot be held responsible for any use of the provided information or for its correctness.
The screenshots used may contain images and artwork that are copyright- and trademark-protected by their respective owners. Baymard Institute does not claim ownership of the artwork that might be featured within these screenshots, and solely captures and stores the website screenshots in order to provide constructive review and feedback within the topic of web design and web usability.
Citations, images, and paraphrasing may only be published elsewhere to a limited extent, and only when crediting “Checkout Usability study by Baymard Institute, baymard.com/research/checkout-usability”.