Our 2+ year Mobile UX Study reveals that, since our first Mobile Study in 2013, the mobile user experience has generally improved for users.
Yet that isn’t to say that the mobile sites overall perform well for users. Testing also revealed both high-level issues users face, as well as more granular issues — all of which can, singly or in combination, lead to abandonment.
Indeed, during the 289 test user/site sessions we conducted for this study (think-aloud 1:1 moderated lab testing), the users encountered 2,597 mobile UX and usability issues — despite testing the mobile sites from leading online brands.
Our Mobile Benchmark Database contains 12,000+ mobile site elements that have been manually reviewed and scored by Baymard’s team of UX researchers (embedded below), with an additional 9,000+ best- and worst-practice examples from the top-grossing e-commerce sites in the US and Europe (performance verified).
In this article we’ll analyze this dataset to provide you with the current state of mobile UX, and outline 18 common design pitfalls and strategic oversights applicable to most mobile e-commerce sites.
For this analysis we’ve summarized the 12,000+ Mobile Usability Scores across 31 topics and plotted the 58 benchmarked mobile sites across these in the scatterplot above. Each dot, therefore, represents the summarized UX score of one site, across the guidelines within that respective topic of the mobile e-commerce experience.
The current mobile e-commerce UX performance is listed in the first row. The following rows are the UX performance breakdowns within 31 topics that constitute the overall mobile e-commerce performance.
The mobile e-commerce UX performance for the average top-grossing US and European e-commerce site is “mediocre”, with none of the 58 sites benchmarked having a “good” UX implementation and performance. This leaves nearly all sites in a tight cluster of 52% “mediocre” (or worse), and 48% “acceptable”.
That said, while there aren’t any standout performances, there are also very few “poor” or outright “broken” experiences.
In comparison, in most of the other e-commerce UX studies we’ve conducted at Baymard Institute, the average UX performance is likewise “mediocre”, but with a wider spread of variation and performance scores (see our overall UX benchmark).
The mobile UX benchmark shows that there’s ample room for improvements when looking within the specific topics of the mobile user experience — in particular the UX within Mobile Homepage, Mobile On-Site Search, Mobile Forms, and Mobile Sitewide Features & Elements.
These topics describe issues that many sites have, and also include some “missed opportunities” for the e-commerce industry as a whole.
Also, note that this is an analysis of the average performance across 58 top-grossing US and European e-commerce sites.
When analyzing a specific site there are nearly always a handful of critical UX issues, along with a larger collection of worthwhile improvements. This is the case even when we conduct UX audits for Fortune 500 companies.
In the following we’ll provide a more detailed walkthrough of the UX performance and competitive landscape within 4 topics of Mobile E-Commerce UX, along with “missed opportunities” to be extra alert to. As many issues identified on desktop sites tend to carry over to the mobile site as well, we will focus on issues observed in our research to be either unique or specific to the mobile experience, and point out additional platform agnostic areas of importance.
In particular, we’ll discuss 18 general pitfalls to be aware of for 4 of the 31 subtopics of Mobile UX: Mobile Homepage, Mobile On-Site Search, Mobile Forms, and Mobile Sitewide Features & Elements.
These subtopics were chosen as they are the most interesting or the most suitable for discussion in an article.
Within Mobile Homepage, the average site performs between “mediocre” and “acceptable”.
Behind that average lies a wide spread of performances, with 50% of sites rated either “mediocre” or “poor”.
In particular, there are 2 issues sites get wrong when it comes to the Mobile Homepage.
Placing ads in primary areas of the mobile homepage — an issue at 95% of mobile sites, compared to only 59% of desktop sites — causes overview, distraction, and interaction issues. On mobile sites, these issues are worse than on desktop, as mobile users must navigate within the much smaller mobile viewport.
Therefore, if deciding to include ads on the homepage on a mobile site (which of course isn’t required), ensure the ads aren’t overly prominent (e.g., don’t take up more than half of the vertical space of the viewport) or too visually distracting (e.g., opt for simpler imagery and a line or two of text — avoid animations or other highly distracting content), and minimize the chance for interaction issues (e.g., by avoiding overlays that users must tap to close).
During testing, some users who tapped on featured paths on the homepage wound up in much narrower scopes than they had intended — and some never realized it. (Note: “scope” here refers to a featured subcategory or promoted filter — for example, “Women’s Shirts” or “Women’s New Arrivals”.) This led directly to their decision to abandon the site, as they assumed the site simply didn’t have a wide selection of products or the specific products they were looking for.
On mobile sites users are at risk of developing tunnel vision, as the small viewport limits their ability to gain an overview of their current location. While promoting scopes on the homepage can help many users access highly relevant product lists, a subgroup of users will completely lose all context of where they are in the site hierarchy if the scope links are unclear, leading them to severely misinterpret what products are actually available.
To mitigate this issue, there are two options:
The average mobile site performs “mediocrely” when it comes to Mobile On-Site Search, with 34% having outright “broken” experiences, and only 40% rating “acceptably” or higher.
Though not radically different from the desktop performance, the heavier mobile trend towards “poor” and “broken” indicates that some search features and tools are being copied over from desktop sites without special consideration, or are being removed at the expense of usability.
While Search Query Types make up the backbone of the on-site search experience, the mobile findings in that subarea are nearly identical to the desktop experience, as the search engine is unlikely to deliver different results on the two platforms.
However, some unique differences and issues exist elsewhere within Mobile Search.
In particular, there are 8 issues sites get wrong when it comes to Mobile On-Site Search.
During testing, nearly all users relied on the guidance of autocomplete suggestions at some point when devising queries, but those suggestions often failed users if queries contained even the slightest spelling error (e.g., searching “furnture” instead of “furniture”).
Search queries typed with less than 100% accuracy were common during mobile testing, and mobile keyboard use was observed to be especially error-prone. Yet users’ misspelled queries were frequently met with autocomplete suggestions that were irrelevant or that disappeared once an error was detected, effectively removing the very guidance that the suggestions are intended to provide.
Since autocomplete plays a key role in early search interactions, unexpected suggestions due to minor typos can cause users to change their product-finding strategies by seeking other browsing methods or reworking queries, and, in the worst cases, contribute to abandonment downstream if alternate product-finding strategies don’t quickly lead to relevant results.
Since spelling errors in search queries do occur with significant frequency, autocomplete’s relevance can be enhanced by mapping misspelled words to meaningful autocomplete suggestions. There are existing spell-check solutions (many of them freely available online), which means common misspellings should be relatively cheap to catch.
However, depending on the search engine and autocomplete implementation, it may not be feasible to integrate an off-the-shelf solution. Additionally, misspellings of brand names or highly specialized products may be difficult to catch. Depending on the search engine and autocomplete implementation, careful monitoring of autocomplete query logs or search logs, or both, should shed light on misspelled queries that users enter into the search field, which can be a good starting point for analysis and prioritization of improvement efforts for autocomplete spelling suggestions.
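To illustrate the idea, here is a minimal sketch of spelling-tolerant matching in JavaScript, using plain Levenshtein edit distance. The term list, distance threshold, and function names are illustrative assumptions; production search engines use far more sophisticated (and faster) approaches.

```javascript
// Sketch: spelling-tolerant autocomplete matching (illustrative only).
// Classic Levenshtein edit distance via dynamic programming.
function editDistance(a, b) {
  const dp = Array.from({ length: a.length + 1 }, (_, i) => [i]);
  for (let j = 1; j <= b.length; j++) dp[0][j] = j;
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,                                  // deletion
        dp[i][j - 1] + 1,                                  // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1) // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

// Return known terms within a small edit distance of the (possibly
// misspelled) query, closest matches first.
function suggest(query, knownTerms, maxDistance = 2) {
  const q = query.toLowerCase().trim();
  return knownTerms
    .map((term) => ({ term, dist: editDistance(q, term) }))
    .filter(({ dist }) => dist <= maxDistance)
    .sort((x, y) => x.dist - y.dist)
    .map(({ term }) => term);
}

// A misspelled query still surfaces the intended term:
// suggest("furnture", ["furniture", "furnace", "fixture"]) → ["furniture"]
```

In practice, a precomputed map of common misspellings (mined from the search logs mentioned above) is cheaper than computing distances at keystroke time.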
Applying a search category scope, such as a user seeking “Pots in Gardening” vs. “Pots in Kitchen”, is not a natural part of most users’ thought process — rather, they’re thinking of the type of product they want and trying to come up with terms that may prove well-suited for producing such results.
However, once users are exposed to category scope suggestions, they can be a useful way to preselect a narrower and more relevant list of products, instead of conducting a sitewide search.
Without category scope suggestions or when category scopes are not obvious, users can select sitewide query suggestions that span many categories and end up arriving at an overwhelming number of results.
The overall goal of category scope suggestions in autocomplete is to help users restrict searches to a smaller subset of relevant results in advance. When well implemented, category scope suggestions help users avoid having to wade through excessive and irrelevant results, ultimately saving time and helping them home in on the most relevant results more quickly.
During testing, it became clear that users expected any autocomplete option to be a query suggestion, especially if product suggestions weren’t uniquely styled and only represented with text.
On the tested sites that emphasized product suggestions, many users either rejected the autocomplete options as irrelevant or were disoriented after their selection sent them directly to a product details page.
When product suggestions aren’t visually distinct from query suggestions, it creates a jarring experience, since most users are expecting to explore products via the search results but instead wind up on a single product page.
Visually distinguishing autocomplete product suggestions from query suggestions is near-universal on desktop (97%), but the distinction is either missing or implemented incorrectly on 40% of mobile sites. This shows that downgrading the autocomplete tool to identically styled suggestions on mobile is a common strategy for sites — but a bad experience for mobile users.
During testing, users often went through multiple iterations of search queries (2.2 iterations on average). For example, a user might change an initial query of “dresses” to “red dresses” after scanning the search results.
Clearing mobile users’ search queries each time they are submitted (something only 31% of desktop sites do) makes the search feature cumbersome to use, as it requires users to retype their query from scratch for each iteration. As a result, it takes longer for users to find products or other content via search, and some users may be nudged into abandoning search as a product-finding strategy.
Moreover, mobile users already have to grapple with small tap targets and the numerous taps required to backspace-delete a word (or words) before typing any new characters — both intricate interactions. For users attempting to revise query text in the middle or at the beginning of the search field, tapping into the precise position in the field makes iterating a query even more tedious.
Further, these factors do not take into account the length of time it takes to type on mobile to begin with — which is typically longer than it takes on a physical keyboard. All of these aspects combined show the difficulties inherent in mobile typing.
On the other hand, during testing, when search queries were persisted, users made swift iterations by adding or removing a word or two from their original query, avoiding the “halt and retype” behavior that was necessary on sites that didn’t persist search queries. Moreover, persisting the query helps relieve some of the strain on providing “perfect” filtering options for any given search query (as users can rapidly iterate their query instead, and “filter by searching”).
Unless a site is narrowly targeted at users with a very high level of domain knowledge, many users will often use terminology that differs from the site’s. Obviously, the search engine being able to handle synonyms is a great start, yet there are cases where synonyms tend to be insufficient since the user’s terms are approximations or the user is searching for neighboring concepts.
Without clear suggestions for related and adjacent queries, some users will miss out on relevant search results and fail to find a suitable product.
On the other hand, testing reveals that exposing users to alternate queries, which are relevant to their original search but broad enough in scope to return quality results, gives users who might otherwise reach an impasse valid and reliable options to explore.
Alternate queries may point the user toward another (related) set of products, recommend the removal of a model name or brand from an overly specific query, or suggest searches for associated and compatible products.
In all cases, alternate queries help users recover from suboptimal search results by shifting or broadening the scope of their search.
When users search for products, they frequently query on terms that either directly map or strongly relate to a scope or category — for example, searching for “laptops” at an electronics site with a “Laptops” subcategory.
Category-specific pages and results often feature benefits that standard search results pages lack, including clear subcategory navigation, contextual product filters, and links to relevant content such as product guides or finders.
However, on many sites, search users experience a different, subpar product results listing compared to users who navigate to the same scope or category using the global navigation.
Autodirecting users to categories or subcategories when there’s a 1:1 match with the user’s query, or guiding users on the results page to likely relevant categories or subcategories if there isn’t, will in the end make it easier for users to navigate search results and find products they’re looking for.
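As a sketch of the autodirect logic described above, assuming a hypothetical hand-maintained mapping of queries to category URLs (real implementations would typically derive this mapping from the category tree and search logs):

```javascript
// Sketch: route queries with a 1:1 category match straight to the
// category page; otherwise fall back to the standard results page.
// The mapping and URLs below are hypothetical examples.
const categoryMap = {
  laptops: "/electronics/laptops",
  laptop: "/electronics/laptops", // singular/plural map to the same scope
  headphones: "/electronics/headphones",
};

function resolveSearch(query) {
  const normalized = query.toLowerCase().trim();
  if (normalized in categoryMap) {
    // Exact match: autodirect to the category page, which offers
    // subcategory navigation and contextual filters.
    return { redirect: categoryMap[normalized] };
  }
  // No exact match: show regular results (where likely relevant
  // categories can still be suggested alongside the product list).
  return {
    redirect: null,
    resultsUrl: "/search?q=" + encodeURIComponent(normalized),
  };
}
```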
The average Mobile Forms performance is “poor”, with only 22% of sites performing “acceptably” or higher (with no “state of the art” examples), and 33% of sites being outright “broken”.
This indicates how much friction occurs whenever users have to fill out information, and how error-prone the mobile experience is. Thus, it is crucial that every field is necessary and optimized for user interaction, and that, when issues occur, the feedback provided to users is clear.
In particular, there are 6 issues sites get wrong when it comes to Mobile Forms.
When users are filling out forms, the mobile keyboard severely limits the actual viewable area.
For instance, on an iPhone 11 Pro Max in portrait mode, only 56% of the screen is left for the form the user is filling out. This limits users’ overview of forms and can make it challenging to retain a sense of context while checking out, especially when the form is very long and has poor labeling of sections and fields.
Furthermore, during testing we observed a subgroup of users who preferred to use landscape mode to fill out forms, as the keyboard in landscape mode is significantly larger than the keyboard in portrait mode — for example, on an iPhone 11 Pro Max the hit area of the numeral input keys on the numeric keyboard is roughly 4 times as large as on the standard touch keyboard (390 × 147 px vs. 105 × 137 px). Yet in landscape mode even less of the screen is available to view the form — the amount of the form that’s visible decreases by 73% compared to portrait mode (on an iPhone 11 Pro Max, with the browser visible as well). It’s thus even harder for users to retain an overview of the form when filling it out in landscape mode — which can lead to severe misinterpretations and the input of incorrect data.
This issue of loss of context has been persistent since our first rounds of mobile testing back in 2013.
To solve the issue of the field label being out of view, sites should dynamically change the preferred label position from above the field in portrait mode to the left of the field in landscape mode (that is, placing the label on the same line as the field).
This gives users a little more overview of the form, which can allow, for example, an additional field or a section header to remain visible. This additional bit of context can be crucial for users and reduce the number of errors and incorrect inputs they make.
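A minimal CSS sketch of this orientation-dependent label placement, assuming a hypothetical .form-row wrapper around each label/field pair (the class names are illustrative, not from Baymard):

```css
/* Sketch: label above the field in portrait, beside it in landscape. */
.form-row {
  display: flex;
  flex-direction: column; /* portrait: label stacks above the field */
}

@media (orientation: landscape) {
  .form-row {
    flex-direction: row; /* landscape: label on the same line as the field */
    align-items: center;
  }
  .form-row > label {
    flex: 0 0 30%; /* reserve a share of the row for the label */
  }
}
```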
During checkout testing, users experiencing errors was a common occurrence.
While errors are more or less inevitable for at least some of a site’s users, what’s key is the user’s error-recovery experience.
The first step in users being able to resolve an error and proceed with their purchase is understanding that an error has even occurred, and what input or inputs caused the error. Requiring users to hunt down the fields themselves not only leads to user frustration but was also observed to lead to checkout abandonment, as users were unable to resolve the errors.
Notably, only 24% of desktop sites get this wrong, compared to 69% of mobile sites, where it’s even more difficult to recover from errors due to the inherent issues of shopping on mobile devices.
Our testing has revealed some very consistent patterns for how well-performing error messages should be positioned and styled.
Under all circumstances, the incorrect field must always be marked up, typically by using red field borders, a red field background color, or red arrows. This will immediately grab a user’s attention, and is the conventional styling for erroneous form fields. Additionally, the error message must always be displayed right next to the erroneous field to allow users to understand what went wrong and how to correct it.
However, the exact implementation depends on how many errors there are on the page. If there is only one error on a page, autoscroll can be utilized in order to present the error to the user right within their viewport.
But when there are multiple errors on pages taller than one viewport, it becomes a little more complicated. We see during testing that simply scrolling users to the first error performs poorly, as it makes them likely to overlook the subsequent errors. A better-performing technique is to take users to the top of the page and inject an error statement outlining the multiple errors that have been detected, and potentially what they are. This is of course in addition to then highlighting each of the fields throughout the page, each with their own unique error message.
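The decision logic described above can be sketched as a small pure function; the action names and summary wording are illustrative assumptions, and the actual scrolling and DOM updates are left out:

```javascript
// Sketch: choose an error-presentation strategy based on the number
// of invalid fields. Action names and wording are illustrative.
function errorPresentationPlan(invalidFields) {
  if (invalidFields.length === 0) {
    return { action: "none" };
  }
  if (invalidFields.length === 1) {
    // Single error: autoscroll the field into the viewport, with the
    // error message displayed right next to the field itself.
    return { action: "scroll-to-field", field: invalidFields[0] };
  }
  // Multiple errors: send the user to the top of the page with a
  // summary statement, in addition to marking up each field inline.
  return {
    action: "summary-at-top",
    summary: invalidFields.length + " fields need your attention",
    fields: invalidFields,
  };
}
```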
Inaccurate addresses cause multiple, cascading issues.
Users may have problems receiving their orders, don’t receive them at all, or don’t receive them on time.
Sites have to provide extensive customer support when there are delivery issues, and often face broken customer experiences and the consequent negative site reviews — with the end result often being a lost sale due to a returned or undelivered order.
An address validator functions by querying an address database (e.g., the USPS’s) to ensure that the address the user typed matches the address the postal service has on file. While not perfect, address validators do allow sites to perform a quick check of a user’s typed address.
Note that while the low frequency of the issue makes an address validator a less-crucial feature for desktop e-commerce sites, mobile sites will always require an address validator.
During mobile testing we’ve found that, due to keyboard autocorrect and the difficulty of typing on small touch keyboards, users make errors far more frequently when entering their address on mobile devices. Furthermore, users on mobile devices have more difficulty noticing errors due to the lack of page overview caused by the small screen.
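As a simplified illustration of what a validator’s comparison step might involve, the sketch below normalizes common street-suffix abbreviations before comparing a typed line against a canonical record. The abbreviation table is a tiny hypothetical subset; real validators (e.g., ones backed by USPS data) handle vastly more variation:

```javascript
// Sketch: normalize-and-compare step of a (hypothetical) address
// validator. The abbreviation table is a tiny illustrative subset.
const ABBREVIATIONS = { st: "street", ave: "avenue", rd: "road" };

function normalizeAddressLine(line) {
  return line
    .toLowerCase()
    .replace(/[.,]/g, "") // drop punctuation
    .split(/\s+/) // split on whitespace
    .map((word) => ABBREVIATIONS[word] || word)
    .join(" ");
}

// Does the user's typed line match the canonical record on file?
function matchesCanonical(userLine, canonicalLine) {
  return normalizeAddressLine(userLine) === normalizeAddressLine(canonicalLine);
}
```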
Users are often intimidated when seeing a long page filled with form fields and selections.
During testing, many users felt overwhelmed when they were presented with a screen that displayed 10–15 form fields or more within the same viewport — a feeling that was observed to be exacerbated on mobile devices.
While checkout flows are by their nature form-heavy pages, smart form features and designs can greatly minimize how intimidating the checkout steps appear to users.
Ideally, an entire checkout flow can be as short as 7 form fields, 2 checkboxes, 2 drop-downs, and 1 radio button interface, shown by default. Shorter checkout forms are less intimidating, and therefore users are more likely to complete them.
What information users are required to provide during a checkout flow is highly inconsistent across sites.
For example, some sites require users’ phone numbers, others don’t. Some require a cardholder name, others don’t, etc.
Therefore, most users have few preset expectations on what information is required and what may be optional.
Explicitly marking both required and optional fields provides users with the information they need to quickly move through a form.
By explicitly denoting both optional and required fields, the user isn’t forced to infer anything and can stay focused on just the field they are filling out.
Users are consequently able to progress more efficiently through the entire form, field by field, as they don’t have to perform any back and forth scanning of previous fields.
By changing an attribute or two in the code of the input fields, you can instruct a user’s phone to automatically show a specific type of keyboard that is optimized for the requested input.
For example, you can invoke a numeric keyboard for the credit card field, a phone keyboard for the user’s phone number, and an email keyboard for their email address.
This saves the user from having to switch from the standard keyboard layout and, in the case of numeric inputs, minimizes typos, as these specialized keyboards have much larger keys that reduce the chance of accidental taps.
Technically there are a few different ways to invoke the numeric keyboard layouts, and there are also slight distinctions between those keyboard layouts, with slightly different behaviors across platforms (iOS, Android, etc.).
In general, there are two HTML attributes that will invoke numeric keyboard layouts, namely inputmode and pattern.
For example, for any numeric field use:
<input type="text" inputmode="decimal" pattern="[0-9]*" autocorrect="off" />
For a complete list of field and code combinations for all field types commonly found in a checkout flow, see baymard.com/labs/touch-keyboard-types.
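As a quick sketch of the other keyboard types mentioned earlier (phone and email), the standard input types below typically suffice; exact keyboard layouts vary across platforms and browsers, and the field names are illustrative:

```html
<!-- Sketch: input types that commonly invoke specialized keyboards;
     exact layouts vary across platforms and browsers. -->
<input type="tel" name="phone">                    <!-- phone keypad -->
<input type="email" name="email">                  <!-- email keyboard with @ key -->
<input type="text" inputmode="numeric" name="zip"> <!-- numeric keyboard -->
```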
The average site’s Mobile Sitewide Features and Elements performance is “acceptable”, with a wide spread and 57% of sites performing “acceptably” or higher.
That said, there are some common mobile-specific issues observed on a majority of sites that warrant attention.
In particular, there are 4 issues sites get wrong when it comes to Mobile Sitewide Features and Elements.
When there are no load indicators, users on mobile devices tend to very quickly assume that whatever action they’ve just attempted (e.g., tapping on a list item in the product list) was not registered, and they tap again — which often leads to inadvertently tapping other content, or simply restarts the loading process.
Consequences of not having load indicators can be minor — users tend to quickly recover after, for example, tapping on unintended content — but they are cumulative.
During testing, users were observed to have multiple issues related to missing load indicators while on the same site — a consistent drag throughout the user’s entire browsing experience.
Therefore, always provide high-contrast load indicators whenever new content is loading.
Moreover, to ensure load indicators perform well, it’s also important to display the indicator immediately after a user’s action (within 1 second), use a conventional design, and update the indicator (e.g., with new messaging) if loading takes longer than 10 seconds.
The issue of spacing between tappable elements is closely related to the size of tappable elements (see below).
Both issues often combine to make it difficult for the end user to reliably navigate the mobile interface (yet it should be noted that the two issues of sizing and spacing are unique and can occur separately).
In short, inadequate spacing between elements will lead to mistaps, unintended detours, and even abandonments. Furthermore, inadequate spacing has been a persistent issue during all our mobile testing; it has been observed extensively ever since our first mobile testing in 2013, and remains an issue even today.
So what is adequate spacing? Some device manufacturers’ design guidelines stipulate a minimum spacing of 2 mm, and Baymard’s testing supports the same general recommendation.
However, in cases where the consequences of unintentionally tapping an element due to spacing issues are graver, spacing should be much higher (~10 mm). Finally, elements should never be placed right at the very edge of the screen, as that area is typically unresponsive, and users will thus have difficulty selecting those elements.
Despite hit area sizing being a “basic” for mobile design, we time and again observe sites implementing elements and links with overly small hit areas.
Again, just as is the case with inadequate spacing of elements, the disruption to the user can range from mild annoyance at having to tap multiple times before they hit the right spot, to severe frustration and abandonment if they mistap and end up in another area of the site or lose data (e.g., during checkout).
Yet the solution to these issues is straightforward: ensure a hit area of at least 7 × 7 mm (measured on the smartphone display).
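A minimal CSS sketch of these sizing and spacing minimums; note that CSS mm units only approximate physical size on most screens, so verifying on real devices remains important (the selectors are illustrative):

```css
/* Sketch: minimum tap target size and spacing. CSS mm units only
   approximate physical size on screens; verify on real devices. */
.tap-target {
  min-width: 7mm;  /* at least a 7 x 7 mm hit area on the display */
  min-height: 7mm;
}
.tap-target + .tap-target {
  margin-top: 2mm; /* at least 2 mm between adjacent tappable elements */
}
```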
In testing, we often observe that users need information about the site’s return policy or shipping options before they are able to make a purchase decision.
While this information can (and should) be available through multiple paths — for example, via the site header, on the product page, or via sitewide search — testing revealed that a subgroup of users will consistently head to the footer when seeking such information.
When it’s not there, users must go “information hunting”, which, depending on how easy it is to find the information elsewhere on the site, results in an often substantial delay in product browsing.
More seriously, if users have difficulty quickly accessing basic returns and shipping information, some may reconsider whether they want to use the site to make their purchase.
Providing links to the return policy and shipping information in the footer is a small and simple implementation that can greatly help users seeking information specific to their order.
This high-level analysis of the current state of Mobile UX focuses on only 4 of the 31 Mobile subtopics included in our Benchmark Analysis. The 27 other subtopics should be reviewed as well to gain a comprehensive understanding of the current state of Mobile UX, and to identify additional site-specific issues not covered here.
Although our benchmark has revealed that no sites have a completely broken Mobile UX, it’s clear that there’s much room for improvement, as 52% of sites perform “mediocrely” or worse, while no sites have a “state of the art” Mobile experience.
Avoiding the 18 pitfalls described in this article is the first step toward improving users’ mobile experience.
For inspiration on other sites’ implementations and to see how they perform UX-wise, head to the publicly available part of the Mobile benchmark. Here you can browse the Mobile implementations of all 58 benchmarked sites.
For additional inspiration, consider clicking through the Mobile Page Designs, as these showcase Mobile implementations at the 58 benchmarked top-grossing US and European e-commerce sites and can be a good resource when considering a Mobile site redesign — both for what to emulate and for what to avoid.
This article presents the research findings from just 1 of the 580+ UX guidelines in Baymard Premium – get full access to learn how to create a “State of the Art” mobile e-commerce user experience.
© 2021 Baymard Institute