As a retailer, are you confused or even alarmed by some of the claims regarding site search speed? You may have found that certain claims sound too good to be true. As a company in the site search business with a very fast product, we are amazed by some of the claims we see. Claims as low as 7ms to produce results are bound to confuse many retailers. A 7ms search speed claim is not necessarily wrong, but does it account for the full search experience?
This blog shares the results of my tests and explains how site search speed is accurately determined.
Retrieval vs. Rendering
[tweetable]What matters to merchants is the actual time it takes for a shopper to see search results[/tweetable].
There are at least three parameters that affect the time a search solution takes to display results.
- The speed of the shopper’s internet connection, which the merchant has no control over.
- The search provider’s cloud infrastructure. Providers can tune their infrastructure to maximize performance, including faster DNS lookups, latency-based or geolocation-based routing of search requests, and the time the provider’s server(s) take to prepare results to send back to the shopper.
- The amount of product information ultimately displayed on the shopper’s screen.
It is very important to note that [tweetable]the time a search engine takes to find the right products on a server for the shopper’s query is only a portion of the total search equation[/tweetable] (point 2 above). The time a server takes to prepare results depends heavily on the amount of information that must be retrieved. For example, it takes far less time to report the number of products matching a query than to fetch and deliver details such as name, price, URL, and other facets for each individual product (point 3 above). And ultimately, delivery of those results depends on the internet connection receiving the data (point 1 above).
In my investigation, I tested Ajax calls against several search tools (Klevu and others), and measured the same Ajax calls with Firebug, a browser tool that accurately and objectively reports network usage and query performance. The time Firebug measures includes preparing the query, firing it to the server, and collecting the results back in properly ranked order. On top of that, we must add the time to render (display) the results on the screen after they arrive on the shopper’s device. Rendering time is typically not part of the search speed reported by third-party search engines or by tools like Firebug.
For my tests, I typed just one letter and waited for the Ajax call to complete. Search speed as low as 7ms was reported by a third party, but Firebug reported that the time to complete the same Ajax call was 436ms. I fired more queries and noted down the numbers. The time taken by the Ajax calls as reported by Firebug varied from 75ms to 520ms.
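This kind of end-to-end measurement can be sketched in a few lines. The code below is a minimal illustration, not Klevu’s or Firebug’s actual mechanism: the names and the simulated latency are assumptions made for the example. It wraps the whole call in `performance.now()`, the way a browser-side measurement captures network transit plus server work, rather than server work alone.

```javascript
// Minimal sketch: time a search call end-to-end from the shopper's side.
async function timeSearch(searchFn, query) {
  const start = performance.now();
  const results = await searchFn(query);
  const elapsed = performance.now() - start; // full round-trip in ms
  return { results, elapsed };
}

// Hypothetical stand-in for a real Ajax search endpoint: the backend "reports"
// 7ms of processing, while setTimeout mimics ~50ms of network latency.
function simulatedSearch(query) {
  return new Promise((resolve) =>
    setTimeout(() => resolve({ serverTimeMs: 7, items: [query] }), 50)
  );
}

timeSearch(simulatedSearch, "blue sneakers").then(({ results, elapsed }) =>
  console.log(
    `server reported ${results.serverTimeMs}ms, round trip ${elapsed.toFixed(0)}ms`
  )
);
```

Run in a browser console or Node, the round-trip figure will always exceed the server-reported one, which is exactly the gap my Firebug numbers show.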
When we see vast differences like this, the variances in results are explained by what is actually being reported. The super-fast results of 7ms are based on the initial backend call only (i.e. the time the servers take to produce results), while the seemingly slower times also include the time taken to transmit the query to the server(s) and then deliver and render results on the shopper’s device.
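To picture the difference, the 436ms call can be broken into phases. The phase values below are invented for illustration; only the 7ms and 436ms figures come from my tests, and rendering time would still come on top of the total.

```javascript
// Illustrative breakdown of a single 436ms Ajax search call. The phase values
// are assumptions for the example; only "serverProcessing" corresponds to the
// kind of 7ms figure a backend benchmark reports. Rendering is excluded, as it
// is from Firebug's number too.
const phases = {
  dnsAndConnect: 60,   // network setup on the shopper's connection (point 1)
  requestTransit: 40,  // the query traveling to the server (point 1)
  serverProcessing: 7, // the headline backend number (point 2)
  resultDelivery: 329, // preparing and transmitting product details (points 2 and 3)
};
const totalMs = Object.values(phases).reduce((sum, ms) => sum + ms, 0);
console.log(`backend-only: ${phases.serverProcessing}ms, shopper waits: ${totalMs}ms`);
```

Under this breakdown the server’s own processing is under 2% of what the shopper actually waits for.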
As we can see, there is no single standard for calculating the speed of site search, and “site search speed” doesn’t mean the same thing to everyone.
[tweetable]Garbage Delivered Fast is Still Garbage[/tweetable]
Although every millisecond spent on retrieval and rendering is meaningful, relevancy must be added to the equation. Delivering irrelevant results is meaningless and irritating to the shopper, no matter how fast those results appear.
Popular search technologies like Solr and Elasticsearch, along with many other third-party tools, follow the garbage-in-garbage-out principle: they rely on the quality of the product catalog (i.e. the product data) to deliver results. If the catalog does not include the exact words shoppers use in their queries, these technologies can’t help. Shoppers will see “no results found,” or they will receive a list of products related to their query by only a single word. For example, if a shopper searches for “blue sneakers” but the word “sneakers” is not in the product catalog, the shopper will probably get a host of products with the word “blue” in the description, including many products unrelated to blue sneakers. Results like this leave shoppers frustrated and often cause them to abandon their search, and the site.
Klevu picks up where these technologies stop. Klevu uses natural language and self-learning search to find products. When a catalog is submitted to Klevu, we enrich the catalog by adding a vast library of contextually relevant terms from our knowledge base. Klevu also learns from shopper search behavior, constantly updating our knowledge base with this data. Because of this, with Klevu, shoppers are able to use words they are comfortable with, and the results retrieved are relevant and accurate, regardless of whether there’s an exact match with data in the product catalog.
Speed + Relevancy = Sales
In a nutshell, speed is a very important factor for site search, but make sure quoted results represent the complete picture. Relevant and complete results are equally as important as speed, if not more so. [tweetable]Fast search operations coupled with relevant results mean higher conversions, average order value, and customer satisfaction[/tweetable].
*Images used in this blog are not mine. They are included to illustrate the underlying topic or context. Courtesy: Google Images.