Serving autocomplete suggestions fast!

Posted on October 9, 2013 in Latest Articles

Autocomplete (also known as live suggestions or search suggestions) is very popular with search applications. It is generally used to return either query suggestions (à la Google Autocomplete) or to propose existing search results (à la Facebook).

Open source search platforms like Solr and Elasticsearch support this feature. Both allow for implementing autocomplete using edge n-grams. N-grams are subsets of words. Edge n-grams are subsets from one edge of a word (generally the beginning). For example, the edge n-grams of the word search are s, se, sea, sear, searc and search. The general principle when using them to support autocomplete is to index all those n-grams in the search index with the original word stored as is. When the user starts typing a word, we send what is typed so far as a query to the index containing the n-grams. For example, if we send the query sea, the index should return the word search as sea is an n-gram of this stored word.
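To make this concrete, here is a sketch of what an edge n-gram field type could look like in a Solr schema (the field type name and the gram sizes are illustrative choices, not taken from a particular tutorial):

```xml
<!-- Hypothetical autocomplete field type. N-grams are generated at index
     time only, so the user's partial input is matched against them as is. -->
<fieldType name="text_autocomplete" class="solr.TextField">
  <analyzer type="index">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <!-- "search" becomes s, se, sea, sear, searc, search -->
    <filter class="solr.EdgeNGramFilterFactory" minGramSize="1" maxGramSize="25"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```

A field using this type will match the query sea against a document containing search, which is exactly the behavior we want for autocomplete.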

This technique is very flexible as it allows for easy implementation of interesting functionalities, such as fuzzy search and infix matching. You can already find detailed instructions on how to implement the edge n-grams technique around the Web. For example, you can read this popular tutorial by Jay Hill to implement it in Solr, and this one by Jon Tai for Elasticsearch.

The best we can do?

The main drawback of this technique is that it uses an index to hold the n-grams. A Lucene index is not ideal for this task, as it has to look through a lot of terms before retrieving the n-grams. As the index grows bigger (all those n-grams add up!), we might experience performance issues. Users expect autocomplete to be fast!

A better approach is to have a dedicated, optimized structure to provide those autocomplete suggestions from a given input. This is what Lucene provides in the suggest module. At its core is the Lookup abstract class, which exposes a lookup method to return suggestions from an input. In this module, Lucene provides several classes extending Lookup. Those classes use one of two structures to hold the suggestions: a ternary search tree (TST) or a finite state transducer (FST). But all those classes share an interesting characteristic: the structure is held entirely in memory, making the lookup very fast!

To build the structure, we have to set a source for suggestions. In general, this will be either a simple text file or the indexed terms of a field from a Lucene index. We then call the build method to clear the in-memory structure and populate it with all the items from the source. One shortcoming of this design is that there is no way to add a new item to the structure without rebuilding it completely. But there is a mechanism to store the in-memory structure to disk for fast reloading.

As said previously, there are several Lookup classes available using either a TST or a FST structure. One of these classes is very interesting: AnalyzingSuggester. This one uses an FST structure. Unlike the other Lookup classes, which only store the suggestions as is, AnalyzingSuggester allows us to use an analyzer to modify the suggestions before they are stored in the structure (the un-analyzed form is stored as well). The same analyzer is used when doing a lookup in the structure. Also, it returns suggestions in their original (un-analyzed) form. This gives us a lot of flexibility, similar to what we have when storing the suggestions in an index, but with potentially better performance. You can read more about AnalyzingSuggester in this post that describes some interesting usage.

Since AnalyzingSuggester was created, FuzzySuggester was added, which extends AnalyzingSuggester and adds the possibility of doing fuzzy matching of suggestions.

But most of the time, we don’t deal directly with Lucene classes. Let’s examine how these classes are used with Solr and Elasticsearch.

Solr

To use those Lookup classes for autocompletion, Solr offers the Suggester component. The Suggester is based on the SpellCheckComponent, but it can be used for more than spellchecking. It allows us to specify a Lookup class to provide the suggestions. Let’s see how to use the Suggester with the Lookup class AnalyzingSuggester.

First, we must configure a search component to use the Suggester:
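A minimal configuration could look like the following sketch (the suggestion file name and the storeDir value are placeholders to adapt to your setup):

```xml
<searchComponent class="solr.SpellCheckComponent" name="suggest">
  <lst name="spellchecker">
    <str name="name">suggest</str>
    <str name="classname">org.apache.solr.spelling.suggest.Suggester</str>
    <!-- factory that instantiates AnalyzingSuggester -->
    <str name="lookupImpl">org.apache.solr.spelling.suggest.fst.AnalyzingLookupFactory</str>
    <!-- analyzer applied to suggestions when building and looking up -->
    <str name="suggestAnalyzerFieldType">lowercase</str>
    <!-- analyzer applied to the spellcheck.q input before lookup -->
    <str name="fieldType">string</str>
    <!-- file-based dictionary of suggestions -->
    <str name="sourceLocation">suggestions.txt</str>
    <!-- persist the built structure under the index data directory -->
    <str name="storeDir">suggester</str>
  </lst>
</searchComponent>
```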

Here, we defined a Suggester component to use AnalyzingSuggester. Notice that we don’t directly specify AnalyzingSuggester, but instead a factory that Solr offers to instantiate it. In the early days of the Suggester, we could directly specify some Lookup classes, but for the latest ones, like AnalyzingSuggester, we must use the factory.

Because we are using AnalyzingSuggester, we must specify an analyzer that will be used when storing suggestions and doing lookups in the structure. We can define this analyzer using suggestAnalyzerFieldType. In my case, I use the lowercase analyzer, which simply lowercases each suggestion. But as explained, AnalyzingSuggester is really interesting when used with analyzers tailored to provide the best autocomplete experience to your users.

The storeDir setting allows you to specify a directory (within the data directory of the index), where the Lookup structure will be persisted every time it is built using spellcheck.build=true. When we reload the Solr index, the structure can be loaded back in memory from there. This is not mandatory. If we don’t use this setting, we will have to manually build the structure every time we reload the index (still with spellcheck.build=true).

As we will see later, we use spellcheck.q when querying Solr for suggestions. Whatever Lookup class is being used, the Suggester component applies the analyzer defined for fieldType to the input before passing it to the Lookup structure. When using a file-based dictionary (as we are in this example), if fieldType is not set, Solr defaults to splitting on whitespace, which might not be what you want. For example, if we send the input new y, the default behavior would be to split it into the tokens new and y and send each of them separately to the Lookup structure, thus returning suggestions for both tokens. If you would rather have suggestions for the whole input, you must configure the Suggester to use the analyzer of a field type that does not split it, as we did in the example by setting the string field type.

In the example, we use a file-based dictionary. The format is really simple. Each line of the file should contain a suggestion and a weight for this suggestion (separated by a tab character). The weight will be used by the Lookup classes to sort the suggestions that match the input. Note that not all the Lookup classes can use the weight, but AnalyzingSuggester can.
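For instance, a suggestions file could contain entries like these (the entries are made up; the separator between suggestion and weight is a tab character):

```text
new york	5.0
new york times	3.0
new hampshire	1.0
```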

Building from an index field

Instead of a file-based dictionary, the source of the suggestions can also be a field of the Solr index. Here is another example using a field:
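A configuration along these lines would do it (the field name title is an assumption; use whichever indexed field holds your suggestion values):

```xml
<searchComponent class="solr.SpellCheckComponent" name="suggest">
  <lst name="spellchecker">
    <str name="name">suggest</str>
    <str name="classname">org.apache.solr.spelling.suggest.Suggester</str>
    <str name="lookupImpl">org.apache.solr.spelling.suggest.fst.AnalyzingLookupFactory</str>
    <str name="suggestAnalyzerFieldType">lowercase</str>
    <str name="fieldType">string</str>
    <!-- use the indexed values of this field as the source of suggestions -->
    <str name="field">title</str>
    <!-- keep even infrequent terms in the suggestion list -->
    <float name="threshold">0.0</float>
    <!-- rebuild the in-memory structure on every commit -->
    <str name="buildOnCommit">true</str>
  </lst>
</searchComponent>
```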

Notice now that we don’t use sourceLocation anymore. Instead we use field. The source of suggestions will thus be all the indexed values of this field. We still specified fieldType to indicate how to parse the input. But this time, if we had not set this value, the default analyzer to split the input would have been the one from field (specifically, its query analyzer). This differs from the whitespace splitting that is applied by default when using the file-based dictionary.

Also, be aware of the analyzer that you use to index the values. When using AnalyzingSuggester, it might be best to set a field that does not do any analyzing on the values (like string). Between the analyzer used to parse the input, the analyzer of the field and the analyzer used by AnalyzingSuggester, things can get confusing pretty quickly!

With a field-based source, we have two other settings to consider. First, threshold, according to the Solr wiki, represents “the minimum fraction of documents (of the total) where a term should appear, in order to be added to the lookup dictionary.” This is to prevent infrequent terms from being added to the list of suggestions. Set it to 0 to keep them all. This setting is not used with a file-based dictionary because it is expected that the list only contains suggestions that we want, which makes sense.

Finally, buildOnCommit indicates that we want to rebuild the Lookup structure each time there is a commit. This will add the newer terms from the field into the list of suggestions at each commit. In the case of the file-based dictionary, we did not set this value because it makes no sense to rebuild it at each commit since the suggestions from the file don’t change.

Now that we have configured the search component, we can configure the request handler that will be used to serve the suggestions. Here is how:
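A handler definition along these lines would work (the defaults shown here match the behavior described below):

```xml
<requestHandler class="org.apache.solr.handler.component.SearchHandler" name="/suggest">
  <lst name="defaults">
    <str name="spellcheck">true</str>
    <!-- name of the spellchecker/suggester defined in the search component -->
    <str name="spellcheck.dictionary">suggest</str>
    <!-- sort suggestions by popularity and return at most 5 of them -->
    <str name="spellcheck.onlyMorePopular">true</str>
    <str name="spellcheck.count">5</str>
  </lst>
  <arr name="components">
    <str>suggest</str>
  </arr>
</requestHandler>
```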

This will activate the Suggester for the /suggest handler. The handler will return at most 5 suggestions, sorted by “popularity.” In the case of a file-based dictionary, the popularity is the weight of the suggestions. For a field-based dictionary, it is the frequency of the terms that makes them more popular.

We can now query our handler to get suggestions. For example, using this query:
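Something like the following (host, port and path will depend on your setup; the handler name matches the /suggest handler configured above):

```text
http://localhost:8983/solr/suggest?spellcheck.q=new+y&wt=json
```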

You will then receive suggestions to complete this query, for example, “new york” or “new york times”, depending on what you have in your list of suggestions.

Elasticsearch

Recently, Elasticsearch added a new feature called completion suggester. This new feature uses AnalyzingSuggester to provide in-memory suggestions lookup like Solr’s Suggester. But Elasticsearch’s completion suggester uses an interesting approach to persist the suggestions. Instead of storing them only in memory, they are also stored directly in an index using a custom posting format. The structure is built at the same time the index is built and each Lucene segment will hold part of it, making it much easier to scale across multiple servers. The in-memory structure is automatically refreshed during indexing, so it’s always up to date with the latest suggestions. You can use your main index to store the suggestions or use a dedicated one.

Elasticsearch’s completion suggester supports fuzzy search, weights and the possibility of analyzing the suggestions (since it uses AnalyzingSuggester), but it also adds the possibility of having multiple inputs for the same suggestion, as well as support for payloads. For examples of how to use it, refer to the documentation. This feature is pretty new, so expect some changes in the near future.
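As a quick sketch (the index, type and field names here are invented for illustration, and the exact syntax may vary between Elasticsearch versions), defining a completion field, indexing a suggestion and querying it could look like this:

```shell
# Map a completion field (index/type/field names are examples)
curl -X PUT "localhost:9200/hotels" -d '{
  "mappings": {
    "hotel": {
      "properties": {
        "name_suggest": { "type": "completion" }
      }
    }
  }
}'

# Index a document with multiple inputs, a single output and a weight
curl -X PUT "localhost:9200/hotels/hotel/1" -d '{
  "name_suggest": {
    "input": ["Mercure Hotel Munich", "Mercure Munich"],
    "output": "Hotel Mercure, Munich",
    "weight": 10
  }
}'

# Ask for suggestions matching the prefix "m"
curl -X POST "localhost:9200/hotels/_suggest" -d '{
  "hotels-suggest": {
    "text": "m",
    "completion": { "field": "name_suggest" }
  }
}'
```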

Conclusion

In-memory structures allow for fast autocomplete implementation! Even if the edge n-grams technique is good enough in many cases, depending on multiple factors (the number of suggestions, how often new suggestions are added, the server’s memory, etc.), using an in-memory structure might be better. As always, test with your data and your environment before making a decision!

Pascal works as a developer and search specialist at Norconex. He has worked with Lucene, Solr and Elasticsearch for more than 5 years, as well as with various Java technologies for the last 10 years.


Comments

  • Very nice post.

    Do you plan to add Algolia (www.algolia.com) to your tests?

    I am the CTO of Algolia, and it might interest you since it is an instant search engine: you get very fast autocompletion without any specific mechanism like n-grams (http://blog.algolia.com/full-text-search-in-your-database-algolia-versus-elasticsearch/).

    To give you some technical details, we’ve replaced the traditional approach of using a dictionary and a set of inverted lists in two different data structures with one unique data structure. This enabled us to do several optimizations. For example, we only index some word prefixes and not all word n-grams. This approach leads to an index about 5-10% bigger than if it contained only words, far less than indexing all n-grams. It also greatly reduces the cost of doing a fuzzy prefix match in the data structure using a Levenshtein distance (since there are fewer entries).

    • Pascal Dimassimo

      Thanks Julien! I did not know Algolia, but I’ll take a look at it…

  • Charlie Hull

    Great post – here’s an enhancement to autosuggest we implemented for one of our clients, with images showing suggested products: http://www.brideandgroomdirect.co.uk/ – try typing ‘cards’ in the search box

    • Pascal Dimassimo

      Thanks Charlie! I tried the site you mentioned and the images add a very nice touch to autocomplete!

  • Naresh

    Great article – I followed it and it’s working perfectly. I want some more information about my problem, please take a look at this: http://stackoverflow.com/questions/22196793/how-get-suggestions-from-solr-server-in-a-php-variable

  • jox

    “But there is a mechanism to store the in-memory structure to disk for fast reloading.” – I can’t use such mechanism. Please check this: http://stackoverflow.com/questions/26685340/how-to-persist-a-suggester-in-lucene

  • Sorin Boistean

    Hi Pascal, thank you so much for this amazing article – quick question: should I fire this request to Solr on each typed word or letter in the input, and should I process it via JavaScript and Ajax requests? Thanks.

    • Pascal Dimassimo

      Yes, it is fine to call Solr each time a new character is input by a user. Usually, you should set things up so that the Ajax requests are made to your webapp, which in turn calls Solr for each character. But if your Solr instance is directly accessible to your users, you could call it directly via Ajax.

  • Hi, I’m trying to implement a Google-style suggest with Elasticsearch. I followed the steps you mention and even tried the Elasticsearch documentation, but I don’t get the result I want. This is the configuration I’m using: http://pastebin.com/9jkKgV6x. I’m not sure if the problem is in how I configured the data or in how I query Elasticsearch. Could you help me? Elasticsearch always returns all the field data in the response, and I don’t want that.