ArangoSearch Analyzers

    Analyzers can be used on their own to tokenize and normalize strings in AQL queries with the TOKENS() function.
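
    For example, assuming an arangosh session, a string could be tokenized with the built-in text_en Analyzer like this (the input string is illustrative):

      db._query(`RETURN TOKENS("ArangoSearch is fast", "text_en")`).toArray();
      // expected output (approximate): [ [ "arangosearch", "is", "fast" ] ]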

    How Analyzers process values depends on their type and configuration. The configuration comprises type-specific properties and a list of features. The features control the additional metadata to be generated to augment View indexes, for instance to be able to rank results.

    Analyzers can be managed via an HTTP API and through a JavaScript module.
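
    A minimal sketch of managing Analyzers from arangosh via the @arangodb/analyzers module (the Analyzer name is illustrative):

      var analyzers = require("@arangodb/analyzers");
      // list all Analyzers visible in the current database
      analyzers.toArray();
      // create a custom Analyzer (type "identity" takes no properties)
      analyzers.save("myIdentity", "identity", {}, []);
      // look it up and remove it again
      analyzers.analyzer("myIdentity");
      analyzers.remove("myIdentity");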

    While most of the Analyzer functionality is geared towards text processing, there is no restriction to strings as input data type when using them through Views – your documents could have attributes of any data type after all.

    Strings are processed according to the Analyzer, whereas other primitive data types (null, true, false, numbers) are added to the index unchanged.

    The elements of arrays are unpacked, processed and indexed individually, regardless of the level of nesting. That is, strings are processed by the configured Analyzer(s) and other primitive values are indexed as-is.

    Objects, including any nested objects, are indexed as sub-attributes. This applies to sub-objects as well as objects in arrays. Only primitive values are added to the index; arrays and objects cannot be searched for.

    Also see:

    • SEARCH operation on how to query indexed values such as numbers and nested values
    • the ArangoSearch Views documentation for details about how compound data types (arrays, objects) get indexed

    Analyzer Names

    Each Analyzer has a name for identification with the following naming conventions:

    • The name must only consist of the letters a to z (both in lower and upper case), the numbers 0 to 9, underscore (_) and dash (-) symbols. This also means that any non-ASCII names are not allowed.
    • It must always start with a letter.
    • The maximum allowed length of a name is 254 bytes.
    • Analyzer names are case-sensitive.

    Custom Analyzers are stored per database, in a system collection _analyzers. The names get prefixed with the database name and two colons, e.g. myDB::customAnalyzer. This does not apply to the globally available built-in Analyzers, which are not stored in an _analyzers collection.

    Custom Analyzers stored in the _system database can be referenced in queries against other databases by specifying the prefixed name, e.g. _system::customGlobalAnalyzer. Analyzers stored in databases other than _system cannot be accessed from within another database, however.
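
    For example, assuming an Analyzer customGlobalAnalyzer exists in the _system database, a query in any other database could reference it by its prefixed name:

      db._query(`RETURN TOKENS("some text", "_system::customGlobalAnalyzer")`).toArray();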

    The currently implemented Analyzer types are:

    • identity: treat value as atom (no transformation)
    • delimiter: split into tokens at user-defined character
    • stem: apply stemming to the value as a whole
    • norm: apply normalization to the value as a whole
    • ngram: create n-grams from value with user-defined lengths
    • text: tokenize into words, optionally with stemming, normalization, stop-word filtering and edge n-gram generation

    Available normalizations are case conversion and accent removal (conversion of characters with diacritical marks to the base characters).

    Analyzer Properties

    The valid attributes/values for the properties are dependent on what type is used. For example, the delimiter type needs to know the desired delimiting character(s), whereas the text type takes a locale, stop-words and more.

    Identity

    An Analyzer applying the identity transformation, i.e. returning the input unmodified.

    Delimiter

    An Analyzer capable of breaking up delimited text into tokens as per RFC 4180 (without starting new records on newlines).

    The properties allowed for this Analyzer are an object with the following attributes:

    • delimiter (string): the delimiting character(s)
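
    A minimal sketch of creating and testing a delimiter Analyzer in arangosh (Analyzer name and input are illustrative):

      var analyzers = require("@arangodb/analyzers");
      analyzers.save("commaSplit", "delimiter", { delimiter: "," }, ["frequency", "norm", "position"]);
      db._query(`RETURN TOKENS("foo,bar,baz", "commaSplit")`).toArray();
      // expected output (approximate): [ [ "foo", "bar", "baz" ] ]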

    Stem

    An Analyzer capable of stemming the text, treated as a single token, for supported languages.

    The properties allowed for this Analyzer are an object with the following attributes:

    • locale (string): a locale in the format language[_COUNTRY][.encoding][@variant] (square brackets denote optional parts), e.g. "de.utf-8" or "en_US.utf-8". Only UTF-8 encoding is meaningful in ArangoDB.
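
    For illustration, a stem Analyzer could be created and tested like this (Analyzer name and input are assumptions):

      var analyzers = require("@arangodb/analyzers");
      analyzers.save("stem_en", "stem", { locale: "en.utf-8" }, ["frequency", "norm"]);
      db._query(`RETURN TOKENS("databases", "stem_en")`).toArray();
      // expected output (approximate): [ [ "databas" ] ]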

    Norm

    An Analyzer capable of normalizing the text, treated as a single token, i.e. case conversion and accent removal.

    The properties allowed for this Analyzer are an object with the following attributes:

    • locale (string): a locale in the format language[_COUNTRY][.encoding][@variant] (square brackets denote optional parts), e.g. "de.utf-8" or "en_US.utf-8". Only UTF-8 encoding is meaningful in ArangoDB.
    • accent (boolean, optional):
      • true to preserve accented characters (default)
      • false to convert accented characters to their base characters
    • case (string, optional):
      • "lower" to convert to all lower-case characters
      • "upper" to convert to all upper-case characters
      • "none" to not change character case (default)

    N-gram

    An Analyzer capable of producing n-grams from a specified input in a range of min..max (inclusive). Can optionally preserve the original input.

    This Analyzer type can be used to implement substring matching. Note that it slices the input based on bytes and not characters by default (streamType). The "binary" mode supports single-byte characters only; multi-byte UTF-8 characters raise an Invalid UTF-8 sequence query error.

    The properties allowed for this Analyzer are an object with the following attributes:

    • min (number): unsigned integer for the minimum n-gram length
    • max (number): unsigned integer for the maximum n-gram length
    • preserveOriginal (boolean):
      • true to include the original value as well
      • false to produce the n-grams based on min and max only
    • startMarker (string, optional): this value will be prepended to n-grams which include the beginning of the input. Can be used for matching prefixes. Choose a character or sequence as marker which does not occur in the input.
    • endMarker (string, optional): this value will be appended to n-grams which include the end of the input. Can be used for matching suffixes. Choose a character or sequence as marker which does not occur in the input.
    • streamType (string, optional): type of the input stream
      • "binary": one byte is considered as one character (default)
      • "utf8": one Unicode codepoint is treated as one character

    Examples

    With min = 4 and max = 5, the Analyzer will produce the following n-grams for the input string "foobar":

    • "foob"
    • "fooba"
    • "foobar" (if preserveOriginal is enabled)
    • "ooba"
    • "oobar"
    • "obar"

    An input string "foo" will not produce any n-gram unless preserveOriginal is enabled, because it is shorter than the min length of 4.

    The above example but with startMarker = "^" and endMarker = "$" would produce the following:

    • "^foob"
    • "^fooba"
    • "foobar$" (if preserveOriginal is enabled)
    • "ooba"
    • "oobar$"
    • "obar$"

    Text

    An Analyzer capable of breaking up strings into individual words while also optionally filtering out stop-words, extracting word stems, applying case conversion and accent removal.

    Stemming support is provided by Snowball.

    The properties allowed for this Analyzer are an object with the following attributes:

    • locale (string): a locale in the format language[_COUNTRY][.encoding][@variant] (square brackets denote optional parts), e.g. "de.utf-8" or "en_US.utf-8". Only UTF-8 encoding is meaningful in ArangoDB.
    • accent (boolean, optional):
      • true to preserve accented characters
      • false to convert accented characters to their base characters (default)
    • case (string, optional):
      • "lower" to convert to all lower-case characters (default)
      • "upper" to convert to all upper-case characters
      • "none" to not change character case
    • stemming (boolean, optional):
      • true to apply stemming on returned words (default)
      • false to leave the tokenized words as-is
    • edgeNgram (object, optional): if present, then edge n-grams are generated for each token (word). That is, the start of the n-gram is anchored to the beginning of the token, whereas the ngram Analyzer would produce all possible substrings from a single input token (within the defined length restrictions). Edge n-grams can be used to cover word-based auto-completion queries with an index, for which you should set the following other options: accent: false, case: "lower" and most importantly stemming: false.
      • min (number, optional): minimal n-gram length
      • max (number, optional): maximal n-gram length
      • preserveOriginal (boolean, optional): whether to include the original token even if its length is less than min or greater than max
    • stopwords (array, optional): an array of strings with words to omit from the result. Default: load words from stopwordsPath. To disable stop-word filtering provide an empty array []. If both stopwords and stopwordsPath are provided then both word sources are combined.
    • stopwordsPath (string, optional): path with a language sub-directory (e.g. en for a locale en_US.utf-8) containing files with words to omit. Each word has to be on a separate line. Everything after the first whitespace character on a line will be ignored and can be used for comments. The files can be named arbitrarily and have any file extension (or none).

    Default: if no path is provided then the value of the environment variable IRESEARCH_TEXT_STOPWORD_PATH is used to determine the path, or if it is undefined then the current working directory is assumed. If the stopwords attribute is provided then no stop-words are loaded from files, unless an explicit stopwordsPath is also provided.

    Note that if the stopwordsPath cannot be accessed, is missing language sub-directories or has no files for a language required by an Analyzer, then the creation of a new Analyzer is refused. If such an issue is discovered for an existing Analyzer during startup then the server will abort with a fatal error.

    Examples

    The built-in text_en Analyzer has stemming enabled (note the word endings):
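
    In arangosh, output like the following could be produced with a query along these lines (the input string is an assumption):

      db._query(`RETURN TOKENS("Crazy fast NoSQL-database!", "text_en")`).toArray();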


      [
        [
          "crazi",
          "fast",
          "nosql",
          "databas"
        ]
      ]
      [object ArangoQueryCursor, count: 1, cached: false, hasMore: false]

    You may create a custom Analyzer with the same configuration but with stemming disabled like this:
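
    A sketch of the corresponding arangosh commands (the TOKENS() input string is an assumption):

      var analyzers = require("@arangodb/analyzers");
      analyzers.save("text_en_nostem", "text",
        { locale: "en.utf-8", case: "lower", accent: false, stemming: false, stopwords: [] },
        ["frequency", "norm", "position"]);
      db._query(`RETURN TOKENS("Crazy fast NoSQL-database!", "text_en_nostem")`).toArray();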


      {
        "name" : "_system::text_en_nostem",
        "type" : "text",
        "properties" : {
          "locale" : "en.utf-8",
          "case" : "lower",
          "stopwords" : [ ],
          "accent" : false,
          "stemming" : false
        },
        "features" : [
          "position",
          "norm",
          "frequency"
        ]
      }
      [
        [
          "crazy",
          "fast",
          "nosql",
          "database"
        ]
      ]
      [object ArangoQueryCursor, count: 1, cached: false, hasMore: false]

    Custom text Analyzer with the edge n-grams feature and normalization enabled, stemming disabled and "the" defined as stop-word to exclude it:
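
    Again as a sketch, the arangosh commands might look like this (the TOKENS() input string is an assumption):

      var analyzers = require("@arangodb/analyzers");
      analyzers.save("text_edge_ngrams", "text",
        { locale: "en.utf-8", case: "lower", accent: false, stemming: false,
          stopwords: [ "the" ],
          edgeNgram: { min: 3, max: 8, preserveOriginal: true } },
        ["frequency", "norm", "position"]);
      db._query(`RETURN TOKENS("The quick brown fox jumps over the dogWithAVeryLongName", "text_edge_ngrams")`).toArray();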


      {
        "name" : "_system::text_edge_ngrams",
        "type" : "text",
        "properties" : {
          "locale" : "en.utf-8",
          "case" : "lower",
          "stopwords" : [
            "the"
          ],
          "accent" : false,
          "stemming" : false,
          "edgeNgram" : {
            "min" : 3,
            "max" : 8,
            "preserveOriginal" : true
          }
        },
        "features" : [
          "position",
          "norm",
          "frequency"
        ]
      }
      [
        [
          "qui",
          "quic",
          "quick",
          "bro",
          "brow",
          "brown",
          "fox",
          "jum",
          "jump",
          "jumps",
          "ove",
          "over",
          "dog",
          "dogw",
          "dogwi",
          "dogwit",
          "dogwith",
          "dogwitha",
          "dogwithaverylongname"
        ]
      ]
      [object ArangoQueryCursor, count: 1, cached: false, hasMore: false]

    Analyzer Features

    The features of an Analyzer determine what term matching capabilities will be available and as such are only applicable in the context of ArangoSearch Views.

    The valid values for the features are dependent on both the capabilities of the underlying type and the query filtering and sorting functions that the result can be used with. For example the text type will produce frequency + norm + position and the PHRASE() AQL function requires frequency + position to be available.

    Currently the following features are supported:

    • frequency: how often a term is seen, required for PHRASE()
    • norm: the field normalization factor
    • position: sequentially increasing term position, required for PHRASE(). If present then the frequency feature is also required
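
    For instance, assuming a View myView that indexes a text attribute with the text_en Analyzer, a PHRASE() search relying on the frequency and position features might look like this (View and attribute names are assumptions):

      db._query(`
        FOR doc IN myView
          SEARCH PHRASE(doc.text, "quick brown fox", "text_en")
          RETURN doc
      `).toArray();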

    Built-in Analyzers

    The identity Analyzer has the features frequency and norm. The Analyzers of type text all tokenize strings with stemming enabled, no stopwords configured, case conversion set to lower, accent removal turned on and the features frequency, norm and position:

    Name       Type       Language
    identity   identity   none
    text_de    text       German
    text_en    text       English
    text_es    text       Spanish
    text_fi    text       Finnish
    text_fr    text       French
    text_it    text       Italian
    text_nl    text       Dutch
    text_no    text       Norwegian
    text_pt    text       Portuguese
    text_ru    text       Russian
    text_sv    text       Swedish
    text_zh    text       Chinese