Class NGramTokenFilter

  • All Implemented Interfaces:
    Closeable, AutoCloseable

    public final class NGramTokenFilter
    extends TokenFilter
    Tokenizes the input into n-grams of the given size(s). As of Lucene 4.4, this token filter:
    • handles supplementary characters correctly,
    • emits all n-grams for the same token at the same position,
    • does not modify offsets,
    • sorts n-grams first by their offset in the original token, then by increasing length (meaning that "abc" will give "a", "ab", "abc", "b", "bc", "c").
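
    The emission order above can be illustrated with a small standalone sketch (plain Java, not Lucene code; the class and method names are only for illustration) that generates n-grams ordered by start offset first, then by increasing length:

    ```java
    import java.util.ArrayList;
    import java.util.List;

    public class NGramOrder {
        // Sketch only: emits all n-grams of sizes min..max for a single token,
        // ordered by start offset first, then by increasing length — the same
        // order NGramTokenFilter documents since Lucene 4.4.
        static List<String> ngrams(String token, int min, int max) {
            List<String> out = new ArrayList<>();
            for (int start = 0; start < token.length(); start++) {
                for (int len = min; len <= max && start + len <= token.length(); len++) {
                    out.add(token.substring(start, start + len));
                }
            }
            return out;
        }

        public static void main(String[] args) {
            // "abc" with gram sizes 1..3 → [a, ab, abc, b, bc, c]
            System.out.println(ngrams("abc", 1, 3));
        }
    }
    ```

    Note that all of these grams are emitted at the same position and with the offsets of the original token, per the points above.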

    If you were using this TokenFilter to perform partial highlighting, that will no longer work, since this filter does not update offsets. You should modify your analysis chain to use NGramTokenizer instead, and potentially override NGramTokenizer.isTokenChar(int) to perform pre-tokenization.
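
    One way this override might look (a sketch for a recent Lucene release; constructor signatures have varied across versions, so check the javadoc for the one you use): treat only letters as token characters, so the tokenizer splits on everything else before producing n-grams with accurate offsets.

    ```java
    import org.apache.lucene.analysis.Tokenizer;
    import org.apache.lucene.analysis.ngram.NGramTokenizer;

    // Sketch: pre-tokenize by treating only letters as token characters.
    // Because NGramTokenizer assigns each n-gram its real offsets in the
    // input, highlighting on the resulting grams lines up with the text.
    Tokenizer ngrams = new NGramTokenizer(2, 3) {
        @Override
        protected boolean isTokenChar(int chr) {
            return Character.isLetter(chr); // split on any non-letter
        }
    };
    ```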