A strategy for computing the collection language model.
This class acts as the base class for the implementations of the first normalization of the informative content in the DFR framework.
Implementation used when there is no aftereffect.
Model of the information gain based on the ratio of two Bernoulli processes.
Model of the information gain based on Laplace's law of succession.
Axiomatic approaches for IR.
F1EXP is defined as Sum(tf(term_doc_freq) * ln(docLen) * IDF(term)), where IDF(t) = pow((N+1)/df(t), k), N = total number of docs, df = doc freq.
F1LOG is defined as Sum(tf(term_doc_freq) * ln(docLen) * IDF(term)), where IDF(t) = ln((N+1)/df(t)), N = total number of docs, df = doc freq.
F2EXP is defined as Sum(tfln(term_doc_freq, docLen) * IDF(term)), where IDF(t) = pow((N+1)/df(t), k), N = total number of docs, df = doc freq.
F2LOG is defined as Sum(tfln(term_doc_freq, docLen) * IDF(term)), where IDF(t) = ln((N+1)/df(t)), N = total number of docs, df = doc freq.
F3EXP is defined as Sum(tf(term_doc_freq) * IDF(term) - gamma(docLen, queryLen)), where IDF(t) = pow((N+1)/df(t), k), N = total number of docs, df = doc freq, and gamma(docLen, queryLen) = (docLen - queryLen) * queryLen * s / avdl.
F3LOG is defined as Sum(tf(term_doc_freq) * IDF(term) - gamma(docLen, queryLen)), where IDF(t) = ln((N+1)/df(t)), N = total number of docs, df = doc freq, and gamma(docLen, queryLen) = (docLen - queryLen) * queryLen * s / avdl.
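To make the shared structure of these six functions concrete, here is a small illustrative sketch of the two IDF variants they alternate between (the helper names idfExp and idfLog are hypothetical, not part of Lucene's API):

```java
// Illustrative sketch of the two IDF variants used by the axiomatic
// functions above; idfExp/idfLog are hypothetical helpers, not Lucene API.
public final class AxiomaticIdfSketch {
  // EXP variant (F1EXP/F2EXP/F3EXP): IDF(t) = pow((N + 1) / df(t), k)
  static double idfExp(long numDocs, long docFreq, double k) {
    return Math.pow((numDocs + 1.0) / docFreq, k);
  }

  // LOG variant (F1LOG/F2LOG/F3LOG): IDF(t) = ln((N + 1) / df(t))
  static double idfLog(long numDocs, long docFreq) {
    return Math.log((numDocs + 1.0) / docFreq);
  }

  public static void main(String[] args) {
    // A term occurring in 10 of 1000 documents, with k = 0.35
    System.out.println(idfExp(1000, 10, 0.35)); // ~ 5.0
    System.out.println(idfLog(1000, 10));       // ~ 4.6
  }
}
```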
This class acts as the base class for the specific basic model implementations in the DFR framework.
Limiting form of the Bose-Einstein model.
Implements the approximation of the binomial model with the divergence for DFR.
Geometric as limiting form of the Bose-Einstein model.
An approximation of the I(ne) model.
The basic tf-idf model of randomness.
Tf-idf model of randomness, based on a mixture of Poisson and inverse document frequency.
Implements the Poisson approximation for the binomial model for DFR.
Stores all statistics commonly used by ranking methods.
Simple similarity that gives terms a score that is equal to their query boost.
Expert: Default scoring implementation which encodes norm values as a single byte before being stored.
Implements the Divergence from Independence (DFI) model based on Chi-square statistics (i.e., standardized Chi-squared distance from independence in term frequency tf).
Implements the divergence from randomness (DFR) framework introduced in Gianni Amati and Cornelis Joost Van Rijsbergen's Probabilistic models of information retrieval based on measuring the divergence from randomness (ACM TOIS, 2002).
The probabilistic distribution used to model term occurrence in information-based models.
The smoothed power-law (SPL) distribution for the information-based framework that is described in the original paper.
Provides a framework for the family of information-based models, as described in Stéphane Clinchant and Eric Gaussier's Information-based models for ad hoc IR (SIGIR 2010).
Computes the measure of divergence from independence for DFI scoring functions.
Normalized chi-squared measure of distance from independence.
Saturated measure of distance from independence.
Standardized measure of distance from independence.
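A minimal sketch of plugging one of these measures into DFISimilarity (assuming a Lucene version, roughly 6.2 or later, whose DFISimilarity constructor accepts an Independence measure):

```java
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.similarities.DFISimilarity;
import org.apache.lucene.search.similarities.IndependenceChiSquared;

public final class DfiSearcherFactory {
  // Builds a searcher that scores with DFI using the normalized
  // chi-squared measure of distance from independence.
  public static IndexSearcher dfiSearcher(IndexReader reader) {
    IndexSearcher searcher = new IndexSearcher(reader);
    searcher.setSimilarity(new DFISimilarity(new IndependenceChiSquared()));
    return searcher;
  }
}
```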
The lambda (λ_w) parameter in information-based models.
Computes lambda as (docFreq + 1) / (numberOfDocuments + 1).
Computes lambda as (totalTermFreq + 1) / (numberOfDocuments + 1).
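As a minimal sketch, an information-based model is assembled from a Distribution, a Lambda, and a Normalization (class names as in org.apache.lucene.search.similarities; this particular combination is only illustrative):

```java
import org.apache.lucene.search.similarities.DistributionSPL;
import org.apache.lucene.search.similarities.IBSimilarity;
import org.apache.lucene.search.similarities.LambdaDF;
import org.apache.lucene.search.similarities.NormalizationH2;
import org.apache.lucene.search.similarities.Similarity;

public final class IbExample {
  // An information-based model: SPL distribution, document-frequency
  // lambda, and the H2 term frequency normalization.
  public static Similarity spl() {
    return new IBSimilarity(new DistributionSPL(), new LambdaDF(), new NormalizationH2());
  }
}
```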
Bayesian smoothing using Dirichlet priors.
Language model based on the Jelinek-Mercer smoothing method.
Abstract superclass for language modeling Similarities.
Stores the collection distribution of the current term.
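Both language models take a single smoothing parameter; a brief sketch (the values 2000 and 0.7 are common illustrative choices, not recommendations made by this documentation):

```java
import org.apache.lucene.search.similarities.LMDirichletSimilarity;
import org.apache.lucene.search.similarities.LMJelinekMercerSimilarity;
import org.apache.lucene.search.similarities.Similarity;

public final class LmExamples {
  // Dirichlet-smoothed language model with prior mu = 2000
  public static Similarity dirichlet() {
    return new LMDirichletSimilarity(2000f);
  }

  // Jelinek-Mercer language model with interpolation lambda = 0.7
  public static Similarity jelinekMercer() {
    return new LMJelinekMercerSimilarity(0.7f);
  }
}
```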
Implements the CombSUM method for combining evidence from multiple similarity values, described in: Joseph A. Shaw and Edward A. Fox, "Combination of Multiple Searches", Text REtrieval Conference (TREC, 1993).
This class acts as the base class for the implementations of the term frequency normalization methods in the DFR framework.
Implementation used when there is no normalization.
Normalization model that assumes a uniform distribution of the term frequency.
Normalization model in which the term frequency is inversely related to the length.
Dirichlet Priors normalization.
Provides the ability to use a different Similarity for each field.
Similarity defines the components of Lucene scoring.
Stores the weight for a query across the indexed collection.
A subclass of Similarity that provides a simplified API for its descendants.
Similarity serves as the base for ranking functions. For searching, users can employ the models already implemented or create their own by extending one of the classes in this package.
BM25Similarity is an optimized
implementation of the successful Okapi BM25 model.
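For instance, constructing BM25 with explicit parameters (a minimal sketch; 1.2 and 0.75 are the usual defaults, shown here only to illustrate the constructor):

```java
import org.apache.lucene.search.similarities.BM25Similarity;

public final class Bm25Example {
  // k1 controls term-frequency saturation; b controls length normalization.
  public static BM25Similarity bm25() {
    return new BM25Similarity(1.2f, 0.75f);
  }
}
```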
SimilarityBase provides a basic
implementation of the Similarity contract and exposes a highly simplified
interface, which makes it an ideal starting point for new ranking functions.
Lucene ships the following methods built on SimilarityBase: the axiomatic similarities, DFISimilarity, DFRSimilarity, IBSimilarity, LMDirichletSimilarity, and LMJelinekMercerSimilarity.
Since SimilarityBase is not optimized to the same extent as BM25Similarity, a difference in performance is to be expected when using the methods listed above. However, optimizations can always be implemented in subclasses; see below.
Chances are the available Similarities are sufficient for all your searching needs. However, in some applications it may be necessary to customize your Similarity implementation. For instance, some applications do not need to distinguish between shorter and longer documents (see a "fair" similarity).
When changing your Similarity, you must do so for both indexing and searching, and the change must happen before either of these actions takes place. Although in theory there is nothing stopping you from changing it mid-stream, what will happen just isn't well-defined.
To make this change, implement your own Similarity (most likely you'll want to simply subclass an existing method, be it ClassicSimilarity or a descendant of SimilarityBase), and then register the new class by calling IndexWriterConfig.setSimilarity(Similarity) before indexing and IndexSearcher.setSimilarity(Similarity) before searching.
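A minimal sketch of that registration on both sides (assuming Lucene 5+, where IndexWriterConfig takes just an Analyzer):

```java
import java.io.IOException;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.similarities.Similarity;
import org.apache.lucene.store.Directory;

public final class SimilaritySetup {
  // Register the same Similarity on both sides: at index time via
  // IndexWriterConfig, and at search time via IndexSearcher.
  public static IndexWriter writer(Directory dir, Similarity sim) throws IOException {
    IndexWriterConfig config = new IndexWriterConfig(new StandardAnalyzer());
    config.setSimilarity(sim); // must happen before any documents are indexed
    return new IndexWriter(dir, config);
  }

  public static IndexSearcher searcher(Directory dir, Similarity sim) throws IOException {
    IndexSearcher searcher = new IndexSearcher(DirectoryReader.open(dir));
    searcher.setSimilarity(sim); // must happen before any searches run
    return searcher;
  }
}
```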
The easiest way to quickly implement a new ranking method is to extend SimilarityBase, which provides basic implementations for the low-level aspects of the Similarity contract. Subclasses are only required to implement SimilarityBase.score(BasicStats, float, float) and toString().
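As a sketch of that extension point, here is a toy subclass (a deliberately naive tf-idf-style score, not a recommended model; the float-based signature matches older Lucene versions, newer ones use doubles):

```java
import org.apache.lucene.search.similarities.BasicStats;
import org.apache.lucene.search.similarities.SimilarityBase;

// A toy ranking function built on SimilarityBase.
public class ToySimilarity extends SimilarityBase {
  @Override
  protected float score(BasicStats stats, float freq, float docLen) {
    // idf = ln((docCount + 1) / (docFreq + 1)); freq is the term frequency
    float idf = (float) Math.log((stats.getNumberOfDocuments() + 1.0)
                                 / (stats.getDocFreq() + 1.0));
    return freq * idf;
  }

  @Override
  public String toString() {
    return "ToySimilarity";
  }
}
```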
Another option is to extend one of the frameworks based on SimilarityBase. These Similarities are implemented modularly; e.g., DFRSimilarity delegates computation of the three parts of its formula to the classes BasicModel, AfterEffect, and Normalization. Instead of subclassing the Similarity, one can simply introduce a new basic model and tell DFRSimilarity to use it, as the sketch below shows.
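For example, a DFR model can be assembled from existing components without writing any scoring code (a minimal sketch; the G/B/H2 combination is only illustrative, and a custom BasicModel subclass could be swapped in for the first argument):

```java
import org.apache.lucene.search.similarities.AfterEffectB;
import org.apache.lucene.search.similarities.BasicModelG;
import org.apache.lucene.search.similarities.DFRSimilarity;
import org.apache.lucene.search.similarities.NormalizationH2;
import org.apache.lucene.search.similarities.Similarity;

public final class DfrExample {
  // DFR model composed from existing parts: geometric basic model (G),
  // Bernoulli after-effect (B), and the H2 length normalization.
  public static Similarity gb2() {
    return new DFRSimilarity(new BasicModelG(), new AfterEffectB(), new NormalizationH2());
  }
}
```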
If you are interested in use cases for changing your similarity, see the Lucene users' mailing list at Overriding Similarity. In summary, here are a few use cases:
The SweetSpotSimilarity in org.apache.lucene.misc gives small increases as the frequency increases a small amount and then greater increases when you hit the "sweet spot", i.e. where you think the frequency of terms is more significant.
Overriding tf — In some applications, it doesn't matter what the score of a document is as long as a matching term occurs. In these cases people have overridden Similarity to return 1 from the tf() method.
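A sketch of that override, assuming a Lucene version that still ships ClassicSimilarity with a public tf(float) hook:

```java
import org.apache.lucene.search.similarities.ClassicSimilarity;

// Binary term frequency: any match contributes the same tf weight.
public class BinaryTfSimilarity extends ClassicSimilarity {
  @Override
  public float tf(float freq) {
    return freq > 0 ? 1f : 0f;
  }
}
```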
Changing Length Normalization — By overriding lengthNorm, it is possible to discount how the length of a field contributes to a score. In ClassicSimilarity, lengthNorm = 1 / (numTerms in field)^0.5, but if one changes this to be 1 / (numTerms in field), all fields will be treated "fairly".
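A sketch of the "fair" variant (the exact lengthNorm signature has varied across Lucene versions; this assumes one where ClassicSimilarity exposes lengthNorm(int numTerms)):

```java
import org.apache.lucene.search.similarities.ClassicSimilarity;

// "Fair" length normalization: 1 / numTerms instead of 1 / sqrt(numTerms),
// so longer fields are not favored over shorter ones.
public class FairLengthSimilarity extends ClassicSimilarity {
  @Override
  public float lengthNorm(int numTerms) {
    return 1f / numTerms;
  }
}
```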
[One would override the Similarity in] ... any situation where you know more about your data than just that it's "text" is a situation where it *might* make sense to override your Similarity method.