6.8. Pairwise metrics, Affinities and Kernels
The sklearn.metrics.pairwise module contains both distance metrics and kernels. A brief summary of the two is given here.
Distance metrics are functions d(a, b) such that d(a, b) < d(a, c) if objects a and b are considered “more similar” than objects a and c. Two objects exactly alike would have a distance of zero. One of the most popular examples is Euclidean distance. To be a ‘true’ metric, it must obey the following four conditions:

1. d(a, b) >= 0, for all a and b
2. d(a, b) = 0, if and only if a = b (positive definiteness)
3. d(a, b) = d(b, a) (symmetry)
4. d(a, c) <= d(a, b) + d(b, c) (the triangle inequality)
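As a quick illustration (the three points below are arbitrary), the Euclidean distances returned by pairwise_distances satisfy, for example, the triangle inequality:

>>> import numpy as np
>>> from sklearn.metrics import pairwise_distances
>>> pts = np.array([[0.0, 0.0], [1.0, 0.0], [4.0, 3.0]])   # three arbitrary points a, b, c
>>> D = pairwise_distances(pts)                            # Euclidean by default
>>> bool(D[0, 2] <= D[0, 1] + D[1, 2])                     # d(a, c) <= d(a, b) + d(b, c)
True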
Kernels are measures of similarity, i.e. s(a, b) > s(a, c) if objects a and b are considered “more similar” than objects a and c. A kernel must also be positive semi-definite.
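Positive semi-definiteness can be checked numerically on a kernel matrix; here is a small sketch (the random data is illustrative, and the tolerance accounts for floating-point error):

>>> import numpy as np
>>> from sklearn.metrics.pairwise import rbf_kernel
>>> X = np.random.RandomState(0).rand(5, 3)
>>> K = rbf_kernel(X)                              # a valid kernel matrix is symmetric PSD
>>> bool(np.all(np.linalg.eigvalsh(K) >= -1e-10))  # eigenvalues non-negative up to tolerance
True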
There are a number of ways to convert between a distance metric and a similarity measure, such as a kernel. Let D be the distance, and S be the kernel:

1. S = np.exp(-D * gamma), where one heuristic for choosing gamma is 1 / num_features
2. S = 1. / (D / np.max(D))
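For example, the first conversion can be applied to a Euclidean distance matrix as follows (the data and the choice of gamma are illustrative):

>>> import numpy as np
>>> from sklearn.metrics import pairwise_distances
>>> X = np.array([[0.0, 1.0], [1.0, 0.0], [2.0, 2.0]])
>>> D = pairwise_distances(X)          # Euclidean distance matrix
>>> gamma = 1.0 / X.shape[1]           # heuristic: 1 / num_features
>>> S = np.exp(-D * gamma)             # similarities in (0, 1], with 1 on the diagonal
>>> bool(np.allclose(np.diag(S), 1.0))
True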
The distances between the row vectors of X and the row vectors of Y can be evaluated using pairwise_distances. If Y is omitted the pairwise distances of the row vectors of X are calculated. Similarly, pairwise_kernels can be used to calculate the kernel between X and Y using different kernel functions. See the API reference for more details.
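A short illustration of both functions (the arrays X and Y are toy data):

>>> import numpy as np
>>> from sklearn.metrics import pairwise_distances
>>> from sklearn.metrics.pairwise import pairwise_kernels
>>> X = np.array([[2, 3], [3, 5], [5, 8]])
>>> Y = np.array([[1, 0], [2, 1]])
>>> pairwise_distances(X, Y, metric='manhattan')
array([[ 4.,  2.],
       [ 7.,  5.],
       [12., 10.]])
>>> pairwise_distances(X, metric='manhattan')
array([[0., 3., 8.],
       [3., 0., 5.],
       [8., 5., 0.]])
>>> pairwise_kernels(X, Y, metric='linear')
array([[ 2.,  7.],
       [ 3., 11.],
       [ 5., 18.]])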
6.8.1. Cosine similarity

cosine_similarity computes the L2-normalized dot product of vectors. That is, if x and y are row vectors, their cosine similarity k is defined as:

$$k(x, y) = \frac{x y^\top}{\|x\| \|y\|}$$
This is called cosine similarity, because Euclidean (L2) normalization projects the vectors onto the unit sphere, and their dot product is then the cosine of the angle between the points denoted by the vectors.
This kernel is a popular choice for computing the similarity of documents represented as tf-idf vectors. cosine_similarity accepts scipy.sparse matrices. (Note that the tf-idf functionality in sklearn.feature_extraction.text can produce normalized vectors, in which case cosine_similarity is equivalent to linear_kernel, only slower.)
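As a quick sketch (the two toy documents are invented for illustration), a sparse tf-idf matrix can be passed to cosine_similarity directly:

>>> from sklearn.feature_extraction.text import TfidfVectorizer
>>> from sklearn.metrics.pairwise import cosine_similarity
>>> corpus = ["the cat sat on the mat", "the dog sat on the log"]
>>> tfidf = TfidfVectorizer().fit_transform(corpus)   # scipy.sparse matrix, rows L2-normalized
>>> S = cosine_similarity(tfidf)                      # sparse input is accepted directly
>>> S.shape
(2, 2)
>>> round(float(S[0, 0]), 2)                          # each document has similarity 1 with itself
1.0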
6.8.2. Linear kernel
The function linear_kernel computes the linear kernel, that is, a special case of polynomial_kernel with degree=1 and coef0=0 (homogeneous). If x and y are column vectors, their linear kernel is:

$$k(x, y) = x^\top y$$
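A minimal check of this equivalence (the array X is illustrative; gamma=1 is passed explicitly because polynomial_kernel defaults to gamma=1/n_features):

>>> import numpy as np
>>> from sklearn.metrics.pairwise import linear_kernel, polynomial_kernel
>>> X = np.array([[1.0, 2.0], [3.0, 4.0]])
>>> K_lin = linear_kernel(X)                          # X @ X.T
>>> K_poly = polynomial_kernel(X, degree=1, gamma=1, coef0=0)
>>> bool(np.allclose(K_lin, K_poly))
True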
6.8.3. Polynomial kernel

The function polynomial_kernel computes the degree-d polynomial kernel between two vectors. The polynomial kernel is defined as:

$$k(x, y) = (\gamma \, x^\top y + c_0)^d$$

where:

- x, y are the input vectors
- d is the kernel degree
- $\gamma$ and $c_0$ correspond to the gamma and coef0 parameters

If $c_0 = 0$ the kernel is said to be homogeneous.
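A worked instance of the formula (toy vectors; with gamma=1, coef0=1 and degree=2 the kernel reduces to (x^T y + 1)^2):

>>> import numpy as np
>>> from sklearn.metrics.pairwise import polynomial_kernel
>>> x = np.array([[1.0, 2.0]])
>>> y = np.array([[3.0, 0.5]])
>>> # x . y = 1*3 + 2*0.5 = 4, so the kernel value is (4 + 1)**2 = 25
>>> float(polynomial_kernel(x, y, degree=2, gamma=1, coef0=1)[0, 0])
25.0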
6.8.4. Sigmoid kernel
The function sigmoid_kernel computes the sigmoid kernel between two vectors. The sigmoid kernel is also known as the hyperbolic tangent kernel, or Multilayer Perceptron kernel (because, in the neural network field, it is often used as a neuron activation function). It is defined as:

$$k(x, y) = \tanh(\gamma \, x^\top y + c_0)$$

where:

- x, y are the input vectors
- $\gamma$ is known as the slope
- $c_0$ is known as the intercept
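A small numeric check (toy vectors; with gamma=1 and coef0=0 the result is simply tanh(x^T y)):

>>> import numpy as np
>>> from sklearn.metrics.pairwise import sigmoid_kernel
>>> x = np.array([[1.0, 2.0]])
>>> y = np.array([[0.5, 0.25]])
>>> k = sigmoid_kernel(x, y, gamma=1, coef0=0)   # x . y = 1.0
>>> bool(np.isclose(k[0, 0], np.tanh(1.0)))
True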
6.8.5. RBF kernel

The function rbf_kernel computes the radial basis function (RBF) kernel between two vectors. This kernel is defined as:

$$k(x, y) = \exp(-\gamma \|x - y\|^2)$$

where x and y are the input vectors. If $\gamma = \sigma^{-2}$ the kernel is known as the Gaussian kernel of variance $\sigma^2$.
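For instance (toy vectors and an illustrative gamma):

>>> import numpy as np
>>> from sklearn.metrics.pairwise import rbf_kernel
>>> x = np.array([[0.0, 0.0]])
>>> y = np.array([[3.0, 4.0]])
>>> k = rbf_kernel(x, y, gamma=0.1)              # ||x - y||^2 = 25
>>> bool(np.isclose(k[0, 0], np.exp(-0.1 * 25)))
True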
6.8.6. Laplacian kernel
The function laplacian_kernel is a variant on the radial basis function kernel defined as:

$$k(x, y) = \exp(-\gamma \|x - y\|_1)$$

where x and y are the input vectors and $\|x - y\|_1$ is the Manhattan distance between the input vectors.
It has proven useful in ML applied to noiseless data. See e.g. Machine learning for quantum mechanics in a nutshell.
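The same toy vectors as above illustrate the Manhattan-distance variant:

>>> import numpy as np
>>> from sklearn.metrics.pairwise import laplacian_kernel
>>> x = np.array([[0.0, 0.0]])
>>> y = np.array([[3.0, 4.0]])
>>> k = laplacian_kernel(x, y, gamma=0.1)        # ||x - y||_1 = 7
>>> bool(np.isclose(k[0, 0], np.exp(-0.1 * 7)))
True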
6.8.7. Chi-squared kernel

The chi-squared kernel is a very popular choice for training non-linear SVMs in computer vision applications. It can be computed using chi2_kernel and then passed to an sklearn.svm.SVC with kernel="precomputed":
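A minimal, self-contained snippet (the toy X and y below are invented for illustration):

>>> from sklearn.svm import SVC
>>> from sklearn.metrics.pairwise import chi2_kernel
>>> X = [[0, 1], [1, 0], [.2, .8], [.7, .3]]
>>> y = [0, 1, 0, 1]
>>> K = chi2_kernel(X, gamma=.5)
>>> K
array([[1.        , 0.36787944, 0.89483932, 0.58364548],
       [0.36787944, 1.        , 0.51341712, 0.83822343],
       [0.89483932, 0.51341712, 1.        , 0.7768366 ],
       [0.58364548, 0.83822343, 0.7768366 , 1.        ]])
>>> svm = SVC(kernel='precomputed').fit(K, y)
>>> svm.predict(K)
array([0, 1, 0, 1])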
It can also be directly used as the kernel argument:
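Reusing the same toy X and y:

>>> svm = SVC(kernel=chi2_kernel).fit(X, y)
>>> svm.predict(X)
array([0, 1, 0, 1])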
The chi squared kernel is given by

$$k(x, y) = \exp\left(-\gamma \sum_i \frac{(x[i] - y[i])^2}{x[i] + y[i]}\right)$$
The data is assumed to be non-negative, and is often normalized to have an L1-norm of one. The normalization is rationalized with the connection to the chi squared distance, which is a distance between discrete probability distributions.
The chi squared kernel is most commonly used on histograms (bags) of visual words.
References:
- Zhang, J., Marszalek, M., Lazebnik, S. and Schmid, C. Local features and kernels for classification of texture and object categories: A comprehensive study. International Journal of Computer Vision, 2007.