TY - GEN

T1 - A multi-way divergence metric for vector spaces

AU - Moss, Robert

AU - Connor, Richard

PY - 2013/10/30

Y1 - 2013/10/30

N2 - The majority of work in similarity search focuses on the efficiency of threshold and nearest-neighbour queries. Similarity join has been less well studied, although efficient indexing algorithms have been shown. The multi-way similarity join, extending similarity join to multiple spaces, has received relatively little treatment. Here we present a novel metric designed to assess some concept of a mutual similarity over multiple vectors, thus extending pairwise distance to a more general notion taken over a set of values. In outline, when considering a set of values X, our function gives a single numeric outcome D(X) rather than calculating some compound function over all of d(x,y) where x,y are elements of X. D(X) is strongly correlated with various compound functions, but costs only a little more than a single distance to evaluate. It is derived from an information-theoretic distance metric; it correlates strongly with this metric, and also with other metrics, in high-dimensional spaces. Although we are at an early stage in its investigation, we believe it could potentially be used to help construct more efficient indexes, or to construct indexes more efficiently. The contribution of this short paper is simply to identify the function, to show that it has useful semantic properties, and to show also that it is surprisingly cheap to evaluate. We expect uses of the function in the domain of similarity search to follow.

KW - distance metric

KW - multi-way divergence

UR - http://www.scopus.com/inward/record.url?scp=84886408668&partnerID=8YFLogxK

U2 - 10.1007/978-3-642-41062-8_17

DO - 10.1007/978-3-642-41062-8_17

M3 - Conference contribution

AN - SCOPUS:84886408668

SN - 9783642410611

T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

SP - 169

EP - 174

BT - Similarity Search and Applications - 6th International Conference, SISAP 2013, Proceedings

T2 - 6th International Conference on Similarity Search and Applications, SISAP 2013

Y2 - 2 October 2013 through 4 October 2013

ER -