Extending multi-sense word embedding to phrases and sentences for unsupervised semantic applications

HS Chang, A Agrawal, A McCallum - Proceedings of the AAAI Conference on Artificial Intelligence, 2021 - ojs.aaai.org
Abstract
Most unsupervised NLP models represent each word with a single point or single region in semantic space, while existing multi-sense word embeddings cannot represent longer word sequences such as phrases or sentences. We propose a novel embedding method for a text sequence (a phrase or a sentence) in which each sequence is represented by a distinct set of multi-mode codebook embeddings that capture different semantic facets of its meaning. The codebook embeddings can be viewed as cluster centers that summarize the distribution of possibly co-occurring words in a pre-trained word embedding space. We introduce an end-to-end trainable neural model that directly predicts the set of cluster centers from the input text sequence at test time. Our experiments show that the per-sentence codebook embeddings significantly improve performance on unsupervised sentence similarity and extractive summarization benchmarks. In phrase similarity experiments, we find that the multi-facet embeddings provide an interpretable semantic representation but do not outperform the single-facet baseline.
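To make the mechanism concrete, below is a minimal, hypothetical PyTorch sketch of the idea the abstract describes: an encoder reads a token sequence and directly emits K vectors living in a pre-trained word embedding space, trained with a K-means-style objective that pulls each observed co-occurring word toward its nearest predicted center. The class name `MultiFacetEncoder`, the BiLSTM encoder, the linear facet head, and the `cluster_loss` objective are all illustrative assumptions, not the authors' actual architecture; only the interface (sequence in, a set of cluster centers out) comes from the abstract.

```python
import torch
import torch.nn as nn

class MultiFacetEncoder(nn.Module):
    """Hypothetical sketch: predict K codebook embeddings (cluster centers)
    in a pre-trained word embedding space from an input token sequence."""
    def __init__(self, vocab_size, dim=300, num_facets=10, hidden=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.encoder = nn.LSTM(dim, hidden, batch_first=True, bidirectional=True)
        # One linear head maps the pooled sequence representation to
        # K points (facets) in the word embedding space.
        self.facet_head = nn.Linear(2 * hidden, num_facets * dim)
        self.num_facets, self.dim = num_facets, dim

    def forward(self, token_ids):
        h, _ = self.encoder(self.embed(token_ids))           # (B, T, 2H)
        pooled = h.mean(dim=1)                               # (B, 2H)
        centers = self.facet_head(pooled)                    # (B, K*D)
        return centers.view(-1, self.num_facets, self.dim)   # (B, K, D)

def cluster_loss(centers, context_vecs):
    """Assumed K-means-style objective: match each co-occurring word's
    pre-trained embedding to its nearest predicted cluster center."""
    # centers: (B, K, D); context_vecs: (B, M, D)
    dists = torch.cdist(context_vecs, centers)               # (B, M, K)
    return dists.min(dim=-1).values.mean()
```

A toy usage example, with random tensors standing in for real token ids and pre-trained context-word vectors:

```python
model = MultiFacetEncoder(vocab_size=30000)
ids = torch.randint(0, 30000, (4, 12))   # batch of 4 toy sequences
ctx = torch.randn(4, 20, 300)            # stand-in for pre-trained word vectors
loss = cluster_loss(model(ids), ctx)
loss.backward()
```

Because the centers are produced by a single forward pass, no clustering is run at test time; the model amortizes the clustering step, which is what makes the representation cheap enough for unsupervised similarity and summarization pipelines.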