Integrating Distributional Lexical Contrast into Word Embeddings for Antonym-Synonym Distinction
Kim-Anh Nguyen, Sabine Schulte im Walde, Ngoc Thang Vu
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) (ACL), pp. 454–459, 2016.
Abstract
We propose a novel vector representation that integrates lexical contrast into distributional vectors and strengthens the most salient features for determining degrees of word similarity. The improved vectors significantly outperform standard models and distinguish antonyms from synonyms with an average precision of 0.66–0.76 across word classes (adjectives, nouns, verbs). Moreover, we integrate the lexical contrast vectors into the objective function of a skip-gram model. The novel embedding outperforms state-of-the-art models on predicting word similarities in SimLex-999, and on distinguishing antonyms from synonyms.
Links
doi: 10.18653/v1/P16-2074
BibTeX
@inproceedings{nguyen16_acl,
title = {Integrating Distributional Lexical Contrast into Word Embeddings for Antonym-Synonym Distinction},
author = {Nguyen, Kim-Anh and {Schulte im Walde}, Sabine and Vu, Ngoc Thang},
year = {2016},
booktitle = {Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) (ACL)},
pages = {454--459},
doi = {10.18653/v1/P16-2074}
}