Representation of Words in Natural Language Processing: A Survey

Authors

  • Y. A. Losieva Taras Shevchenko National University of Kyiv

DOI:

https://doi.org/10.17721/1812-5409.2019/2.10

Abstract

The article surveys state-of-the-art vector representations of words in natural language processing. Three main types of word vector representation are described: static word embeddings, word representations produced by deep neural networks, and dynamic (contextual) word embeddings that depend on the surrounding text. This is a highly relevant and in-demand area in natural language processing, computational linguistics and artificial intelligence in general. Several models for the vector representation of words (word embeddings) are considered, from the simplest (such as representations that describe the occurrence of words within a document, or models that learn the relationship between pairs of words) to multilayer neural networks and deep bidirectional transformers for language understanding; the models are presented chronologically, in order of their appearance. For each model, the improvements over previous models are described, together with its advantages and disadvantages and the cases or tasks in which one model or another is better suited.
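To make the contrast between the simplest representations and static word embeddings concrete, the sketch below (a minimal, self-contained illustration in pure Python; the toy corpus and all names are hypothetical, and the embedding values are random rather than learned) shows a bag-of-words count vector next to a fixed dense vector per word. In practice such dense vectors would be learned by models like word2vec, GloVe or fastText, as discussed in the survey.

```python
# Minimal illustrative sketch: bag-of-words counts vs. static word embeddings.
# Hypothetical toy corpus; embedding values are random, not learned.
from collections import Counter
import random

corpus = ["the cat sat on the mat", "the dog sat on the log"]

# 1) Bag of words: each document becomes a vector of word counts over the vocabulary.
vocab = sorted({w for doc in corpus for w in doc.split()})

def bow_vector(doc):
    counts = Counter(doc.split())
    return [counts[w] for w in vocab]

# 2) Static word embeddings: each word maps to one fixed dense vector,
#    the same regardless of context (here randomly initialised for illustration).
dim = 4
random.seed(0)
embeddings = {w: [random.uniform(-1.0, 1.0) for _ in range(dim)] for w in vocab}

if __name__ == "__main__":
    print(vocab)
    print(bow_vector(corpus[0]))   # sparse count vector, length == |vocab|
    print(embeddings["cat"])       # dense vector, identical in every context
```

Contextual (dynamic) embeddings such as ELMo or BERT differ from this sketch in that the vector for a word changes with the sentence it appears in.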

Key words: artificial intelligence, natural language processing, computational linguistics, word embeddings.

Pages of the article in the issue: 82 - 87

Language of the article: English

References

MILAJEVS D., KARTSAKLIS D., SADRZADEH M., PURVER M. (2014) Evaluating Neural Word Representations in Tensor-Based Compositional Settings, Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 708-719.

MIKOLOV T., CHEN K., CORRADO G., DEAN J. (2013) Efficient Estimation of Word Representations in Vector Space.

MIKOLOV T., SUTSKEVER I., CHEN K. (2013) Distributed Representations of Words and Phrases and their Compositionality.

JOULIN A., GRAVE E., BOJANOWSKI P., MIKOLOV T. (2016) Bag of Tricks for Efficient Text Classification.

BOJANOWSKI P., GRAVE E., JOULIN A., MIKOLOV T. (2017) Enriching Word Vectors with Subword Information.

PENNINGTON J., SOCHER R., MANNING C. (2014) GloVe: Global Vectors for Word Representation, Association for Computational Linguistics, pp. 1532-1543.

SUTSKEVER I., VINYALS O., LE Q. (2014) Sequence to Sequence Learning with Neural Networks.

VINYALS O., BENGIO S., KUDLUR M. (2016) Order Matters: Sequence to Sequence for Sets, ICLR 2016.

PRABHAVALKAR R., et al. (2017) A Comparison of Sequence-to-Sequence Models for Speech Recognition, ISCA, pp. 939-943.

VENUGOPALAN S. et al. (2015) Sequence to Sequence – Video to Text, Computer Vision Foundation, pp. 4534-4542.

PETERS M. et al. (2018) Deep contextualized word representations.

DEVLIN J., CHANG M., LEE K., TOUTANOVA K. (2019) BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding.

Section

Computer Science and Informatics

How to Cite

Representation of Words in Natural Language Processing: A Survey. (2019). Bulletin of Taras Shevchenko National University of Kyiv. Physical and Mathematical Sciences, 2, 82-87. https://doi.org/10.17721/1812-5409.2019/2.10