Algorithm for Optimization of Keyword Extraction Based on the Application of a Linguistic Parser

Abstract

This article presents an analytical comparison of constituency parsing and dependency parsing, two types of syntactic parsing used in natural language processing (NLP). The study introduces an algorithm that improves keyword extraction by using the parser's noun phrase extraction feature to filter out unsuitable candidate phrases. The algorithm is implemented with three different parsers: SpaCy, AllenNLP, and Stanza. Its effectiveness was compared with that of two popular methods, YAKE and RAKE, on a dataset of English texts. Experimental results show that the proposed algorithm with the SpaCy parser outperforms the other keyword extraction algorithms in both accuracy and speed. With the AllenNLP and Stanza parsers, the algorithm is also more accurate but requires much longer execution time. These results make it possible to assess in more detail the advantages and disadvantages of the parsers studied and to determine directions for further research. The running time of the SpaCy parser is significantly lower than that of the other two because SpaCy uses a transition-based parser, which applies a deterministic or machine-learned sequence of actions to build the dependency tree step by step. Transition-based parsers are typically faster and require less memory than graph-based parsers, making them more efficient for parsing large volumes of text. AllenNLP and Stanza, by contrast, use graph-based parsing models that rely on millions of features, which limits their ability to generalize and slows down analysis compared to transition-based parsers. Achieving a balance between the accuracy and speed of a linguistic parser remains an open problem whose solution matters for the efficiency of text analysis, especially in applications that require accurate real-time processing. To this end, the authors plan further research into possible ways of achieving this balance.
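As an illustration of the noun-phrase filtering idea described in the abstract, the following minimal Python sketch collects noun-phrase candidates with SpaCy and ranks them by frequency. It is not the authors' implementation: the en_core_web_sm pipeline, the determiner/pronoun and stopword filters, and the frequency ranking are all assumptions made for illustration.

    import spacy
    from collections import Counter

    # Small English pipeline; its dependency parser supplies noun_chunks.
    nlp = spacy.load("en_core_web_sm")

    def extract_keywords(text, top_n=10):
        """Return the top_n most frequent noun-phrase candidates in the text."""
        doc = nlp(text)
        counts = Counter()
        for chunk in doc.noun_chunks:  # noun phrases from the dependency parse
            # Illustrative filter: drop determiners and pronouns, and skip
            # phrases made up entirely of stopwords.
            tokens = [t for t in chunk if t.pos_ not in ("DET", "PRON")]
            if not tokens or all(t.is_stop for t in tokens):
                continue
            counts[" ".join(t.lemma_.lower() for t in tokens)] += 1
        return [phrase for phrase, _ in counts.most_common(top_n)]

    print(extract_keywords("Transition-based parsers build the dependency tree "
                           "step by step, while graph-based parsers score "
                           "complete trees."))

Because candidate generation happens inside a single nlp() call, the speed of this step is dominated by the parser itself, which is consistent with the runtime differences between the transition-based SpaCy pipeline and the graph-based AllenNLP and Stanza pipelines reported above.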

About the authors

D. Yu. Kravchenko

Southern Federal University

Email: dkravchenko@sfedu.ru
Nekrasovsky Lane 44

Yu. A. Kravchenko

Southern Federal University

Email: krav-jura@yandex.ru
Nekrasovsky Lane 44

A. Mansour

Southern Federal University

Email: mansur@sfedu.ru
Nekrasovsky Lane 44

J. Mohammad

Southern Federal University

Email: zmohammad@sfedu.ru
Nekrasovsky Lane 44

N. S. Pavlov

Southern Federal University

Email: npavlov@sfedu.ru
Nekrasovsky Lane 44

References

  1. Brown T., Mann B., Ryder N., Subbiah M., Kaplan J.D., Dhariwal P., Neelakantan A., Shyam P., Sastry G., Askell A., et al. Language models are few-shot learners // Advances in neural information processing systems. 2020. vol. 33. pp. 1877–1901.
  2. Zhang Y., Clark S. A tale of two parsers: Investigating and combining graph-based and transition-based dependency parsing // Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing. 2008. pp. 562–571.
  3. Gao L., Madaan A., Zhou S., Alon U., Liu P., Yang Y., Callan J., Neubig G. PAL: Program-aided language models // Proceedings of the 40th International Conference on Machine Learning (ICML). 2023. pp. 10764–10799.
  4. Kravchenko Yu.A., Bova V.V., Kuliev E.V., Rodzin S.I. Simulation of the semantic network of knowledge representation in intelligent assistant systems based on ontological approach // Futuristic Trends in Network and Communication Technologies: Third International Conference, FTNCT. 2021. pp. 241–252.
  5. Chen D., Manning C.D. A fast and accurate dependency parser using neural networks // Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP). 2014. pp. 740–750.
  6. Kiperwasser E., Goldberg Y. Simple and accurate dependency parsing using bidirectional LSTM feature representations // Transactions of the Association for Computational Linguistics. 2016. vol. 4. pp. 313–327.
  7. Kulmizev A., de Lhoneux M., Gontrum J., Fano E., Nivre J. Deep Contextualized Word Embeddings in Transition-Based and Graph-Based Dependency Parsing – A Tale of Two Parsers Revisited // arXiv preprint arXiv:1908.07397. 2019.
  8. Vasiliev Y. Natural language processing with Python and SpaCy: A practical introduction. No Starch Press, 2020. 216 p.
  9. Qi P., Zhang Y., Zhang Y., Bolton J., Manning C.D. Stanza: A Python natural language processing toolkit for many human languages // arXiv preprint arXiv:2003.07082. 2020.
  10. Gardner M., Grus J., Neumann M., Tafjord O., Dasigi P., Liu N., Peters M., Schmitz M., Zettlemoyer L. AllenNLP: A deep semantic natural language processing platform // arXiv preprint arXiv:1803.07640. 2018.
  11. Yamada H., Matsumoto Y. Statistical dependency analysis with support vector machines // Proceedings of the eighth international conference on parsing technologies. 2003. pp. 195–206.
  12. Nivre J. An efficient algorithm for projective dependency parsing // Proceedings of the eighth international conference on parsing technologies. 2003. pp. 149–160.
  13. Kim G., Baldi P., McAleer S. Language models can solve computer tasks // arXiv preprint arXiv:2303.17491. 2023.
  14. Liu B., Jiang Y., Zhang X., Liu Q., Zhang S., Biswas J., Stone P. LLM+P: Empowering large language models with optimal planning proficiency // arXiv preprint arXiv:2304.11477. 2023.
  15. Pei W., Ge T., Chang B. An effective neural network model for graph-based dependency parsing // Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing. 2015. vol. 1. pp. 313–322.
  16. McDonald R., Crammer K., Pereira F. Online large-margin training of dependency parsers // Proceedings of the 43rd annual meeting of the association for computational linguistics (ACL’05). 2005. pp. 91–98.
  17. Eisner J. Three new probabilistic models for dependency parsing: An exploration // arXiv preprint cmp-lg/9706003. 1997.
  18. Tenney I., Das D., Pavlick E. BERT rediscovers the classical NLP pipeline // arXiv preprint arXiv:1905.05950. 2019.
  19. Hewitt J., Manning C.D. A structural probe for finding syntax in word representations // Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 2019. vol. 1. pp. 4129–4138.
  20. Dozat T., Manning C.D. Deep biaffine attention for neural dependency parsing // arXiv preprint arXiv:1611.01734. 2016.
  21. Mao X., Huang S., Li R., Shen L. Automatic keywords extraction based on co-occurrence and semantic relationships between words // IEEE Access. 2020. vol. 8. pp. 117528–117538.
  22. Yang S., Nachum O., Du Y., Wei J., Abbeel P., Schuurmans D. Foundation models for decision making: Problems, methods, and opportunities // arXiv preprint arXiv:2303.04129. 2023.
  23. Honnibal M., Johnson M. An Improved Non-monotonic Transition System for Dependency Parsing // Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Lisbon, Portugal: Association for Computational Linguistics, 2015. pp. 1373–1378. doi: 10.18653/v1/D15-1162.
  24. Campos R., Mangaravite V., Pasquali A., Jorge A., Nunes C., Jatowt A. YAKE! Keyword extraction from single documents using multiple local features // Information Sciences. 2020. vol. 509. pp. 257–289.
  25. Rose S., Engel D., Cramer N., Cowley W. Automatic keyword extraction from individual documents // Text Mining: Applications and Theory. 2010. pp. 1–20.
  26. Hulth A. Improved automatic keyword extraction given more linguistic knowledge // Proceedings of the 2003 conference on Empirical methods in natural language processing. 2003. pp. 216–223.
