Synchronic and Diachronic Predictors of Socialness Ratings of Words


Abstract

Introduction: A new psycholinguistic concept, word socialness, has recently been introduced and studied. A socialness rating reflects the social significance of a word, and dictionaries of socialness ratings have been compiled using either survey or machine methods. Unfortunately, the dictionaries of word socialness ratings created by the survey method are relatively small.

Purpose: The objective of the study is to compile a large dictionary of English word socialness ratings using machine extrapolation, to transfer the rating estimates to other languages, and to obtain diachronic models of socialness ratings.

Method: The socialness ratings of words are estimated using multilayer feedforward neural networks. To obtain synchronic estimates, pre-trained fastText vectors were fed to the network input. To obtain diachronic estimates, word co-occurrence statistics from a large diachronic corpus were used.
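As an illustration of this setup, the sketch below trains a small feedforward regressor on pre-trained fastText vectors to predict human socialness ratings. The ratings file name, its column names, and the network size are assumptions made for the example; the abstract does not specify them.

```python
# Minimal sketch: predict word socialness from pre-trained fastText vectors
# with a small feedforward (MLP) regressor. File and column names are
# illustrative assumptions, not the exact pipeline of the paper.
import numpy as np
import pandas as pd
import fasttext                                     # pip install fasttext
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from scipy.stats import spearmanr

ft = fasttext.load_model("cc.en.300.bin")           # pre-trained English fastText model
norms = pd.read_csv("socialness_norms.csv")         # hypothetical file: columns "word", "rating"

X = np.vstack([ft.get_word_vector(w) for w in norms["word"]])
y = norms["rating"].to_numpy()

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

mlp = MLPRegressor(hidden_layer_sizes=(256, 64), max_iter=500, random_state=0)
mlp.fit(X_tr, y_tr)

rho, _ = spearmanr(y_te, mlp.predict(X_te))
print(f"Spearman correlation on held-out words: {rho:.3f}")
```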

Results: The obtained Spearman's correlation coefficient between human socialness ratings and machine ones is 0.869. The trained models made it possible to obtain socialness ratings for 2 million English words, as well as for a wide range of words in 43 other languages. An unexpected result is that a linear model already provides a highly accurate estimate of the socialness ratings, which can hardly be improved further. Apparently, this is because the space of word vectors contains a distinguished direction responsible for meanings associated with socialness, driven by the social factors that influence word representation and use.
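This linear-model observation can be reproduced in miniature with a ridge regression over the same vectors, reusing X_tr, y_tr, X_te, y_te from the sketch above; the regulariser and its strength are arbitrary choices for the example, and the "socialness direction" read off the weights is only a schematic interpretation of the finding.

```python
# A linear probe over the same fastText vectors (X_tr, y_tr, X_te, y_te as
# in the previous sketch). Accuracy close to the MLP's is what suggests a
# single "socialness direction" in the embedding space.
import numpy as np
from sklearn.linear_model import Ridge
from scipy.stats import spearmanr

ridge = Ridge(alpha=1.0).fit(X_tr, y_tr)
rho_lin, _ = spearmanr(y_te, ridge.predict(X_te))
print(f"Linear model, Spearman's rho: {rho_lin:.3f}")

# Unit-normalised weight vector: the direction along which socialness
# varies most strongly in the vector space.
direction = ridge.coef_ / np.linalg.norm(ridge.coef_)
```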

The article also presents a diachronic neural network predictor of socialness ratings that uses word co-occurrence vectors as input data. It is shown that, using data for a single year from the large diachronic corpus Google Books Ngram, one can obtain accuracy comparable to that of the synchronic estimates.
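To make the diachronic input concrete, the sketch below builds a positive-PMI co-occurrence vector for a target word from one year of bigram counts. The counts dictionary layout, the PPMI weighting, and the context vocabulary are illustrative assumptions, since the abstract only states that word co-occurrence statistics from Google Books Ngram were used; a vector built this way for a given year could then be fed to a trained regressor to obtain a year-specific socialness estimate.

```python
# Sketch of a per-year co-occurrence representation, assuming `counts` is a
# dict {(w1, w2): frequency} of bigram counts for a single year extracted
# from Google Books Ngram (an assumption, not the paper's exact format).
import numpy as np

def ppmi_vector(target, counts, context_words):
    """Positive-PMI vector of `target` over `context_words` for one year."""
    grand = 2.0 * sum(counts.values())          # total of the symmetrised matrix
    marg = {}                                   # marginal count of each word
    for (w1, w2), n in counts.items():
        marg[w1] = marg.get(w1, 0) + n
        marg[w2] = marg.get(w2, 0) + n
    p_t = marg.get(target, 0) / grand
    vec = np.zeros(len(context_words))
    for i, c in enumerate(context_words):
        n_joint = counts.get((target, c), 0) + counts.get((c, target), 0)
        p_joint, p_c = n_joint / grand, marg.get(c, 0) / grand
        if p_joint > 0 and p_t > 0 and p_c > 0:
            vec[i] = max(0.0, np.log(p_joint / (p_t * p_c)))
    return vec
```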

Conclusion: The large machine dictionary of socialness ratings created in this study can be used in psycholinguistic and cultural studies. Changes in socialness ratings can serve as a marker of word meaning change and can be used in lexical semantic change detection.

About the authors

Vladimir Bochkarev

Kazan Federal University

Email: vbochkarev@mail.ru
ORCID iD: 0000-0001-8792-1491
Kazan, Russia

Anna Shevlyakova

Kazan Federal University

Email: anna_ling@mail.ru
ORCID iD: 0000-0002-2659-1887
Kazan, Russia

Andrey Achkeev

Kazan Federal University

Email: fermion@list.ru
ORCID iD: 0000-0002-6558-3521
Kazan, Russia




This article is available under the terms of the Creative Commons Attribution 4.0 International License.