Natural text: mathematical methods of attribution


The article proposes two algorithms for filtering substandard texts. The first relies on the fact that the frequency of n-gram occurrence in a quality text obeys Zipf's law, and that the law ceases to hold once the words of the text are rearranged. Comparing the frequency characteristics of the source text with those of the text obtained by permuting its words enables researchers to draw conclusions about the quality of the source text. The second algorithm is based on calculating and comparing the rate at which new words appear in good-quality and randomly generated texts. In a good text this rate is, as a rule, uneven, whereas in randomly generated texts this unevenness is smoothed out, which makes it possible to detect low-quality texts. The methods for solving the problem of filtering substandard texts are statistical and rest on the calculation of various frequency characteristics of the text. In contrast to the “bag of words” model, a graph model of the text, in which the vertices are words or word forms and the edges are pairs of words, as well as models with higher-order structures, which use the frequency characteristics of n-grams with n > 2, take into account the mutual disposition of word pairs and word triples within a common part of the text, for example within one sentence or one n-gram.
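The two ideas in the abstract can be sketched in code. The sketch below is illustrative only: the function names, the use of bigrams, and the least-squares fit of the rank–frequency slope are assumptions of this example, not the authors' published implementation. It compares the bigram Zipf slope of a text with that of a random permutation of its words (first algorithm), and builds the cumulative new-word curve whose unevenness the second algorithm examines.

```python
import math
import random
from collections import Counter


def bigrams(tokens):
    """Adjacent word pairs of a token sequence."""
    return list(zip(tokens, tokens[1:]))


def zipf_slope(items):
    """Least-squares slope of log(frequency) vs. log(rank).

    For data obeying Zipf's law the slope is close to -1.
    Returns 0.0 when there are fewer than two distinct items.
    """
    freqs = sorted(Counter(items).values(), reverse=True)
    if len(freqs) < 2:
        return 0.0
    xs = [math.log(r) for r in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    var = sum((x - mx) ** 2 for x in xs)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / var if var else 0.0


def permutation_gap(tokens, seed=0):
    """Hypothetical quality score for the first algorithm: the difference
    between the bigram Zipf slope of the original text and that of a
    random permutation of its words.  Permuting words leaves unigram
    frequencies unchanged but destroys the bigram structure, so the two
    slopes diverge for natural text."""
    shuffled = tokens[:]
    random.Random(seed).shuffle(shuffled)
    return zipf_slope(bigrams(tokens)) - zipf_slope(bigrams(shuffled))


def new_word_curve(tokens):
    """Cumulative count of distinct words after each position (second
    algorithm).  In natural text this curve grows unevenly; in randomly
    generated text its growth is smoother."""
    seen, curve = set(), []
    for t in tokens:
        seen.add(t)
        curve.append(len(seen))
    return curve
```

A filtering decision would then threshold `permutation_gap` or a dispersion statistic of the increments of `new_word_curve`; the thresholds themselves would have to be calibrated on reference corpora, which the abstract does not specify.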


Natural text, pseudotext, text filtering, Zipf's law, n-grams, the rate of appearance of new words, "bag of words" model of the text, graph model of the text

Short address: https://sciup.org/149129962

IDR: 149129962   |   DOI: 10.15688/jvolsu2.2019.2.13

Scientific article