The GI therefore proposes the following iterative procedure, which can be likened to forms of ‘bootstrapping’:

Let x represent an unknown document and let y represent a random target author’s stylistic ‘profile’. During one hundred iterations, the procedure will randomly select (a) fifty per cent of the available stylistic features (e.g. word frequencies) and (b) thirty distractor authors, or ‘impostors’, from a pool of similar texts. In each iteration, the GI will compute whether x is closer to y than to any of the profiles of the thirty impostors, given the random selection of stylistic features in that iteration. Instead of basing the verification on the direct (first-order) distance between x and y, the GI proposes to count the proportion of iterations in which x was indeed closer to y than to any of the distractors sampled. This proportion can be considered a second-order metric and will automatically be a probability between zero and one, indicating the robustness of the identification of the authors of x and y. Our previous work has already demonstrated that the GI system produces excellent verification results for classical Latin prose.31 31 See the setup in Stover et al., ‘Computational authorship verification method’ (n. 27, above). Our verification code is publicly available from the repository described in M. Kestemont et al., ‘Authenticating the writings’ (n. 29, above).
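The logic of this second-order verification step can be illustrated with a short sketch. This is a minimal illustration under stated assumptions, not the authors’ published implementation: documents and profiles are assumed to be numpy vectors of relative word frequencies over a shared vocabulary, and cosine distance stands in for whichever first-order metric is chosen; the names `gi_verify` and `cosine_distance` are ours.

```python
# Minimal sketch of the GI second-order verification described above.
# Assumption: x, y and each impostor profile are 1-D numpy arrays of
# relative word frequencies; cosine distance is only a placeholder metric.
import numpy as np

def cosine_distance(a, b):
    """Placeholder first-order distance between two frequency vectors."""
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def gi_verify(x, y, impostors, n_iter=100, feature_share=0.5, n_impostors=30,
              dist=cosine_distance, rng=None):
    """Return the proportion of iterations (a value in [0, 1]) in which x
    is closer to the target profile y than to any sampled impostor."""
    rng = rng or np.random.default_rng(0)
    n_features = x.shape[0]
    hits = 0
    for _ in range(n_iter):
        # (a) sample fifty per cent of the available stylistic features ...
        feats = rng.choice(n_features, size=int(feature_share * n_features),
                           replace=False)
        # (b) ... and thirty distractor ('impostor') profiles
        sampled = rng.choice(len(impostors),
                             size=min(n_impostors, len(impostors)),
                             replace=False)
        d_target = dist(x[feats], y[feats])
        d_best_impostor = min(dist(x[feats], impostors[i][feats]) for i in sampled)
        hits += d_target < d_best_impostor
    return hits / n_iter
```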

For modern documents, Koppel and Winter were even able to report encouraging scores for document sizes as small as 500 words.

We have applied a generic implementation of the GI to the HA as follows: we split the individual lives into consecutive samples of 1,000 words (i.e. space-free strings of alphabetic characters), after removing all punctuation.32 32 Previous research (see the publications mentioned in the previous two notes) suggests that 1,000 words is a reasonable document size in this context. Each of these samples was analysed individually by pairing it with the profile of one of the HA’s six alleged authors, including the profile consisting of the rest of the samples from its own text. We represented the sample (the ‘anonymous’ document) by a vector comprising the relative frequencies of the 10,000 most frequent tokens in the entire HA. For each author’s profile, we did the same, although the profile’s vector comprises the average relative frequencies of the 10,000 words. Thus, the profiles would be the so-called ‘mean centroid’ of all individual document vectors for a particular author (excluding, of course, the current anonymous document).33 33 Koppel and Seidman, ‘Automatically identifying’ (n. 30, above). Note that the use of a single centroid per author aims to counter, at least partially, the skewed nature of our data, since some authors are much more strongly represented in the corpus or impostor pool than others. If we were not using centroids but mere text segments, texts by these authors would automatically have been sampled more frequently than others during the impostor bootstrapping.
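The sample preparation and centroid construction described here can be sketched as follows. The tokenization details (lower-casing, a simple alphabetic regular expression) and the helper names (`tokenize`, `split_samples`, `build_vocab`, `vectorize`, `centroid_profile`) are our assumptions for illustration; the essential steps taken from the text are the 1,000-word segmentation, the relative frequencies over the 10,000 most frequent tokens, and the mean-centroid author profiles.

```python
# Sketch of the preprocessing described above, under assumed tokenization details.
import re
from collections import Counter
import numpy as np

def tokenize(text):
    # keep only space-free strings of alphabetic characters (punctuation removed)
    return re.findall(r"[a-zA-Z]+", text.lower())

def split_samples(tokens, size=1000):
    # consecutive, non-overlapping samples of 1,000 words
    return [tokens[i:i + size] for i in range(0, len(tokens) - size + 1, size)]

def build_vocab(all_tokens, n_mfw=10000):
    # the 10,000 most frequent tokens in the entire corpus
    return [w for w, _ in Counter(all_tokens).most_common(n_mfw)]

def vectorize(sample, vocab):
    # relative frequencies of the vocabulary words within one sample
    counts = Counter(sample)
    total = sum(counts.values())
    return np.array([counts[w] / total for w in vocab])

def centroid_profile(sample_vectors):
    # 'mean centroid' of all document vectors attributed to one author
    return np.mean(sample_vectors, axis=0)
```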

To the left of the heatmap in Figure 1 (below), a clustering has been added on top of the rows, reflecting which groups of samples behave similarly.

Next, we ran the verification approach. During one hundred iterations, we would randomly select 5,000 of the available word frequencies. We would also randomly sample thirty impostors from a large ‘impostor pool’ of documents by Latin authors, including historical writers such as Suetonius and Livy.34 34 See Appendix 2 for the authors sampled. The pool of impostor texts can be inspected in the code repository for this paper. In each iteration, we would check whether the anonymous document was closer to the current author’s profile than to any of the impostors sampled. In this study, we use the ‘minmax’ metric, which was recently introduced in the context of the GI framework.35 35 See Koppel and Winter, ‘Determining if two documents’ (n. 26, above). For each combination of an anonymous text and one of the six target authors’ profiles, we would count the proportion of iterations (i.e. a probability between zero and one) in which the anonymous document would indeed be attributed to the target author. The resulting probability table is given in full in the appendix to this paper. Although we present a more detailed discussion of these data below, we have added Figure 1 below as an intuitive visualisation of the overall results of this approach. This is a heatmap visualisation of the result of the GI algorithm for 1,000-word samples from the lives in the HA. Cell values (darker colours mean higher values) represent the probability of each sample being attributed to one of the alleged HA authors, rather than an impostor from a random selection of distractors.
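For concreteness, the sketch below restates the run described in this paragraph: the ‘minmax’ measure (which we take to be the Ružička, or weighted Jaccard, similarity turned into a distance), 5,000 of the 10,000 word frequencies and thirty impostors sampled per iteration, and a table of second-order probabilities with one row per anonymous sample and one column per alleged author. The function names and the suggested seaborn clustermap call (which would also produce the row clustering shown to the left of Figure 1) are illustrative assumptions on our part, not the authors’ published code.

```python
# Sketch of the verification run and probability table described above.
import numpy as np

def minmax_distance(a, b):
    # 'minmax' similarity = sum(min) / sum(max); turned into a distance here
    return 1.0 - np.sum(np.minimum(a, b)) / (np.sum(np.maximum(a, b)) + 1e-12)

def attribution_table(samples, profiles, impostors, n_iter=100,
                      n_features=5000, n_impostors=30, seed=1):
    """Rows: anonymous 1,000-word samples; columns: the alleged HA authors.
    Each cell is the proportion of iterations in which the sample was closer
    to that author's centroid profile than to any sampled impostor."""
    rng = np.random.default_rng(seed)
    table = np.zeros((len(samples), len(profiles)))
    dim = samples[0].shape[0]
    for si, x in enumerate(samples):
        for ai, y in enumerate(profiles):
            hits = 0
            for _ in range(n_iter):
                feats = rng.choice(dim, size=n_features, replace=False)
                imps = rng.choice(len(impostors), size=n_impostors, replace=False)
                d_target = minmax_distance(x[feats], y[feats])
                d_best_imp = min(minmax_distance(x[feats], impostors[i][feats])
                                 for i in imps)
                hits += d_target < d_best_imp
            table[si, ai] = hits / n_iter
    return table

# A heatmap with row clustering, comparable in spirit to Figure 1, could then
# be drawn with, e.g.:
#   import seaborn as sns
#   sns.clustermap(table, col_cluster=False, cmap="Greys")
```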
