results for au:Romanov_A in:cs

- Nov 02 2017 cs.DM arXiv:1711.00189v1 We consider perfect 1-error-correcting codes over a finite field with $q$ elements (briefly, $q$-ary 1-perfect codes). In this paper, a generalized concatenation construction for $q$-ary 1-perfect codes is presented that allows us to construct $q$-ary 1-perfect codes of length $(q - 1)nm + n + m$ from given $q$-ary 1-perfect codes of length $n = (q^{s_1} - 1) / (q - 1)$ and $m = (q^{s_2} - 1) / (q - 1)$, where $s_1, s_2$ are natural numbers not less than two. This construction also allows us to construct $q$-ary codes with parameters $(q^{s_1 + s_2}, q^{q^{s_1 + s_2} - (s_1 + s_2) - 1}, 3)_q$ and can be regarded as a $q$-ary analogue of the well-known Phelps construction.
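  A quick numerical check of the length formula above (an illustrative sketch, not code from the paper): the concatenated length $(q-1)nm + n + m$ is again a $q$-ary Hamming length, namely $(q^{s_1+s_2}-1)/(q-1)$.

  ```python
  def hamming_length(q: int, s: int) -> int:
      """Length of the q-ary Hamming code with s check symbols."""
      return (q ** s - 1) // (q - 1)

  # Verify that concatenating codes of lengths n and m yields the next Hamming length.
  for q in (2, 3, 4, 5):
      for s1 in range(2, 6):
          for s2 in range(2, 6):
              n, m = hamming_length(q, s1), hamming_length(q, s2)
              assert (q - 1) * n * m + n + m == hamming_length(q, s1 + s2)
  print("length formula consistent for all tested (q, s1, s2)")
  ```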
- Learning a better representation with neural networks is a challenging problem, which has been tackled extensively from different perspectives in the past few years. In this work, we focus on learning a representation that can be used for a clustering task and introduce two novel loss components that substantially improve the quality of the produced clusters, are simple to apply to an arbitrary model and cost function, and do not require a complicated training procedure. We evaluate them on the two most common types of models, Recurrent Neural Networks and Convolutional Neural Networks, showing that the approach we propose consistently improves the quality of K-means clustering in terms of the Adjusted Mutual Information score and outperforms previously proposed methods.
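  A minimal sketch of the evaluation protocol described above, under assumed details (toy random vectors stand in for learned representations; scikit-learn supplies K-means and the Adjusted Mutual Information score):

  ```python
  import numpy as np
  from sklearn.cluster import KMeans
  from sklearn.metrics import adjusted_mutual_info_score

  rng = np.random.default_rng(0)
  labels = rng.integers(0, 4, size=200)                # ground-truth classes
  reps = rng.normal(size=(200, 32)) + labels[:, None]  # toy stand-in for learned representations

  pred = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(reps)
  print("AMI:", adjusted_mutual_info_score(labels, pred))
  ```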
- The paper deals with perfect 1-error-correcting codes over a finite field with $q$ elements (briefly, $q$-ary 1-perfect codes). We show that the code orthogonal to a $q$-ary non-full-rank 1-perfect code of length $n = (q^{m}-1)/(q-1)$ is a $q$-ary constant-weight code with Hamming weight equal to $q^{m - 1}$, where $m$ is any natural number not less than two. We derive necessary and sufficient conditions for $q$-ary 1-perfect codes of non-full rank. We suggest a generalization of the concatenation construction to the $q$-ary case and construct ternary 1-perfect codes of length 13 and rank 12.
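  A small sanity check of the constant-weight claim in the smallest ternary case (illustrative, not from the paper): the code orthogonal to the ternary Hamming code of length $n = (3^{2}-1)/(3-1) = 4$ is its dual, the ternary simplex code, and every nonzero dual codeword should have Hamming weight $q^{m-1} = 3$.

  ```python
  from itertools import product

  q, m = 3, 2
  # Parity-check matrix of the [4, 2] ternary Hamming code:
  # one column per projective point of PG(m-1, q).
  H = [(1, 0, 1, 1),
       (0, 1, 1, 2)]

  weights = set()
  for coeffs in product(range(q), repeat=m):  # all linear combinations of the rows of H
      word = [sum(c * row[j] for c, row in zip(coeffs, H)) % q for j in range(4)]
      if any(word):
          weights.add(sum(x != 0 for x in word))
  print("weights of nonzero dual codewords:", weights)  # expected: {3}
  ```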
- In this paper, we propose to use a set of simple LSTM-based models with a uniform architecture to recover different kinds of temporal relations from text. Using the shortest dependency path between entities as input, the same architecture is used to extract intra-sentence, cross-sentence, and document-creation-time relations. A "double-checking" technique reverses entity pairs in classification, boosting the recall of positive cases and reducing misclassifications between opposite classes. An efficient pruning algorithm resolves conflicts globally. Evaluated on QA-TempEval (SemEval-2015 Task 5), our proposed technique outperforms state-of-the-art methods by a large margin.
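  A hypothetical sketch of how such a double-checking step might look: classify the pair (e1, e2) and the reversed pair (e2, e1), then reconcile the two predictions. The label set, classifier, and reconciliation rule below are illustrative assumptions, not the paper's exact procedure.

  ```python
  from typing import Callable

  # Temporal labels and their inverses (assumed label set for illustration).
  INVERSE = {"BEFORE": "AFTER", "AFTER": "BEFORE",
             "INCLUDES": "IS_INCLUDED", "IS_INCLUDED": "INCLUDES",
             "NONE": "NONE"}

  def double_check(classify: Callable[[str, str], str], e1: str, e2: str) -> str:
      forward = classify(e1, e2)
      backward = classify(e2, e1)
      if INVERSE[backward] == forward:
          return forward               # both directions agree
      if forward == "NONE":
          return INVERSE[backward]     # recover a positive relation missed in the forward pass
      return forward                   # otherwise keep the forward prediction

  # Toy classifier standing in for the LSTM model.
  toy = lambda a, b: "BEFORE" if (a, b) == ("e1", "e2") else "AFTER"
  print(double_check(toy, "e1", "e2"))  # -> BEFORE
  ```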
- Dec 30 2016 cs.CL arXiv:1612.08994v2 One of the major goals in automated argumentation mining is to uncover the argument structure present in argumentative text. In order to determine this structure, one must understand how the different individual components of the overall argument are linked. General consensus in this field dictates that the argument components form a hierarchy of persuasion, which manifests itself in a tree structure. This work provides the first neural network-based approach to argumentation mining, focusing on the two tasks of extracting links between argument components and classifying the types of argument components. To solve this problem, we propose a joint model based on a Pointer Network architecture. A Pointer Network is appealing for this task for the following reasons: 1) it takes into account the sequential nature of argument components; 2) by construction, it enforces certain properties of the tree structure present in argument relations; 3) the hidden representations can be applied to auxiliary tasks. To extend the contribution of the original Pointer Network model, we construct a joint model that simultaneously learns the type of each argument component while continuing to predict links between argument components. The proposed joint model achieves state-of-the-art results on two separate evaluation corpora, performing far better than a regular Pointer Network model. Our results show that optimizing for both tasks, and adding a fully-connected layer prior to the recurrent neural network input, is crucial for high performance.
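  A minimal PyTorch-style sketch of pointer-based link prediction over argument components (assumed implementation details, not the authors' model): each encoded component attends over all components, and the resulting attention distribution is read as a pointer to the component it links to.

  ```python
  import torch
  import torch.nn as nn

  class PointerLinker(nn.Module):
      def __init__(self, dim: int, hidden: int = 64):
          super().__init__()
          self.encoder = nn.LSTM(dim, hidden, batch_first=True)  # sequential encoding of components
          self.w_query = nn.Linear(hidden, hidden, bias=False)
          self.w_key = nn.Linear(hidden, hidden, bias=False)
          self.v = nn.Linear(hidden, 1, bias=False)

      def forward(self, components: torch.Tensor) -> torch.Tensor:
          # components: (batch, n_components, dim) feature vectors
          enc, _ = self.encoder(components)               # (batch, n, hidden)
          q = self.w_query(enc).unsqueeze(2)              # (batch, n, 1, hidden)
          k = self.w_key(enc).unsqueeze(1)                # (batch, 1, n, hidden)
          scores = self.v(torch.tanh(q + k)).squeeze(-1)  # additive attention, (batch, n, n)
          return scores.log_softmax(dim=-1)               # pointer distribution for each component

  x = torch.randn(2, 5, 16)          # 2 texts, 5 argument components each
  print(PointerLinker(16)(x).shape)  # torch.Size([2, 5, 5])
  ```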
- Dec 13 2016 cs.CL arXiv:1612.03216v2 In this work, we present a new dataset for computational humor, specifically comparative humor ranking, which attempts to eschew the ubiquitous binary approach to humor detection. The dataset consists of tweets that are humorous responses to a given hashtag. We describe the motivation for this new dataset, as well as the collection process, which includes a description of our semi-automated system for data collection. We also present initial experiments on this dataset using both unsupervised and supervised approaches. Our best supervised system achieved 63.7% accuracy, suggesting that this task is much more difficult than comparable humor-detection tasks. Initial experiments indicate that a character-level model is more suitable for this task than a token-level model, likely due to the large number of puns that can be captured by a character-level model.
- Dec 13 2016 cs.CL arXiv:1612.03205v1 Language generation tasks that seek to mimic human ability to use language creatively are difficult to evaluate, since one must consider creativity, style, and other non-trivial aspects of the generated text. The goal of this paper is to develop evaluation methods for one such task, ghostwriting of rap lyrics, and to provide an explicit, quantifiable foundation for the goals and future directions of this task. Ghostwriting must produce text that is similar in style to the emulated artist, yet distinct in content. We develop a novel evaluation methodology that addresses several complementary aspects of this task, and illustrate how such evaluation can be used to meaningfully analyze system performance. We provide a corpus of lyrics for 13 rap artists, annotated for stylistic similarity, which allows us to assess the feasibility of manual evaluation for generated verse.
- In this paper, we propose a construction of full-rank $q$-ary 1-perfect codes over finite fields. This construction is a generalization of the Etzion and Vardy construction of full-rank binary 1-perfect codes (1994). Properties of $i$-components of $q$-ary Hamming codes are investigated, and the construction of full-rank $q$-ary 1-perfect codes is based on these properties. The switching construction of 1-perfect codes is generalized to the $q$-ary case. We give a generalization of the concept of an $i$-component of 1-perfect codes and introduce the concept of $(i, \sigma)$-components of $q$-ary 1-perfect codes. We also present a generalization of the Lindström and Schönheim construction of $q$-ary 1-perfect codes and provide a lower bound on the number of pairwise distinct $q$-ary 1-perfect codes of length $n$.
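  A background sketch for the Hamming codes this construction builds on (illustrative, not the paper's code): assemble the parity-check matrix of the ternary Hamming code of length $13 = (3^{3}-1)/(3-1)$ from one column per projective point and confirm the sphere-packing equality that makes the code 1-perfect.

  ```python
  from itertools import product

  q, m = 3, 3
  # One column per projective point: nonzero vectors whose first nonzero entry is 1.
  columns = [v for v in product(range(q), repeat=m)
             if any(v) and v[next(i for i, x in enumerate(v) if x)] == 1]
  n = len(columns)
  assert n == (q ** m - 1) // (q - 1)  # 13 columns

  codewords = q ** (n - m)             # |C| = q^(n-m) for the Hamming code
  ball = 1 + n * (q - 1)               # size of a radius-1 Hamming ball
  assert codewords * ball == q ** n    # sphere-packing bound met with equality
  print(f"ternary Hamming code: length {n}, {codewords} codewords, 1-perfect")
  ```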
- In this paper, we describe the properties of the $i$-components of Hamming codes. We suggest constructions of the admissible families of components of Hamming codes. It is shown that every $q$-ary code of length $m$ and minimum distance 5 (for $q = 3$ the minimum distance is 3) can be embedded in a $q$-ary 1-perfect code of length $n = (q^{m}-1)/(q-1)$. It is also shown that every binary code of length $m + k$ and minimum distance $3k + 3$ can be embedded in a binary 1-perfect code of length $n = 2^{m}-1$.