- This paper studies how to efficiently learn an optimal latent variable model online from large streaming data. Latent variable models explain observed data in terms of unobserved concepts. They are traditionally studied in the unsupervised learning setting and learned by iterative methods such as EM. Very few online learning algorithms for latent variable models have been developed, and the most popular one is online EM. Though online EM is computationally efficient, it typically converges to a local optimum. In this work, we motivate and develop SpectralFPL, a novel algorithm for learning latent variable models online from streaming data. SpectralFPL is computationally efficient, and we prove that it quickly learns the global optimum under a bag-of-words model by deriving an $O(\sqrt n)$ regret bound. Experimental results also demonstrate a consistent performance improvement over online EM: in both synthetic and real-world experiments, SpectralFPL performs similarly to or better than online EM with optimally tuned parameters.
- Sep 22 2017 cs.CV arXiv:1709.07166v1 We present a semi-automated system for sizing nasal Positive Airway Pressure (PAP) masks, based on a neural network model trained with facial photographs of both PAP mask users and non-users. It demonstrated an accuracy of 72% in correctly sizing a mask and 96% accuracy in sizing to within one mask size group. The semi-automated system performed comparably to sizing from manual measurements taken from the same images, which produced 89% and 100% accuracy, respectively.
- Sep 22 2017 math.CA arXiv:1709.07162v1 We prove the boundedness of a class of tri-linear operators consisting of a quasi piece of the bilinear Hilbert transform whose scale equals or dominates the scale of its linear counterpart. This type of operator is motivated by the tri-linear Hilbert transform and its curved versions.
- This paper presents SceneCut, a novel approach to jointly discover previously unseen objects and non-object surfaces using a single RGB-D image. SceneCut's joint reasoning over scene semantics and geometry allows a robot to detect and segment object instances in complex scenes where modern deep learning-based methods either fail to separate object instances, or fail to detect objects that were not seen during training. SceneCut automatically decomposes a scene into meaningful regions which represent either objects or scene surfaces. The decomposition is guided by a unified energy function over objectness and geometric fitting. We show how this energy function can be optimized efficiently by utilizing hierarchical segmentation trees. Moreover, we leverage a pre-trained convolutional oriented boundary network to predict accurate boundaries from images, which are used to construct high-quality region hierarchies. We evaluate SceneCut on several different indoor environments, and the results show that SceneCut significantly outperforms all existing methods.
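The efficiency of optimizing an energy over a hierarchical segmentation tree comes from a bottom-up dynamic program: each node is either kept as a single region or replaced by the best cuts of its children, whichever costs less. The sketch below illustrates that idea under assumed data structures (a `tree` dict of children lists and a per-node `energy` dict); it is not SceneCut's actual energy function or implementation.

```python
def best_tree_cut(tree, energy, root):
    """Optimal cut of a segmentation tree by bottom-up dynamic programming.

    `tree` maps a node to its list of children (leaves map to []), and
    `energy` maps a node to the cost of keeping it as one region; both
    layouts are illustrative. Returns the minimal total energy and the
    list of nodes chosen as regions.
    """
    def solve(v):
        children = tree[v]
        if not children:
            return energy[v], [v]
        # Cost of splitting v into the best cuts of its subtrees.
        split_cost, regions = 0.0, []
        for c in children:
            c_cost, c_regions = solve(c)
            split_cost += c_cost
            regions += c_regions
        # Keep v whole if that is no worse than splitting it.
        if energy[v] <= split_cost:
            return energy[v], [v]
        return split_cost, regions

    return solve(root)
```

Each node is visited once, so the cut is found in time linear in the tree size.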
- Sep 22 2017 math.DS arXiv:1709.07157v1 We consider the Beris-Edwards model describing nematic liquid crystal dynamics and restrict to a shear flow and spatially homogeneous situation. We analyze the dynamics, focusing on the effect of the flow. We show that in the co-rotational case one has gradient dynamics, up to a periodic eigenframe rotation, while in the non-co-rotational case we identify the short- and long-time regimes of the dynamics. We express these in terms of the physical variables and compare them with the predictions of other models of liquid crystal dynamics.
- In this paper, we propose a new system design framework for large vocabulary automatic chord estimation. Our approach is based on an integration of traditional sequence segmentation processes and deep learning chord classification techniques. We systematically explore the design space of the proposed framework for a range of parameters, namely deep neural nets, network configurations, input feature representations, segment tiling schemes, and training data sizes. Experimental results show that, among the three proposed deep neural nets and a baseline model, the recurrent neural network based system achieves the best average chord quality accuracy, significantly outperforming the other considered models. Furthermore, our bias-variance analysis has identified a glass ceiling as a potential hindrance to future improvements of large vocabulary automatic chord estimation systems.
- Sep 22 2017 math.FA arXiv:1709.07137v1 Motivated by Gordan Žitković's idea of convex compactness for a convex set of a linear topological space, we introduce the concept of $L^0$--convex compactness for an $L^0$--convex set of a topological module over the topological algebra $L^0$, where $L^0$ is the algebra of equivalence classes of real--valued random variables on a given probability space $(\Omega,\mathcal{F},P)$ and endowed with the topology of convergence in probability. This paper continues to develop the theory of $L^0$--convex compactness by establishing various kinds of characterization theorems for $L^0$--convex subsets of a class of important topological modules: random reflexive random normed modules. As applications, we successfully generalize some basic theorems of classical convex optimization and variational inequalities from a convex function on a reflexive Banach space to an $L^0$--convex function on a random reflexive random normed module. Since the usual weak compactness method fails to be valid, we are forced to use the $L^0$--convex compactness method, so that a series of new skills can be discovered. These new skills also provide a new proof for the corresponding classical case and are thus of independent interest.
- In this paper, we propose a novel recurrent neural network architecture for speech separation. This architecture is constructed by unfolding the iterations of a sequential iterative soft-thresholding algorithm (ISTA) that solves the optimization problem for sparse nonnegative matrix factorization (NMF) of spectrograms. We name this network architecture deep recurrent NMF (DR-NMF). The proposed DR-NMF network has three distinct advantages. First, DR-NMF provides better interpretability than other deep architectures, since the weights correspond to NMF model parameters, even after training. This interpretability also provides principled initializations that enable faster training and convergence to better solutions compared to conventional random initialization. Second, like many deep networks, DR-NMF is an order of magnitude faster at test time than NMF, since computation of the network output only requires evaluating a few layers at each time step. Third, when a limited amount of training data is available, DR-NMF exhibits stronger generalization and separation performance compared to sparse NMF and state-of-the-art long short-term memory (LSTM) networks. When a large amount of training data is available, DR-NMF achieves lower yet competitive separation performance compared to LSTM networks.
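The ISTA iteration that DR-NMF unfolds can be sketched in a few lines: each step takes a gradient step on the reconstruction error and applies a shifted ReLU (soft-threshold plus nonnegativity). This is the plain iterative solver for the sparse-NMF activation problem, with illustrative function and parameter names, not the trained network from the paper.

```python
import numpy as np

def ista_nmf_activations(V, W, sparsity=0.1, n_iters=50):
    """Sparse nonnegative activations H for V ~= W @ H via ISTA.

    Each iteration is a gradient step followed by a shifted ReLU;
    unfolding iterations like these as the layers of a recurrent
    network is the idea behind DR-NMF.
    """
    # Step size 1/L, where L is the Lipschitz constant of the gradient
    # (the largest eigenvalue of W^T W).
    L = np.linalg.norm(W, 2) ** 2
    H = np.zeros((W.shape[1], V.shape[1]))
    for _ in range(n_iters):
        grad = W.T @ (W @ H - V)
        # Soft-threshold by sparsity/L, then project onto H >= 0.
        H = np.maximum(0.0, H - grad / L - sparsity / L)
    return H
```

In DR-NMF the per-layer copies of `W` and the thresholds become trainable weights, which is why the trained network remains interpretable as NMF parameters.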
- Sep 22 2017 math.GT arXiv:1709.07123v1 In this paper, we investigate three geometrical invariants of knots: the height, the trunk and the representativity. First, we give a counterexample to the conjecture which states that the height is additive under the connected sum of knots. We also define the minimal height of a knot and give a potential example which has a gap between the height and the minimal height. Next, we show that the representativity is bounded above by half of the trunk. We also define the trunk of a tangle and show that if a knot has an essential tangle decomposition, then the representativity is bounded above by half of the trunk of either of the two tangles. Finally, we remark on the differences among Gabai's thin position, ordered thin position and minimal critical position. We also give an example of a knot which bounds an essential non-orientable spanning surface but has arbitrarily large representativity.
- Sep 22 2017 cs.LG arXiv:1709.07116v1 Aiming to augment generative models with external memory, we interpret the output of a memory module with stochastic addressing as a conditional mixture distribution, where a read operation corresponds to sampling a discrete memory address and retrieving the corresponding content from memory. This perspective allows us to apply variational inference to memory addressing, which enables effective training of the memory module by using the target information to guide memory lookups. Stochastic addressing is particularly well suited to generative models, as it naturally encourages multimodality, a prominent aspect of most high-dimensional datasets. Treating the chosen address as a latent variable also allows us to quantify the amount of information gained with a memory lookup and measure the contribution of the memory module to the generative process. To illustrate the advantages of this approach, we incorporate it into a variational autoencoder and apply the resulting model to the task of generative few-shot learning. The intuition behind this architecture is that the memory module can pick a relevant template from memory while the continuous part of the model concentrates on modeling the remaining variations. We demonstrate empirically that our model is able to identify and access the relevant memory contents even with hundreds of unseen Omniglot characters in memory.
- Sep 22 2017 math.AP arXiv:1709.07115v1 In this paper we consider steady vortex flows for the incompressible Euler equations in a planar bounded domain. By solving a variational problem for the vorticity, we construct steady double vortex patches with opposite signs concentrating at a strict local minimum point of the Kirchhoff-Routh function with $k=2$. Moreover, we show that such steady solutions are in fact local maximizers of the kinetic energy among isovortical patches, which relates stability to uniqueness.
- A latent-variable model is introduced for text matching, inferring sentence representations by jointly optimizing generative and discriminative objectives. To alleviate typical optimization challenges in latent-variable models for text, we employ deconvolutional networks as the sequence decoder (generator), providing learned latent codes with more semantic information and better generalization. Our model, trained in an unsupervised manner, yields stronger empirical predictive performance than a decoder based on Long Short-Term Memory (LSTM), with fewer parameters and considerably faster training. Further, we apply it to text sequence-matching problems. The proposed model significantly outperforms several strong sentence-encoding baselines, especially in the semi-supervised setting.
- Sep 22 2017 cs.CL arXiv:1709.07104v1 This paper presents an empirical study of two machine translation-based approaches to the Vietnamese diacritic restoration problem: phrase-based and neural-based machine translation models. This is the first work to apply a neural-based machine translation method to this problem, and it gives a thorough comparison with the phrase-based method, the current state of the art for this problem. On a large dataset, the phrase-based approach has an accuracy of 97.32%, while that of the neural-based approach is 96.15%. Although the neural-based method has a slightly lower accuracy, it is about twice as fast as the phrase-based method in terms of inference speed. Moreover, the neural-based method leaves much room for future improvement, such as incorporating pre-trained word embeddings and collecting more training data.
- Sep 22 2017 cs.CR arXiv:1709.07102v1 Modern malware families often rely on domain-generation algorithms (DGAs) to determine rendezvous points with their command-and-control servers. Traditional defence strategies (such as blacklisting domains or IP addresses) are inadequate against such techniques due to the large and continuously changing list of domains produced by these algorithms. This paper demonstrates that a machine learning approach based on recurrent neural networks is able to detect domain names generated by DGAs with high precision. The neural models are estimated on a large training set of domains generated by various malware families. Experimental results show that this data-driven approach can detect malware-generated domain names with an F_1 score of 0.971. Put differently, the model can automatically detect 93% of malware-generated domain names at a false positive rate of 1:100.
- In recent years, fuzz testing has proven itself to be one of the most effective techniques for finding correctness bugs and security vulnerabilities in practice. One particular fuzz testing tool, American Fuzzy Lop (AFL), has become popular thanks to its ease of use and bug-finding power. However, AFL remains limited in the depth of program coverage it achieves, in particular because it does not consider which parts of program inputs should not be mutated in order to maintain deep program coverage. We propose an approach, FairFuzz, that helps alleviate this limitation in two key steps. First, FairFuzz automatically prioritizes inputs exercising rare parts of the program under test. Second, it automatically adjusts the mutation of inputs so that the mutated inputs are more likely to exercise these same rare parts of the program. We evaluate FairFuzz on real-world programs against state-of-the-art versions of AFL, thoroughly repeating experiments to obtain good measures of variability. We find that on certain benchmarks FairFuzz shows significant coverage increases after 24 hours compared to state-of-the-art versions of AFL, while on others it achieves high program coverage at a significantly faster rate.
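The first step, prioritizing inputs that exercise rare program parts, can be sketched as picking the queue entry whose rarest covered branch has the lowest global hit count. The data layout and names below are illustrative, not AFL's or FairFuzz's internals.

```python
def pick_rare_input(queue, branch_hits):
    """Select the input exercising the globally rarest branch.

    `queue` maps input id -> set of covered branch ids;
    `branch_hits` maps branch id -> total hit count across the
    campaign. Both layouts are assumptions for this sketch.
    """
    def rarity(inp):
        # An input is as "rare" as the least-hit branch it covers.
        return min(branch_hits[b] for b in queue[inp])

    return min(queue, key=rarity)
```

A fuzzer would then bias mutation away from the input bytes needed to keep hitting that rare branch, which is FairFuzz's second step.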
- Topological Data Analysis (TDA) is a recent and growing branch of statistics devoted to the study of the shape of data. In this work we investigate the predictive power of TDA in the context of supervised learning. Since topological summaries, most notably the Persistence Diagram, are typically defined in complex spaces, we adopt a kernel approach to translate them into more familiar vector spaces. We define a topological exponential kernel, we characterize it, and we show that, despite not being positive semi-definite, it can be successfully used in regression and classification tasks.
- Operationalizing machine learning based security detections is extremely challenging, especially in a continuously evolving cloud environment. Conventional anomaly detection does not produce satisfactory results for analysts that are investigating security incidents in the cloud. Model evaluation alone presents its own set of problems due to a lack of benchmark datasets. When deploying these detections, we must deal with model compliance, localization, and data silo issues, among many others. We pose the problem of "attack disruption" as a way forward in the security data science space. In this paper, we describe the framework, challenges, and open questions surrounding the successful operationalization of machine learning based security detections in a cloud environment and provide some insights on how we have addressed them.
- In this paper we design and evaluate a Deep-Reinforcement Learning agent that optimizes routing. Our agent adapts automatically to current traffic conditions and proposes tailored configurations that attempt to minimize the network delay. Experiments show very promising performance. Moreover, this approach provides important operational advantages with respect to traditional optimization algorithms.
- We consider image classification with estimated depth. This problem falls into the domain of transfer learning, since we use a model trained on a set of depth images to generate depth maps (additional features) for use in another classification problem on a disjoint set of images. It is challenging because no direct depth information is provided. Though depth estimation has been well studied, no prior work has attempted to aid image classification with estimated depth. Therefore, we present a way of transferring domain knowledge from depth estimation to a separate image classification task over a disjoint set of training and test data. We build an RGBD dataset based on an RGB dataset and perform image classification on it, then evaluate the performance of neural networks on the RGBD dataset compared to the RGB dataset. In our experiments, the benefit is significant for both shallow and deep networks: estimated depth improves ResNet-20 by 0.55% and ResNet-56 by 0.53%. Our code and dataset are publicly available.
- Sep 22 2017 math.AP arXiv:1709.07072v1 We study the regularity of the solution of the double obstacle problem for fully nonlinear parabolic and elliptic operators. We show that when the obstacles are sufficiently regular, the solution is $C^{1,\alpha}$ in the interior in both the parabolic and the elliptic cases.
- Sep 22 2017 cs.CV arXiv:1709.07065v1 In this paper, we propose a pipeline for multi-target visual tracking in a multi-camera system. For the multi-camera tracking problem, efficient data association across cameras and, at the same time, across frames becomes more important than in single-camera tracking. However, most multi-camera tracking algorithms emphasize single-camera, across-frame data association. Thus, in our work, we model the tracking problem as a global graph and adopt the Generalized Maximum Multi Clique optimization problem as our core algorithm, taking both across-frame and across-camera data correlation into account together. Furthermore, in order to compute good similarity scores as the input to our graph model, we extract both appearance and dynamic motion similarities. For the appearance feature, the Local Maximal Occurrence Representation (LOMO) feature extraction algorithm for ReID is used. To capture dynamic information, we build a Hankel matrix for each tracklet of a target and apply rank estimation to it with the Iterative Hankel Total Least Squares (IHTLS) algorithm. We evaluate our tracker on the challenging Terrace Sequences from EPFL CVLAB as well as the recently published Duke MTMC dataset.
- Sep 22 2017 stat.ME arXiv:1709.07064v1 The Gaussian process is a standard tool for building emulators for both deterministic and stochastic computer experiments. However, application of Gaussian process models is greatly limited in practice, particularly for the large-scale and many-input computer experiments that have become typical. We propose a multi-resolution functional ANOVA model as a computationally feasible emulation alternative. More generally, this model can be used for large-scale and many-input non-linear regression problems. An overlapping group lasso approach is used for estimation, ensuring computational feasibility in a large-scale and many-input setting. New results on consistency and inference for the (potentially overlapping) group lasso in a high-dimensional setting are developed and applied to the proposed multi-resolution functional ANOVA model. Importantly, these results allow us to quantify the uncertainty in our predictions. Numerical examples demonstrate that the proposed model enjoys marked computational advantages: its data capabilities, in terms of both sample size and dimension, meet or exceed those of the best available emulation tools, while matching or exceeding their emulation accuracy.
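The group lasso estimation step relies on a penalty whose proximal operator is blockwise soft-thresholding: each group's coefficient block is shrunk toward zero, and whole groups whose norm falls below the threshold are zeroed out, which is how entire blocks of basis functions get removed at once. A minimal sketch for non-overlapping groups with illustrative names (the overlapping case is typically handled by duplicating variables):

```python
import numpy as np

def group_soft_threshold(beta, groups, lam):
    """Proximal operator of the penalty lam * sum_g ||beta_g||_2.

    `groups` is a list of index lists partitioning the coefficients;
    groups with norm <= lam are zeroed out entirely, others are shrunk.
    """
    out = beta.astype(float).copy()
    for g in groups:
        norm = np.linalg.norm(beta[g])
        out[g] = 0.0 if norm <= lam else out[g] * (1.0 - lam / norm)
    return out
```

Iterating this operator after gradient steps on the squared loss gives a proximal-gradient solver for the group lasso objective.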
- Sep 22 2017 cs.RO arXiv:1709.07051v1 We present a distributed algorithm for a swarm of active particles to camouflage in an environment. Each particle is equipped with sensing, computation and communication, allowing the system to take color and gradient information from the environment and self-organize into an appropriate pattern. Current artificial camouflage systems are either limited to static patterns, which are adapted to specific environments, or rely on back-projection, which depends on the viewer's point of view. Inspired by the camouflage abilities of the cuttlefish, we propose a distributed estimation and pattern formation algorithm that allows the swarm to quickly adapt to different environments. We present convergence results both in simulation and on a swarm of miniature robots ("Droplets") for a variety of patterns.
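A distributed estimation step of this kind can be illustrated with generic averaging consensus: each particle repeatedly nudges its local color estimate toward the mean of its neighbors' estimates, using only local communication. The one-channel update rule below is a stand-in sketch; the abstract does not specify the robots' actual rule.

```python
def consensus_step(colors, neighbors, alpha=0.3):
    """One synchronous round of distributed averaging.

    `colors` is a list of scalar color estimates, `neighbors` maps a
    particle index to its neighbor indices. On a connected regular
    graph, repeating this drives every estimate to the swarm-wide
    average, so all particles agree on the background color.
    """
    new = []
    for i, c in enumerate(colors):
        nbr_mean = sum(colors[j] for j in neighbors[i]) / len(neighbors[i])
        new.append((1 - alpha) * c + alpha * nbr_mean)
    return new
```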
- Sep 22 2017 hep-ph arXiv:1709.07039v1 The solution to the Strong CP problem is analysed within the Minimal Flavour Violation (MFV) context. An Abelian factor of the complete flavour symmetry of the fermionic kinetic terms may play the role of the Peccei-Quinn symmetry in traditional axion models. Its spontaneous breaking, due to the addition of a complex scalar field to the Standard Model scalar spectrum, generates the MFV axion, which may redefine away the QCD theta parameter. It differs from the traditional QCD axion for its couplings that are governed by the fermion charges under the axial Abelian symmetry. It is also distinct from the so-called Axiflavon, as the MFV axion does not describe flavour violation, while it does induce flavour non-universality effects. The MFV axion phenomenology is discussed considering astrophysical, collider and flavour data.
- Sep 22 2017 cs.RO arXiv:1709.07033v1 We investigate robotic assistants for dressing that can anticipate the motion of the person who is being helped. To this end, we use reinforcement learning to create models of human behavior during assistance with dressing. To explore this kind of interaction, we assume that the robot presents an open sleeve of a hospital gown to a person, and that the person moves their arm into the sleeve. The controller that models the person's behavior is given the position of the end of the sleeve and information about contact between the person's hand and the fabric of the gown. We simulate this system with a human torso model that has realistic joint ranges, a simple robot gripper, and a physics-based cloth model for the gown. Through reinforcement learning (specifically the TRPO algorithm) the system creates a model of human behavior that is capable of placing the arm into the sleeve. We aim to model what humans are capable of doing, rather than what they typically do. We demonstrate successfully trained human behaviors for three robot-assisted dressing strategies: 1) the robot gripper holds the sleeve motionless, 2) the gripper moves the sleeve linearly towards the person from the front, and 3) the gripper moves the sleeve linearly from the side.
- The goal of this paper is to present an end-to-end, data-driven framework to control Autonomous Mobility-on-Demand systems (AMoD, i.e. fleets of self-driving vehicles). We first model the AMoD system using a time-expanded network, and present a formulation that computes the optimal rebalancing strategy (i.e., preemptive repositioning) and the minimum feasible fleet size for a given travel demand. Then, we adapt this formulation to devise a Model Predictive Control (MPC) algorithm that leverages short-term demand forecasts based on historical data to compute rebalancing strategies. We test the end-to-end performance of this controller with a state-of-the-art LSTM neural network to predict customer demand and real customer data from DiDi Chuxing: we show that this approach scales very well for large systems (indeed, the computational complexity of the MPC algorithm does not depend on the number of customers and of vehicles in the system) and outperforms state-of-the-art rebalancing strategies by reducing the mean customer wait time by up to 89.6%.
- Sep 22 2017 cs.DL astro-ph.SR arXiv:1709.07020v1 The arXiv is the most popular preprint repository in the world. Since its inception in 1991, the arXiv has allowed researchers to freely share publication-ready articles prior to formal peer review. The growth and popularity of the arXiv emerged as a result of new technologies that made document creation and dissemination easy, and cultural practices where collaboration and data sharing were dominant. The arXiv represents a unique place in the history of research communication and the Web itself; however, it has arguably changed very little since its creation. Here we look at the strengths and weaknesses of the arXiv in an effort to identify what improvements can be made based on new technologies not previously available. Based on this, we argue that a modern arXiv might in fact not look at all like the arXiv of today.
- Sep 22 2017 cond-mat.str-el cond-mat.mes-hall arXiv:1709.07013v1 Anyons are exotic quasi-particles with fractional charge that can emerge as fundamental excitations of strongly interacting topological quantum phases of matter. Unlike ordinary fermions and bosons, they may obey non-abelian statistics, a property that would help realize fault-tolerant quantum computation. Non-abelian anyons have long been predicted to occur in the fractional quantum Hall (FQH) phases that form in two-dimensional electron gases (2DEG) in the presence of a large magnetic field, such as the $\nu=\tfrac{5}{2}$ FQH state. However, direct experimental evidence of anyons and tests that can distinguish between abelian and non-abelian quantum ground states with such excitations have remained elusive. Here we propose a new experimental approach to directly visualize the structure of interacting electronic states of FQH states with the scanning tunneling microscope (STM). Our theoretical calculations show how spectroscopy mapping with the STM near individual impurity defects can be used to image fractional statistics in FQH states, identifying unique signatures in such measurements that can distinguish different proposed ground states. The presence of locally trapped anyons should leave distinct signatures in STM spectroscopic maps and enable a new approach to directly detect, and perhaps ultimately manipulate, these exotic quasi-particles.
- It is demonstrated that fermionic/bosonic symmetry-protected topological (SPT) phases across different dimensions and symmetry classes can be organized using geometric constructions that increase dimensions and symmetry-forgetting maps that change symmetry groups. Specifically, it is shown that the interacting classifications of SPT phases with and without glide symmetry fit into a short exact sequence, so that the classification with glide is constrained to be a direct sum of cyclic groups of order 2 or 4. Applied to fermionic SPT phases in the Wigner-Dyson class AII, this implies that the complete interacting classification in the presence of glide is ${\mathbb Z}_4{\oplus}{\mathbb Z}_2{\oplus}{\mathbb Z}_2$ in 3 dimensions. In particular, the hourglass-fermion phase recently realized in the band insulator KHgSb must be robust to interactions. Generalizations to spatiotemporal glide symmetries are discussed.
- We study the problem of learning description logic (DL) ontologies in Angluin et al.'s framework of exact learning via queries. We admit membership queries ("is a given subsumption entailed by the target ontology?") and equivalence queries ("is a given ontology equivalent to the target ontology?"). We present three main results: (1) ontologies formulated in (two relevant versions of) the description logic DL-Lite can be learned with polynomially many queries of polynomial size; (2) this is not the case for ontologies formulated in the description logic EL, even when only acyclic ontologies are admitted; and (3) ontologies formulated in a fragment of EL related to the web ontology language OWL 2 RL can be learned in polynomial time. We also show that neither membership nor equivalence queries alone are sufficient in cases (1) and (3).
- Although deep Convolutional Neural Networks (CNNs) have shown better performance in various machine learning tasks, their application is accompanied by a significant increase in storage and computation. Among CNN simplification techniques, parameter pruning is a promising approach which aims at reducing the number of weights of various layers without severely reducing the original accuracy. In this paper, we propose a novel progressive parameter pruning method, named Structured Probabilistic Pruning (SPP), which efficiently prunes weights of convolutional layers in a probabilistic manner. Unlike existing deterministic pruning approaches, in which the pruned weights of a well-trained model are permanently eliminated, SPP utilizes the relative importance of weights during training iterations, which makes the pruning procedure more accurate by leveraging the accumulated weight importance. Specifically, we introduce an effective weight competition mechanism to emphasize the important weights and gradually undermine the unimportant ones. Experiments indicate that our proposed method achieves superior performance on ConvNet and AlexNet compared with existing pruning methods. Our pruned AlexNet achieves a 4.0 $\sim$ 8.9x (5.8x on average) layer-wise speedup in convolutional layers with only a 1.3\% top-5 error increase on the ImageNet-2012 validation dataset. We also demonstrate the effectiveness of our method in transfer learning scenarios using AlexNet.
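The core idea of pruning probabilistically by accumulated importance can be sketched as sampling a mask in which low-importance weights are pruned with high probability, while every weight keeps some chance of surviving a given iteration and can therefore recover later. The linear rank-based probability schedule below is illustrative, not SPP's actual schedule.

```python
import numpy as np

def probabilistic_prune(weights, importance, prune_frac, rng):
    """Sample a pruning mask from accumulated weight importance.

    Lower accumulated importance means a higher pruning probability,
    but (unlike deterministic magnitude pruning) no weight is pruned
    with certainty, so mistakes can be undone as training continues.
    """
    n = len(weights)
    ranks = np.argsort(np.argsort(importance))  # 0 = least important
    p = (n - ranks) / (n + 1.0)                 # decays with importance rank
    p *= prune_frac * n / p.sum()               # expected pruned count = prune_frac * n
    p = np.clip(p, 0.0, 1.0)
    keep = rng.random(n) >= p
    return weights * keep, keep
```

In training, `importance` would be accumulated over iterations (e.g. from weight magnitudes), re-sampling the mask each pruning step.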
- Sep 22 2017 astro-ph.IM astro-ph.SR arXiv:1709.06993v1 PICARD is a scientific space mission dedicated to studying the origin of solar variability. A French micro-satellite will carry an imaging telescope for measuring the solar diameter, limb shape and solar oscillations, and two radiometers for measuring the total solar irradiance and the irradiance in five spectral domains, from ultraviolet to infrared. The mission is planned to be launched in 2009 for a 3-year duration. This article presents the PICARD Payload Data Centre, whose role is to collect, process and distribute the PICARD data. The Payload Data Centre is a joint project between laboratories, space agencies and industry. The Belgian scientific policy office funds the industrial development and future operations under the European Space Agency programme. The development is carried out by the SPACEBEL company. The Belgian operations centre is in charge of operating the PICARD Payload Data Centre. The French space agency leads the development in partnership with the French scientific research centre, which is responsible for providing all the scientific algorithms. The architecture of the PICARD Payload Data Centre (software and hardware) is presented. The software system is based on a Service Oriented Architecture. The host structure provides basic functions such as data management, task scheduling and system supervision, including a graphical interface used by the operator to interact with the system. The other functions are mission-specific: data exchange (acquisition, distribution), data processing (scientific and non-scientific) and payload management (programming, monitoring). The PICARD Payload Data Centre is planned to be operated for 5 years. All the data will be stored in a dedicated data centre after this period.
- Social networks and interactions in social media involve both positive and negative relationships. Signed graphs capture both types of relationships: positive edges correspond to pairs of "friends", and negative edges to pairs of "foes". The edge sign prediction problem, that aims to predict whether an interaction between a pair of nodes will be positive or negative, is an important graph mining task for which many heuristics have recently been proposed [Leskovec 2010]. We model the edge sign prediction problem as follows: we are allowed to query any pair of nodes whether they belong to the same cluster or not, but the answer to the query is corrupted with some probability $0<q<\frac{1}{2}$. Let $\delta=1-2q$ be the bias. We provide an algorithm that recovers all signs correctly with high probability in the presence of noise for any constant gap $\delta$ with $O(\frac{n\log n}{\delta^4})$ queries. Our algorithm uses breadth first search as its main algorithmic primitive. A byproduct of our proposed learning algorithm is the use of $s-t$ paths as an informative feature to predict the sign of the edge $(s,t)$. As a heuristic, we use edge disjoint $s-t$ paths of short length as a feature for predicting edge signs in real-world signed networks. Our findings suggest that the use of paths improves the classification accuracy, especially for pairs of nodes with no common neighbors.
- Sep 22 2017 cs.CY arXiv:1709.07387v1 Big data analytics has an extremely significant impact on many areas across businesses and industries, including hospitality. This study aims to guide information technology (IT) professionals in hospitality on their big data expedition. In particular, the purpose of this study is to identify the maturity stage of big data in the hospitality industry in an objective way, so that hotels can understand their progress and, through the scores they receive from the survey, realize what it will take to reach the next stage of big data maturity.