results for au:Debbio_L in:hep-ph

- We present a determination of the strong coupling constant $\alpha_s(m_Z)$ based on the NNPDF3.1 determination of parton distributions, which for the first time includes constraints from jet production, top-quark pair differential distributions, and the $Z$ $p_T$ distributions using exact NNLO theory. Our result is based on a novel extension of the NNPDF methodology - the correlated replica method - which allows for a simultaneous determination of $\alpha_s$ and the PDFs with all correlations between them fully taken into account. We study in detail all relevant sources of experimental, methodological and theoretical uncertainty. At NNLO we find $\alpha_s(m_Z) = 0.1185 \pm 0.0005^\text{(exp)}\pm 0.0001^\text{(meth)}$, showing that methodological uncertainties are negligible. We conservatively estimate the theoretical uncertainty due to missing higher order QCD corrections (N$^3$LO and beyond) from half the shift between the NLO and NNLO $\alpha_s$ values, finding $\Delta\alpha^{\rm th}_s =0.0011$.
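The correlated replica method itself is not spelled out in the abstract; as a rough illustration of the general strategy (evaluate chi^2 as a function of alpha_s for each replica, fit a parabola, and read off the distribution of minima), here is a toy sketch in which every number is synthetic and chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical chi^2 profiles: for each pseudodata replica we imagine a
# parabolic chi^2(alpha_s) whose minimum fluctuates replica by replica.
# The grid, widths and central values are invented, not from the paper.
alpha_grid = np.linspace(0.110, 0.125, 31)
n_replicas = 100
true_minima = rng.normal(0.1185, 0.0005, size=n_replicas)

best_fit = []
for a0 in true_minima:
    chi2 = 1.0 + ((alpha_grid - a0) / 0.002) ** 2  # synthetic parabola
    # Fit a quadratic in alpha_s and take its analytic minimum, as one
    # would do with real chi^2 values evaluated on an alpha_s grid.
    c2, c1, c0 = np.polyfit(alpha_grid, chi2, 2)
    best_fit.append(-c1 / (2 * c2))

best_fit = np.array(best_fit)
print(f"alpha_s = {best_fit.mean():.4f} +- {best_fit.std():.4f}")
```

The spread of the per-replica minima then plays the role of the experimental uncertainty on alpha_s, with PDF-alpha_s correlations built in because each minimum comes from its own replica.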
- In the framework of quantum chromodynamics (QCD), parton distribution functions (PDFs) quantify how the momentum and spin of a hadron are divided among its quark and gluon constituents. Two main approaches exist to determine PDFs. The first approach, based on QCD factorization theorems, realizes a QCD analysis of a suitable set of hard-scattering measurements, often using a variety of hadronic observables. The second approach, based on first-principle operator definitions of PDFs, uses lattice QCD to compute directly some PDF-related quantities, such as their moments. Motivated by recent progress in both approaches, in this document we present an overview of lattice-QCD and global-analysis techniques used to determine unpolarized and polarized proton PDFs and their moments. We provide benchmark numbers to validate present and future lattice-QCD calculations and we illustrate how they could be used to reduce the PDF uncertainties in current unpolarized and polarized global analyses. This document represents a first step towards establishing a common language between the two communities, to foster dialogue and to further improve our knowledge of PDFs.
- We present a new set of parton distributions, NNPDF3.1, which updates NNPDF3.0, the first global set of PDFs determined using a methodology validated by a closure test. The update is motivated by recent progress in methodology and available data, and involves both. On the methodological side, we now parametrize and determine the charm PDF alongside the light-quark and gluon PDFs, thereby increasing from seven to eight the number of independent PDFs. On the data side, we now include the D0 electron and muon W asymmetries from the final Tevatron dataset, the complete LHCb measurements of W and Z production in the forward region at 7 and 8 TeV, and new ATLAS and CMS measurements of inclusive jet and electroweak boson production. We also include for the first time top-quark pair differential distributions and the transverse momentum of the Z boson from ATLAS and CMS. We investigate the impact of parametrizing charm and provide evidence that the accuracy and stability of the PDFs are thereby improved. We study the impact of the new data by producing a variety of determinations based on reduced datasets. We find that both improvements have a significant impact on the PDFs, with some substantial reductions in uncertainties, but with the new PDFs generally in agreement with the previous set at the one sigma level. The most significant changes are seen in the light-quark flavor separation, and in increased precision in the determination of the gluon. We explore the implications of NNPDF3.1 for LHC phenomenology at Run II, compare with recent LHC measurements at 13 TeV, provide updated predictions for Higgs production cross-sections and discuss the strangeness and charm content of the proton in light of our improved dataset and methodology. The NNPDF3.1 PDFs are delivered for the first time both as Hessian sets, and as optimized Monte Carlo sets with a compressed number of replicas.
- We investigate a recently proposed UV-complete composite Higgs scenario in the light of the first LHC runs. The model is based on a $SU(4)$ gauge group with global flavour symmetry breaking $SU(5) \to SO(5)$, giving rise to pseudo Nambu-Goldstone bosons in addition to the Higgs doublet. This includes a real and a complex electroweak triplet with exotic electric charges. Including these, as well as constraints on other exotic states, we show that LHC measurements are not yet sensitive enough to significantly constrain the model's low energy constants. The Higgs potential is described by two parameters which are on the one hand constrained by the LHC measurement of the Higgs mass and Higgs decay channels and on the other hand can be computed from correlation functions in the UV-complete theory. Hence, to exclude the model, at least one of these constants needs to be determined; to validate the Higgs potential, both need to be reproduced by the UV theory. Due to its UV-formulation, a certain number of low energy constants can be computed from first-principles numerical simulations of the theory formulated on a lattice, which can help in establishing the validity of this model. We assess the potential impact of lattice calculations for phenomenological studies, as a preliminary step towards Monte Carlo simulations.
- Several UV-complete models of physics beyond the Standard Model are currently under scrutiny, their low-energy dynamics being compared with the experimental data from the LHC. Lattice simulations can play a role in these studies by providing first-principles computations of the low-energy constants that describe this low-energy dynamics. In this work, we study in detail a specific model recently proposed by Ferretti, and discuss the potential impact of lattice calculations.
- We present results for the decay constants of the $D$ and $D_s$ mesons computed in lattice QCD with $N_f=2+1$ dynamical flavours. The simulations are based on RBC/UKQCD's domain wall ensembles with both physical and unphysical light-quark masses and lattice spacings in the range 0.11--0.07$\,$fm. We employ the domain wall discretisation for all valence quarks. The results in the continuum limit are $f_D=208.7(2.8)_\mathrm{stat}\left(^{+2.1}_{-1.8}\right)_\mathrm{sys}\,\mathrm{MeV}$ and $f_{D_{s}}=246.4(1.3)_\mathrm{stat}\left(^{+1.3}_{-1.9}\right)_\mathrm{sys}\,\mathrm{MeV}$ and $f_{D_s}/f_D=1.1667(77)_\mathrm{stat}\left(^{+57}_{-43}\right)_\mathrm{sys}$. Using these results in a Standard Model analysis we compute the predictions $|V_{cd}|=0.2185(50)_\mathrm{exp}\left(^{+35}_{-37}\right)_\mathrm{lat}$ and $|V_{cs}|=1.011(16)_\mathrm{exp}\left(^{+4}_{-9}\right)_\mathrm{lat}$ for the CKM matrix elements.
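The CKM extraction described above relies on the standard Standard-Model leptonic decay rate, which ties the measured $D_{(s)}\to\ell\nu$ widths to the lattice decay constants (the formula is not quoted in the abstract; it is shown here for orientation):

```latex
\Gamma(D_{(s)}^+ \to \ell^+ \nu_\ell)
  = \frac{G_F^2}{8\pi}\, f_{D_{(s)}}^2\, |V_{cd(s)}|^2\,
    m_\ell^2\, m_{D_{(s)}} \left(1 - \frac{m_\ell^2}{m_{D_{(s)}}^2}\right)^{2}
```

Given an experimental width and the lattice value of $f_{D_{(s)}}$, one solves this relation for $|V_{cd}|$ or $|V_{cs}|$, which is how the experimental and lattice uncertainties quoted above enter separately.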
- We present results for the leading hadronic contribution to the muon anomalous magnetic moment due to strange quark-connected vacuum polarisation effects. Simulations were performed using RBC--UKQCD's $N_f=2+1$ domain wall fermion ensembles with physical light sea quark masses at two lattice spacings. We consider a large number of analysis scenarios in order to obtain solid estimates for residual systematic effects. Our final result in the continuum limit is $a_\mu^{(2)\,{\rm had},\,s}=53.1(9)\left(^{+1}_{-3}\right)\times10^{-10}$.
- Sep 28 2015 hep-ph arXiv:1509.07853v2: Using elementary considerations of Lorentz invariance, Bose symmetry and BRST invariance, we argue why the decay of a massive color-octet vector state into a pair of on-shell massless gluons is possible in a non-Abelian SU(N) Yang-Mills theory, constrain the form of the amplitude of the process, and offer a simple understanding of these results in terms of effective-action operators.
- We present steps towards the computation of the leading-order hadronic contribution to the muon anomalous magnetic moment on RBC/UKQCD physical point DWF ensembles. We discuss several methods for controlling and reducing uncertainties associated to the determination of the HVP form factor.
- We present NNPDF3.0, the first set of parton distribution functions (PDFs) determined with a methodology validated by a closure test. NNPDF3.0 uses a global dataset including HERA-II deep-inelastic inclusive cross-sections, the combined HERA charm data, jet production from ATLAS and CMS, vector boson rapidity and transverse momentum distributions from ATLAS, CMS and LHCb, W+c data from CMS and top quark pair production total cross sections from ATLAS and CMS. Results are based on LO, NLO and NNLO QCD theory and also include electroweak corrections. To validate our methodology, we show that PDFs determined from pseudo-data generated from a known underlying law correctly reproduce the statistical distributions expected on the basis of the assumed experimental uncertainties. This closure test ensures that our methodological uncertainties are negligible in comparison to the generic theoretical and experimental uncertainties of PDF determination. This enables us to determine with confidence PDFs at different perturbative orders and using a variety of experimental datasets ranging from HERA-only up to a global set including the latest LHC results, all using precisely the same validated methodology. We explore some of the phenomenological implications of our results for the upcoming 13 TeV Run of the LHC, in particular for Higgs production cross-sections.
- The renormalized next-to-leading-order (NLO) chiral low-energy constant, $L_{10}^r$, is determined in a complete next-to-next-to-leading-order (NNLO) analysis, using a combination of lattice and continuum data for the flavor $ud$ $V-A$ correlator and results from a recent chiral sum-rule analysis of the flavor-breaking combination of $ud$ and $us$ $V-A$ correlator differences. The analysis also fixes two combinations of NNLO low-energy constants, the determination of which is crucial to the precision achieved for $L_{10}^r$. Using the results of the flavor-breaking chiral $V-A$ sum rule obtained with current versions of the strange hadronic $\tau$ branching fractions as input, we find $L_{10}^r(m_\rho )\, =\, -0.00346(32)$. This result represents the first NNLO determination of $L_{10}^r$ having all inputs under full theoretical and/or experimental control, and the best current precision for this quantity.
- MCgrid is a software package that provides access to the APPLgrid interpolation tool for Monte Carlo event generator codes, allowing for fast and flexible variations of scales, coupling parameters and PDFs in cutting-edge leading-order and next-to-leading-order QCD calculations. This is achieved by providing additional tools to the Rivet analysis system for the construction of MCgrid-enhanced Rivet analyses. The interface is based around a one-to-one correspondence between a Rivet histogram class and a wrapper for an APPLgrid interpolation grid. The Rivet system provides all of the analysis tools required to project a Monte Carlo weight upon an observable bin, and the MCgrid package provides the correct conversion of the event weight to an APPLgrid fill call. MCgrid has been tested and designed for use with the SHERPA event generator; however, as with Rivet, the package is suitable for use with any code which can produce events in the HepMC event record format.
- Recent analyses of flavor-breaking hadronic-$\tau$-decay-based sum rules produce values of $\vert V_{us}\vert$ $\sim 3\sigma$ low compared to 3-family unitarity expectations. An unresolved systematic issue is the significant variation in $\vert V_{us}\vert$ produced by different prescriptions for treating the slowly converging $D=2$ OPE series. We investigate the reliability of these prescriptions using lattice data for various flavor-breaking correlators and show the fixed-scale prescription is clearly preferred. Preliminary updates of the conventional $\tau$-based, and related mixed $\tau$-electroproduction-data-based, sum rule analyses incorporating B-factory results for low-multiplicity strange $\tau$ decay mode distributions are then performed. Use of the preferred FOPT $D=2$ OPE prescription is shown to significantly reduce the discrepancy between 3-family unitarity expectations and the sum rule results.
- A combination of lattice and continuum data for the light-quark V-A correlator, supplemented by results from a chiral sum-rule analysis of the flavor-breaking flavor $ud$-$us$ V-A correlator difference, is shown to make possible a high-precision NNLO determination of the renormalized NLO chiral low-energy constant $L_{10}^r$. Key to this determination is the ability to simultaneously fix the two combinations of NNLO low-energy constants also entering the analysis. With current versions of the strange hadronic $\tau$ branching fractions required as input to the flavor-breaking V-A sum rule, we find $L_{10}^r(m_\rho ) = -0.00346(29)$. This represents both the best current precision for $L_{10}^r$, and the first NNLO determination having all errors under full control.
- Aug 05 2013 hep-ph arXiv:1308.0598v2: We present a set of parton distribution functions (PDFs), based on the NNPDF2.3 set, which includes a photon PDF, and QED contributions to parton evolution. We describe the implementation of the combined QCD+QED evolution in the NNPDF framework. We then provide a first determination of the full set of PDFs based on deep-inelastic scattering data and LHC data for W and Z/gamma* Drell-Yan production, using leading-order QED and NLO or NNLO QCD. We compare the ensuing NNPDF2.3QED PDF set to the older MRST2004QED set. We perform a preliminary investigation of the phenomenological implications of NNPDF2.3QED: specifically, photon-induced corrections to direct photon production at HERA, and high-mass dilepton and W pair production at the LHC.
- The scaling laws in an infrared (IR) conformal theory are dictated by the critical exponents of relevant operators. We have investigated these scaling laws at leading order in two previous papers. In this work we investigate further consequences of the scaling laws, trying to identify potential signatures that could be studied by lattice simulations. From the first derivative of the form factor we derive the behaviour of the mean charge radius of the hadronic states in the theory. We obtain $\langle r_H^2 \rangle \sim m^{-2/(1+\gamma^*_m)}$, which is consistent with $\langle r_H^2 \rangle \sim 1/M_H^{2}$. The mean charge radius can be used as an alternative observable to assess the size of the physical states, and hence finite size effects, in numerical simulations. Furthermore, we discuss the behaviour of specific field correlators in coordinate space for the case of conformal, scale-invariant, and confining theories, making use of selection rules in scaling dimensions and spin. We compute the scaling corrections to correlation functions by linearizing the renormalization group equations. We find that these corrections are potentially large close to the edge of the conformal window. As an application we compute the scaling correction to the formula $M_H \sim m^{1/(1+\gamma_m^*)}$ directly through its associated correlator as well as through the trace anomaly. The two computations are shown to be equivalent through a generalisation of the Feynman-Hellmann theorem for the fermion mass and the gauge coupling.
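The consistency claim between the two scaling laws quoted above follows in one line from substituting the hadronic mass scaling into the charge-radius result:

```latex
\langle r_H^2 \rangle \;\sim\; m^{-2/(1+\gamma_m^*)}
  \;=\; \left( m^{1/(1+\gamma_m^*)} \right)^{-2}
  \;\sim\; \frac{1}{M_H^2},
\qquad \text{using}\quad M_H \sim m^{1/(1+\gamma_m^*)} .
```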
- We show that the logarithmic derivatives with respect to the gauge coupling of the hadronic mass and of the cosmological constant term of a gauge theory are related to the gluon condensate of the hadron and of the vacuum respectively. These relations are akin to Feynman-Hellmann relations, whose derivation for the case at hand is complicated by the construction of the gauge theory Hamiltonian. We bypass this problem by using a renormalisation group equation for composite operators and the trace anomaly. The relations serve as possible definitions of the gluon condensates themselves, which are plagued in direct approaches by power divergences. In turn these results might help to determine the contribution of the QCD phase transition to the cosmological constant and test speculative ideas.
- We present the results of a systematic, first-principles study of the spectrum and decay constants of mesons for different numbers of color charges N, via lattice computations. We restrict our attention to states in the non-zero isospin sector, evaluating the masses associated with the ground-state and first excitation in the pseudoscalar, vector, scalar, and axial vector channels. Our results are based on a new set of simulations of four dimensional SU(N) Yang-Mills theories with the number of colors ranging from N=2 to N=17; the spectra and the decay constants are computed in the quenched approximation (which becomes exact in the 't Hooft limit) using Wilson fermions. After discussing the extrapolations to the chiral and large-N limits, we present a comparison of our results to some of the numerical computations and analytical predictions available in the literature - including, in particular, those from holographic computations.
- Mar 07 2013 hep-ph arXiv:1303.1189v2We study several sources of theoretical uncertainty in the determination of parton distributions (PDFs) which may affect current PDF sets used for precision physics at the Large Hadron Collider, and explain discrepancies between them. We consider in particular the use of fixed-flavor versus variable-flavor number renormalization schemes, higher twist corrections, and nuclear corrections. We perform our study in the framework of the NNPDF2.3 global PDF determination, by quantifying in each case the impact of different theoretical assumptions on the output PDFs. We also study in each case the implications for benchmark cross sections at the LHC. We find that the impact in a global fit of a fixed-flavor number scheme is substantial, the impact of higher twists is negligible, and the impact of nuclear corrections is moderate and circumscribed.
- We present lattice results on the meson spectrum and decay constants in large-N QCD. The results are obtained in the quenched approximation for N = 2, 3, 4, 5, 6, 7 and 17 and extrapolated to infinite N.
- Recent sum rule determinations of |V_us|, employing flavor-breaking combinations of hadronic tau decay data, are significantly lower than either expectations based on 3-family unitarity or determinations from K_ell3 and Gamma[K_mu2]/Gamma[pi_mu2]. We use lattice data to investigate the accuracy/reliability of the OPE representation of the flavor-breaking correlator combination entering the tau decay analyses. The behavior of an alternate correlator combination, constructed to reduce problems associated with the slow convergence of the D = 2 OPE series, and entering an alternate sum rule requiring both electroproduction cross-section and hadronic tau decay data, is also investigated. Preliminary updates of both analyses, with the lessons learned from the lattice data in mind, are also presented.
- We present preliminary results on extractions of the chiral LECs L_10 and C_87 and constraints on the excited pseudoscalar state pi(1300) and pi(1800) decay constants obtained from an analysis of lattice data for the flavor ud light quark V-A correlator. A comparison of the results for the correlator to the corresponding mildly-model-dependent continuum results (based primarily on experimental hadronic tau decay data) is also given.
- We present a detailed comparison of the most recent sets of NNLO PDFs from the ABM, CT, HERAPDF, MSTW and NNPDF collaborations. We compare parton distributions at low and high scales and parton luminosities relevant for LHC phenomenology. We study the PDF dependence of LHC benchmark inclusive cross sections and differential distributions for electroweak boson and jet production in the cases in which the experimental covariance matrix is available. We quantify the agreement between data and theory by computing the chi2 for each data set with all the various PDFs. PDF comparisons are performed consistently for common values of the strong coupling. We also present a benchmark comparison of jet production at the LHC, comparing the results from various available codes and scale settings. Finally, we discuss the implications of the updated NNLO PDF sets for the combined PDF+alphaS uncertainty in the gluon fusion Higgs production cross section.
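The data-theory comparison described above rests on the standard covariance-matrix definition of the chi^2, evaluated per data point for each PDF set. A minimal sketch (the numbers below are invented toy values, not from any of the benchmarked sets):

```python
import numpy as np

def chi2(data, theory, cov):
    """Standard chi^2 with a full experimental covariance matrix."""
    r = data - theory
    return r @ np.linalg.solve(cov, r)

# Toy numbers, for illustration only.
data = np.array([1.02, 0.98, 1.05])
theory = np.array([1.00, 1.00, 1.00])
cov = np.diag([0.02, 0.02, 0.03]) ** 2
print(chi2(data, theory, cov) / len(data))  # chi^2 per data point
```

With correlated systematics the off-diagonal entries of `cov` are non-zero, which is why the abstract restricts the exercise to datasets for which the experimental covariance matrix is available.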
- The phase diagram of five-dimensional SU(2) gauge theories with one compactified dimension on anisotropic lattices has a rich structure. In this contribution we show how to control non-perturbatively the scale hierarchy between the cut-off and the compactification scale in the bare parameter space. There exists a set of strong bare couplings where the five-dimensional lattice theory can be described by an effective four-dimensional theory with a scalar field in the adjoint representation. We present a detailed study of the light scalar spectrum as it arises from the non-perturbative dynamics of the full five-dimensional lattice theory. We also investigate the mixing with scalar glueball states in the attempt to further establish the extra-dimensional nature of light scalar states.
- Jul 06 2012 hep-ph arXiv:1207.1303v2: We present the first determination of parton distributions of the nucleon at NLO and NNLO based on a global data set which includes LHC data: NNPDF2.3. Our data set includes, besides the deep inelastic, Drell-Yan, gauge boson production and jet data already used in previous global PDF determinations, all the relevant LHC data for which experimental systematic uncertainties are currently available: ATLAS and LHCb W and Z lepton rapidity distributions from the 2010 run, CMS W electron asymmetry data from the 2011 run, and ATLAS inclusive jet cross-sections from the 2010 run. We introduce an improved implementation of the FastKernel method which allows us to fit to this extended data set, and also to adopt a more effective minimization methodology. We present the NNPDF2.3 PDF sets, and compare them to the NNPDF2.1 sets to assess the impact of the LHC data. We find that all the LHC data are broadly consistent with each other and with all the older data sets included in the fit. We present predictions for various standard candle cross-sections, and compare them to those obtained previously using NNPDF2.1, and specifically discuss the impact of ATLAS electroweak data on the determination of the strangeness fraction of the proton. We also present collider PDF sets, constructed using only data from HERA, Tevatron and LHC, but find that this data set is neither precise nor complete enough for a competitive PDF determination.
- The low-energy dynamics of five-dimensional Yang-Mills theories compactified on S^1 can be described by a four-dimensional gauge theory coupled to a scalar field in the adjoint representation of the gauge group. Perturbative calculations suggest that the mass of this elementary scalar field is protected against power divergences, and is controlled by the size of the extra dimension R. As a first step in the study of this phenomenon beyond perturbation theory, we investigate the phase diagram of a SU(2) Yang-Mills theory in five dimensions regularized on anisotropic lattices and we determine the ratios of the relevant physical scales. The lattice system shows a dimensionally reduced phase where the four-dimensional correlation length is much larger than the size of the extra dimension, but still smaller than the four-dimensional volume. In this region of the bare parameter space, at energies below 1/R, the non-perturbative spectrum contains a light scalar state. This state has a mass that is independent of the cut-off, and a small overlap with glueball operators. Our results suggest that light scalar fields can be introduced in a lattice theory using compactified extra dimensions, rather than fine tuning the bare mass parameter.
- Oct 12 2011 hep-ph arXiv:1110.2483v2: We determine the strong coupling alpha_s at NNLO in perturbative QCD using the global dataset input to the NNPDF2.1 NNLO parton fit: data from neutral and charged current deep-inelastic scattering, Drell-Yan, vector boson production and inclusive jets. We find alpha_s(M_Z)=0.1173+- 0.0007 (stat), where the statistical uncertainty comes from the underlying data and uncertainties due to the analysis procedure are negligible. We show that the distribution of alpha_s values preferred by different experiments in the global fit is statistically consistent, without need for rescaling uncertainties by a "tolerance" factor. We show that if deep-inelastic data only are used, the best-fit value of alpha_s is somewhat lower, but consistent within one sigma with the global determination. We estimate the dominant theoretical uncertainty, from higher order corrections, to be Delta alpha_s (pert) ~ 0.0009.
- Oct 11 2011 hep-ph physics.data-an arXiv:1110.1863v1: We discuss the statistical properties of parton distributions within the framework of the NNPDF methodology. We present various tests of statistical consistency, in particular that the distribution of results does not depend on the underlying parametrization and that it behaves according to Bayes' theorem upon the addition of new data. We then study the dependence of results on consistent or inconsistent datasets and present tools to assess the consistency of new data. Finally we estimate the relative size of the PDF uncertainty due to data uncertainties, and that due to the need to infer a functional form from a finite set of data.
- We present a Monte Carlo Renormalisation Group (MCRG) study of the SU(2) gauge theory with two Dirac fermions in the adjoint representation. Using the two-lattice matching technique we measure the running of the coupling and the anomalous mass dimension. We find slow running of the coupling, compatible with an infrared fixed point. Assuming this running is negligible we find a vanishing anomalous dimension, gamma=-0.03(13), however without this assumption our uncertainty in the running of the coupling leads to a much larger range of allowed values, -0.6 < gamma < 0.6. We discuss the systematic errors affecting the current analysis and possible improvements.
- We present a Monte Carlo renormalisation group study of the SU(2) gauge theory with two Dirac fermions in the adjoint representation. Using the two-lattice matching technique we measure the running of the coupling and the anomalous mass dimension. We find slow running of the coupling, compatible with an infrared fixed point. Assuming this running is negligible we find a vanishing anomalous dimension, gamma=-0.03(13), however taking this source of systematic error into account gives a much larger range of allowed values, -0.6 < gamma < 0.6. We also attempt to measure the anomalous mass dimension using the stability matrix method. We discuss the systematic errors affecting the current analysis and possible improvements.
- We develop in more detail our reweighting method for incorporating new datasets in parton fits based on a Monte Carlo representation of PDFs. After revisiting the derivation of the reweighting formula, we show how to construct an unweighted PDF replica set which is statistically equivalent to a given reweighted set. We then use reweighting followed by unweighting to test the consistency of the method, specifically by verifying that results do not depend on the order in which new data are included in the fit via reweighting. We apply the reweighting method to study the impact of LHC W lepton asymmetry data on the NNPDF2.1 set. We show how these data reduce the PDF uncertainties of light quarks in the medium and small x region, providing the first solid constraints on PDFs from LHC data.
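The unweighted-set construction mentioned above can be illustrated with a deterministic resampling sketch: each replica is repeated in proportion to its weight, so that the resulting equal-weight ensemble reproduces the statistics of the weighted one. The function below is a toy illustration of the idea, not the NNPDF code:

```python
import numpy as np

def unweight(weights, n_out):
    """Deterministic resampling sketch: replica k appears in the output
    set with multiplicity proportional to its weight."""
    p = np.cumsum(weights) / np.sum(weights)
    # For each of n_out evenly spaced quantiles, pick the first replica
    # whose cumulative probability exceeds that quantile.
    quantiles = (np.arange(n_out) + 0.5) / n_out
    return np.searchsorted(p, quantiles)

weights = np.array([3.0, 1.0, 0.0, 4.0])   # toy weights
print(unweight(weights, 8))                # indices of resampled replicas
```

Here replica 0 (weight 3) is kept three times, replica 1 once, replica 2 (weight 0) is dropped, and replica 3 (weight 4) is kept four times, so the unweighted set of 8 equal-weight replicas is statistically equivalent to the weighted set of 4.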
- Jul 14 2011 hep-ph arXiv:1107.2652v4: We present a determination of the parton distributions of the nucleon from a global set of hard scattering data using the NNPDF methodology at LO and NNLO in perturbative QCD, thereby generalizing to these orders the NNPDF2.1 NLO parton set. Heavy quark masses are included using the so-called FONLL method, which is benchmarked here at NNLO. We demonstrate the stability of PDFs upon inclusion of NNLO corrections, and we investigate the convergence of the perturbative expansion by comparing LO, NLO and NNLO results. We show that the momentum sum rule can be tested with increasing accuracy at LO, NLO and NNLO. We discuss the impact of NNLO corrections on collider phenomenology, specifically by comparing to recent LHC data. We present PDF determinations using a range of values of alpha_s, m_c and m_b. We also present PDF determinations based on various subsets of the global dataset, show that they generally lead to less accurate phenomenology, and discuss the possibility of future PDF determinations based on collider data only.
- We provide a pedagogical introduction to extensions of the Standard Model in which the Higgs is composite. These extensions are known as models of dynamical electroweak symmetry breaking or, in brief, Technicolor. Material covered includes: motivations for Technicolor, the construction of underlying gauge theories leading to minimal models of Technicolor, the comparison with electroweak precision data, the low energy effective theory, the spectrum of the states common to most of the Technicolor models, the decays of the composite particles and the experimental signals at the Large Hadron Collider. The level of the presentation is aimed at readers familiar with the Standard Model but who have little or no prior exposure to Technicolor. Several extensions of the Standard Model featuring a composite Higgs can be reduced to the effective Lagrangian introduced in the text. We establish the relevant experimental benchmarks for Vanilla, Running, Walking, and Custodial Technicolor, and a natural fourth family of leptons, by laying out the framework to discover these models at the Large Hadron Collider.
- Mar 15 2011 hep-ph arXiv:1103.2369v2: We determine the strong coupling alpha_s from a next-to-leading order analysis of processes used for the NNPDF2.1 parton determination, which includes data from neutral and charged current deep-inelastic scattering, Drell-Yan and inclusive jet production. We find alpha_s(M_Z)=0.1191+-0.0006 (exp), where the uncertainty includes all statistical and systematic experimental uncertainties, but not purely theoretical uncertainties. We study the dependence of the results on the dataset, by providing further determinations based respectively on deep-inelastic data only, and on HERA data only. The deep-inelastic fit gives the consistent result alpha_s(M_Z)=0.1177+-0.0009(exp), but the result of the HERA-only fit is only marginally consistent. We provide evidence that individual data subsets can have runaway directions due to poorly determined PDFs, thus suggesting that a global dataset is necessary for a reliable determination.
- We discuss the impact of the treatment of NMC structure function data on parton distributions in the context of the NNPDF2.1 global PDF determination at NLO and NNLO. We show that the way these data are treated, and even their complete removal, has no effect on parton distributions at NLO, and at NNLO has an effect which is below one sigma. In particular, the Higgs production cross-section in the gluon fusion channel is very stable.
- We present a determination of the parton distributions of the nucleon from a global set of hard scattering data using the NNPDF methodology including heavy quark mass effects: NNPDF2.1. In comparison to the previous NNPDF2.0 parton determination, the dataset is enlarged to include deep-inelastic charm structure function data. We implement the FONLL-A general-mass scheme in the FastKernel framework and assess its accuracy by comparison to the Les Houches heavy quark benchmarks. We discuss the impact on parton distributions of the treatment of the heavy quark masses, and we provide a determination of the uncertainty in the parton distributions due to uncertainty in the masses. We assess the impact of these uncertainties on LHC observables by providing parton sets with different values of the charm and bottom quark masses. Finally, we construct and discuss parton sets with a fixed number of flavours.
- This document is intended as a study of benchmark cross sections at the LHC (at 7 TeV) at NLO using modern parton distribution functions currently available from the 6 PDF fitting groups that have participated in this exercise. It also contains a succinct user guide to the computation of PDFs, uncertainties and correlations using available PDF sets. A companion note, also submitted to the archive, provides an interim summary of the current recommendations of the PDF4LHC working group for the use of parton distribution functions and of PDF uncertainties at the LHC, for cross section and cross section uncertainty calculations.
- We present a method for incorporating the information contained in new datasets into an existing set of parton distribution functions without the need for refitting. The method involves reweighting the ensemble of parton densities through the computation of the chi-square to the new dataset. We explain how reweighting may be used to assess the impact of any new data or pseudodata on parton densities and thus on their predictions. We show that the method works by considering the addition of inclusive jet data to a DIS+DY fit, and comparing to the refitted distribution. We then use reweighting to determine the impact of recent high statistics lepton asymmetry data from the D0 experiment on the NNPDF2.0 parton set. We find that the D0 inclusive muon and electron data are perfectly compatible with the rest of the data included in the NNPDF2.0 analysis and impose additional constraints on the large-x d/u ratio. The more exclusive D0 electron datasets are however inconsistent both with the other datasets and among themselves, suggesting that here the experimental uncertainties have been underestimated.
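As an illustration of the reweighting step described in this abstract, here is a minimal Python sketch, assuming the chi-square-based weight formula of the NNPDF reweighting approach, w_k ∝ (χ²_k)^{(n-1)/2} exp(-χ²_k/2) for n new data points; the function names and the entropy-based effective-replica estimator below are illustrative, not the collaboration's actual code.

```python
import numpy as np

def reweight(chi2, n_data):
    """Compute replica weights from the chi^2 of each replica to the new dataset.

    Weights follow w_k ∝ (chi2_k)^{(n-1)/2} exp(-chi2_k/2),
    normalised so that they sum to the number of replicas N.
    """
    chi2 = np.asarray(chi2, dtype=float)
    # work with logarithms to avoid overflow for large chi2 values
    logw = 0.5 * (n_data - 1) * np.log(chi2) - 0.5 * chi2
    logw -= logw.max()
    w = np.exp(logw)
    return len(w) * w / w.sum()

def n_effective(w):
    """Shannon-entropy estimate of the effective number of replicas left."""
    w = np.asarray(w, dtype=float)
    N = len(w)
    mask = w > 0
    return np.exp(np.sum(w[mask] * np.log(N / w[mask])) / N)
```

Replicas that describe the new data well keep weights of order one, while poorly fitting replicas are exponentially suppressed; a small effective number of replicas signals that the new data carry significant information (or are inconsistent) and a refit is warranted.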
- We present a Monte Carlo renormalisation group study of the SU(2) gauge theory with two Dirac fermions in the adjoint representation. Using the two lattice matching technique recently advocated and exploited in [arXiv:0907.0919], we measure the running of the coupling and the anomalous mass dimension.
- We simulate SU(2) gauge theory with six massless fundamental Dirac fermions. By using the Schrödinger Functional method we measure the running of the coupling and the fermion mass over a wide range of length scales. We observe very slow running of the coupling and construct an estimator for the fermion mass anomalous dimension giving $0.135 < \gamma < 1.03$ in the region compatible with an IR fixed point.
- We consider mass-deformed conformal gauge theories (mCGT) and investigate the scaling behaviour of hadronic observables as a function of the fermion mass. Applying renormalization group arguments directly to matrix elements, we find m_H ~ m^(1/(1+gamma*)) for the hadron masses and F ~ m^(eta_F(gamma*)) for the decay constants, thereby generalizing our results from a previous paper to the entire spectrum. We derive the scaling law m_H ~ m^(1/(1+gamma*)) using the Hellmann-Feynman theorem, and thus provide a derivation which does not rely on renormalization group arguments. Using the new results we comment on the phenomenologically important S-parameter. Finally, we discuss how spectral representations can be used to relate the mass and decay constant trajectories.
- We simulate SU(2) gauge theory with six massless fundamental Dirac fermions. We measure the running of the coupling and the mass in the Schroedinger Functional scheme. We observe very slow running of the coupling constant. We measure the mass anomalous dimension gamma, and find it is between 0.135 and 1.03 in the range of couplings consistent with the existence of an IR fixed point.
- We discuss the implementation of the FONLL general-mass scheme for heavy quarks in deep-inelastic scattering in the FastKernel framework, used in the NNPDF series of global PDF analyses. We present the general features of FONLL and benchmark the accuracy of its implementation in FastKernel by comparison with the Les Houches heavy quark benchmark tables. We then show preliminary results of the NNPDF2.1 analysis, in which heavy quark mass effects are included following the FONLL-A GM scheme.
- We review recent progress towards a determination of a set of polarized parton distributions from a global set of deep-inelastic scattering data based on the NNPDF methodology, in analogy with the unpolarized case. This method is designed to provide a faithful and statistically sound representation of parton distributions and their uncertainties. We show how the FastKernel method provides a fast and accurate method for solving the polarized DGLAP equations. We discuss the polarized PDF parametrizations and the physical constraints which can be imposed. Preliminary results suggest that the uncertainty on polarized PDFs, most notably the gluon, has been underestimated in previous studies.
- We present a number of analytical results which should guide the interpretation of lattice data in theories with an infra-red fixed point (IRFP) deformed by a mass term deltaL = - m \bar qq. From renormalization group (RG) arguments we obtain the leading scaling exponent, F ~ m^(eta_F), for all decay constants of the lowest lying states other than the ones affected by the chiral anomaly and the tensor ones. These scaling relations provide a clear cut way to distinguish a theory with an IRFP from a confining theory with heavy fermions. Moreover, we present a derivation relating the scaling of <\bar qq> ~ m^(eta_qq) to the scaling of the density of eigenvalues of the massless Dirac operator, rho(lambda) ~ lambda^(eta_qq). RG arguments yield eta_qq = (3-gamma*)/(1+gamma*) as a function of the mass anomalous dimension gamma* at the IRFP. The arguments can be generalized to other condensates such as <G^2> ~ m^(4/(1+gamma*)). We describe a heuristic derivation of the result on the condensates, which provides interesting connections between different approaches. Our results are compared with existing data from numerical studies of SU(2) with two adjoint Dirac fermions.
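The scaling relations quoted in this abstract can be collected in a single display (a transcription into LaTeX of the formulas above; gamma_* denotes the mass anomalous dimension at the IRFP):

```latex
% Leading-order scaling with the fermion mass m near an IRFP
\begin{align}
  M_H &\sim m^{1/(1+\gamma_*)}, \qquad
  F \sim m^{\eta_F(\gamma_*)}, \\
  \langle \bar{q}q \rangle &\sim m^{\eta_{\bar{q}q}}, \qquad
  \rho(\lambda) \sim \lambda^{\eta_{\bar{q}q}}, \qquad
  \eta_{\bar{q}q} = \frac{3-\gamma_*}{1+\gamma_*}, \\
  \langle G^2 \rangle &\sim m^{4/(1+\gamma_*)}.
\end{align}
```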
- May 04 2010 hep-ph arXiv:1005.0397v1. We present predictions for relevant LHC observables obtained with the NNPDF2.0 set. We compute the combined PDF uncertainties on these observables, and show that combining errors in quadrature yields an excellent approximation to exact error propagation. We then compare the NNPDF2.0 results to the other global PDF fits using a common value of $\alpha_s$. At LHC 7 TeV, reasonable agreement, both in central values and in uncertainties, is found for NNPDF2.0, CTEQ6.6 and MSTW08.
- We study the gauge sector of Minimal Walking Technicolor, which is an SU(2) gauge theory with nf=2 flavors of Wilson fermions in the adjoint representation. Numerical simulations are performed on lattices Nt x Ns^3, with Ns ranging from 8 to 16 and Nt=2Ns, at fixed \beta=2.25, and varying the fermion bare mass m0, so that our numerical results cover the full range of fermion masses from the quenched region to the chiral limit. We present results for the string tension and the glueball spectrum. A comparison of mesonic and gluonic observables leads to the conclusion that the infrared dynamics is given by an SU(2) pure Yang-Mills theory with a typical energy scale for the spectrum sliding to zero with the fermion mass. The typical mesonic mass scale is proportional to, and much larger than this gluonic scale. Our findings are compatible with a scenario in which the massless theory is conformal in the infrared. An analysis of the scaling of the string tension with the fermion mass towards the massless limit allows us to extract the chiral condensate anomalous dimension \gamma*, which is found to be \gamma*=0.22+-0.06.
- We investigate the structure and the novel emerging features of the mesonic non-singlet spectrum of the Minimal Walking Technicolor (MWT) theory. Precision measurements in the non-singlet pseudoscalar and vector channels are compared to the expectations for an IR-conformal field theory and a QCD-like theory. Our results favor a scenario in which MWT is (almost) conformal in the infrared, while spontaneous chiral symmetry breaking seems less plausible.
- Mar 08 2010 hep-ph arXiv:1003.1241v1. This report summarizes the activities of the SM and NLO Multileg Working Group of the Workshop "Physics at TeV Colliders", Les Houches, France, 8-26 June 2009.
- We present a determination of the parton distributions of the nucleon from a global set of hard scattering data using the NNPDF methodology: NNPDF2.0. Experimental data include deep-inelastic scattering with the combined HERA-I dataset, fixed target Drell-Yan production, collider weak boson production and inclusive jet production. Next-to-leading order QCD is used throughout without resorting to K-factors. We present and utilize an improved fast algorithm for the solution of evolution equations and the computation of general hadronic processes. We introduce improved techniques for the training of the neural networks which are used as parton parametrization, and we use a novel approach for the proper treatment of normalization uncertainties. We assess quantitatively the impact of individual datasets on PDFs. We find very good consistency of all datasets with each other and with NLO QCD, with no evidence of tension between datasets. Some PDF combinations relevant for LHC observables turn out to be determined rather more accurately than in any other parton fit.
- We consider the generic problem of performing a global fit to many independent data sets each with a different overall multiplicative normalization uncertainty. We show that the methods in common use to treat multiplicative uncertainties lead to systematic biases. We develop a method which is unbiased, based on a self-consistent iterative procedure. We demonstrate the use of this method by applying it to the determination of parton distribution functions with the NNPDF methodology, which uses a Monte Carlo method for uncertainty estimation.
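A toy numerical demonstration of the bias discussed in this abstract: when a multiplicative normalization uncertainty is built into the covariance matrix from the *measured* values, a least-squares fit is pulled systematically low. The numbers below are illustrative, not taken from the paper.

```python
import numpy as np

# Two measurements of the same quantity with 2% point-to-point errors
# and a common 10% normalisation uncertainty (illustrative numbers).
x = np.array([8.0, 8.5])
stat = 0.02 * x
f_norm = 0.10

# Naive treatment: normalisation contribution built from the measured values.
C = np.diag(stat**2) + f_norm**2 * np.outer(x, x)

# Best fit of a constant t: minimise (x - t)^T C^{-1} (x - t).
Cinv = np.linalg.inv(C)
t_hat = Cinv.sum(axis=0) @ x / Cinv.sum()

# The fit lands below *both* measurements: a systematic bias.
print(t_hat)  # < 8.0
```

An unbiased iterative procedure instead rebuilds the normalization contribution to the covariance from the previous iteration's fitted prediction rather than from the data, iterating to self-consistency.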
- We study SU(2) lattice gauge theory with two flavours of Dirac fermions in the adjoint representation. We measure the running of the coupling in the Schroedinger Functional (SF) scheme and find it is consistent with existing results. We discuss how systematic errors affect the evidence for an infrared fixed point (IRFP). We present the first measurement of the running of the mass in the SF scheme. The anomalous dimension of the chiral condensate, which is relevant for phenomenological applications, can be easily extracted from the running of the mass, under the assumption that the theory has an IRFP. At the current level of accuracy, we can estimate 0.05 < gamma < 0.56 at the IRFP.
- We simulate SU(2) gauge theory with two massless Dirac fermions in the adjoint representation. We calculate the running of the Schroedinger Functional coupling and the renormalised quark mass over a wide range of length scales. The running of the coupling is consistent with the existence of an infrared fixed point (IRFP), and we find 0.07 < gamma < 0.56 at the IRFP, depending on the value of the critical coupling.
- Jul 16 2009 hep-ph arXiv:0907.2506v1. Determinations of structure functions and parton distribution functions have been recently obtained using Monte Carlo methods and neural networks as universal, unbiased interpolants for the unknown functional dependence. In this work the same methods are applied to obtain a parametrization of polarized Deep Inelastic Scattering (DIS) structure functions. The Monte Carlo approach provides a bias-free determination of the probability measure in the space of structure functions, while retaining all the information on experimental errors and correlations. In particular the error on the data is propagated into an error on the structure functions that has a clear statistical meaning. We present the application of this method to the parametrization from polarized DIS data of the photon asymmetries $A_1^p$ and $A_1^d$, from which we determine the structure functions $g_1^p(x,Q^2)$ and $g_1^d(x,Q^2)$, and discuss the possibility to extract physical parameters from these parametrizations. This work can be used as a starting point for the determination of polarized parton distributions.
- Jun 11 2009 hep-ph arXiv:0906.1958v2. We use recent neutrino dimuon production data combined with a global deep-inelastic parton fit to construct a new parton set, NNPDF1.2, which includes a determination of the strange and antistrange distributions of the nucleon. The result is characterized by a faithful estimation of uncertainties thanks to the use of the NNPDF methodology, and is free of model or theoretical assumptions other than the use of NLO perturbative QCD and exact sum rules. Better control of the uncertainties of the strange and antistrange parton distributions allows us to reassess the determination of electroweak parameters from the NuTeV dimuon data. We perform a direct determination of the |V_cd| and |V_cs| CKM matrix elements, obtaining central values in agreement with the current global CKM fit: specifically we find |V_cd|=0.244\pm 0.019 and |V_cs|=0.96\pm 0.07. Our result for |V_cs| is more precise than any previous direct determination. We also reassess the uncertainty on the NuTeV determination of \sin^2\theta_W through the Paschos-Wolfenstein relation: we find that the very large uncertainties in the strange valence momentum fraction are sufficient to bring the NuTeV result into complete agreement with the results from precision electroweak data.
- Mar 24 2009 hep-ph arXiv:0903.3861v2. 2nd workshop on the implications of HERA for LHC physics. Working groups: Parton Density Functions; Multi-jet final states and energy flows; Heavy quarks (charm and beauty); Diffraction; Cosmic Rays; Monte Carlos and Tools.
- Jan 19 2009 hep-ph arXiv:0901.2504v2. We provide an assessment of the state of the art in various issues related to experimental measurements, phenomenological methods and theoretical results relevant for the determination of parton distribution functions (PDFs) and their uncertainties, with the specific aim of providing benchmarks of different existing approaches and results in view of their application to physics at the LHC. We discuss higher order corrections, we review and compare different approaches to small x resummation, and we assess the possible relevance of parton saturation in the determination of PDFs at HERA and its possible study in LHC processes. We provide various benchmarks of PDF fits, with the specific aim of studying issues of error propagation, non-gaussian uncertainties, choice of functional forms of PDFs, and combination of data from different experiments and different processes. We study the combined HERA (ZEUS-H1) structure function data, their impact on PDF uncertainties, and their implications for the computation of standard candle processes, and we review the recent F_L determination at HERA. Finally, we compare and assess methods for luminosity measurements at the LHC and the impact of PDFs on them.
- Nov 17 2008 hep-ph arXiv:0811.2288v1. We present recent progress within the NNPDF parton analysis framework. After a brief review of the results from the DIS NNPDF analysis, NNPDF1.0, we discuss results from an updated analysis with independent parametrizations for the strange and anti-strange distributions, denoted by NNPDF1.1. We examine the phenomenological implications of this improved analysis for the strange PDFs.
- Aug 11 2008 hep-ph arXiv:0808.1231v4. We present the determination of a set of parton distributions of the nucleon, at next-to-leading order, from a global set of deep-inelastic scattering data: NNPDF1.0. The determination is based on a Monte Carlo approach, with neural networks used as unbiased interpolants. This method, previously discussed by us and applied to a determination of the nonsinglet quark distribution, is designed to provide a faithful and statistically sound representation of the uncertainty on parton distributions. We discuss our dataset, its statistical features, and its Monte Carlo representation. We summarize the technique used to solve the evolution equations and its benchmarking, and the method used to compute physical observables. We discuss the parametrization and fitting of neural networks, and the algorithm used to determine the optimal fit. We finally present our set of parton distributions. We discuss its statistical properties, test for its stability upon various modifications of the fitting procedure, and compare it to other recent parton sets. We use it to compute the benchmark W and Z cross sections at the LHC. We discuss issues of delivery and interfacing to commonly used packages such as LHAPDF.
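The Monte Carlo approach summarized in this abstract can be caricatured in a few lines of Python: replicas of the data are generated according to the experimental covariance, each replica is fitted, and the spread of the fits provides the uncertainty. A straight-line fit stands in for the neural networks and all data are synthetic; this is a sketch of the general idea, not the NNPDF code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "data": y(x) with correlated Gaussian uncertainties.
x = np.linspace(0.1, 0.9, 20)
y_true = 1.0 + 2.0 * x
cov = 0.01 * np.exp(-np.subtract.outer(x, x)**2 / 0.1)  # toy covariance
y_data = rng.multivariate_normal(y_true, cov)

# Step 1: Monte Carlo replicas of the data, sampled from the covariance.
n_rep = 200
replicas = rng.multivariate_normal(y_data, cov, size=n_rep)

# Step 2: fit each replica (a straight line stands in for the neural net).
fits = np.array([np.polyfit(x, rep, deg=1) for rep in replicas])

# Step 3: central value and uncertainty = mean and spread over the replicas.
central = fits.mean(axis=0)   # [slope, intercept]
sigma = fits.std(axis=0)
```

The error on any derived observable is obtained the same way, by evaluating it on each replica fit and taking the spread, which propagates the full probability distribution rather than a linearized error.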
- Jul 01 2008 hep-ph arXiv:0806.4918v1. We present a fit of the virtual-photon scattering asymmetry of polarized Deep Inelastic Scattering which combines a Monte Carlo technique with the use of a redundant parametrization based on Neural Networks. We apply the result to the analysis of CLAS data on a polarized proton target.
- May 21 2008 hep-ph arXiv:0805.3100v1. We present recent results of the NNPDF collaboration on a full DIS analysis of Parton Distribution Functions (PDFs). Our method is based on the idea of combining a Monte Carlo sampling of the probability measure in the space of PDFs with the use of neural networks as unbiased universal interpolating functions. The general structure of the project and the features of the fit are described and compared to those of the traditional approaches.
- We compute the masses of the $\pi$ and of the $\rho$ mesons in the quenched approximation on a lattice with fixed lattice spacing $a \simeq 0.145 \ \mathrm{fm}$ for SU($N$) gauge theory with $N = 2,3,4,6$. We find that a simple linear expression in $1/N^2$ correctly captures the features of the lowest-lying meson states at those values of $N$. This enables us to extrapolate to $N = \infty$ the behaviour of $m_{\pi}$ as a function of the quark mass and of $m_{\rho}$ as a function of $m_{\pi}$. Our results for the latter agree within 5% with recent predictions obtained in the AdS/CFT framework.
- Jun 15 2007 hep-ph arXiv:0706.2130v1. We give a status report on the determination of a set of parton distributions based on neural networks. In particular, we summarize the determination of the nonsinglet quark distribution up to NNLO, we compare it with results obtained using other approaches, and we discuss its use for a determination of $\alpha_s$.
- Jan 17 2007 hep-ph arXiv:hep-ph/0701127v1. We provide a determination of the isotriplet quark distribution from available deep-inelastic data using neural networks. We give a general introduction to the neural network approach to parton distributions, which provides a solution to the problem of constructing a faithful and unbiased probability distribution of parton densities based on available experimental information. We discuss in detail the techniques which are necessary in order to construct a Monte Carlo representation of the data, to construct and evolve neural parton distributions, and to train them in such a way that the correct statistical features of the data are reproduced. We present the results of the application of this method to the determination of the nonsinglet quark distribution up to next-to-next-to-leading order, and compare them with those obtained using other approaches.
- Recent conceptual, algorithmic and technical advances allow numerical simulations of lattice QCD with Wilson quarks to be performed at significantly smaller quark masses than was possible before. Here we report on simulations of two-flavour QCD at sea-quark masses from slightly above to approximately 1/4 of the strange-quark mass, on lattices with up to 64x32^3 points and spacings from 0.05 to 0.08 fm. Physical sea-quark effects are clearly seen on these lattices, while the lattice effects appear to be quite small, even without O(a) improvement. A striking result is that the dependence of the pion mass on the sea-quark mass is accurately described by leading-order chiral perturbation theory up to meson masses of about 500 MeV.
- Jan 03 2006 hep-ph arXiv:hep-ph/0601013v3. The HERA electron-proton collider has collected 100 pb$^{-1}$ of data since its start-up in 1992, and recently moved into a high-luminosity operation mode, with upgraded detectors, aiming to increase the total integrated luminosity per experiment to more than 500 pb$^{-1}$. HERA has been a machine of excellence for the study of QCD and the structure of the proton. The Large Hadron Collider (LHC), which will collide protons with a centre-of-mass energy of 14 TeV, will be completed at CERN in 2007. The main mission of the LHC is to discover and study the mechanisms of electroweak symmetry breaking, possibly via the discovery of the Higgs particle, and search for new physics in the TeV energy scale, such as supersymmetry or extra dimensions. Besides these goals, the LHC will also make a substantial number of precision measurements and will offer a new regime to study the strong force via perturbative QCD processes and diffraction. For the full LHC physics programme a good understanding of QCD phenomena and the structure function of the proton is essential. Therefore, in March 2004, a one-year-long workshop started to study the implications of HERA on LHC physics. This included proposing new measurements to be made at HERA, extracting the maximum information from the available data, and developing/improving the theoretical and experimental tools. This report summarizes the results achieved during this workshop.
- Jan 03 2006 hep-ph arXiv:hep-ph/0601012v3. The HERA electron-proton collider has collected 100 pb$^{-1}$ of data since its start-up in 1992, and recently moved into a high-luminosity operation mode, with upgraded detectors, aiming to increase the total integrated luminosity per experiment to more than 500 pb$^{-1}$. HERA has been a machine of excellence for the study of QCD and the structure of the proton. The Large Hadron Collider (LHC), which will collide protons with a centre-of-mass energy of 14 TeV, will be completed at CERN in 2007. The main mission of the LHC is to discover and study the mechanisms of electroweak symmetry breaking, possibly via the discovery of the Higgs particle, and search for new physics in the TeV energy scale, such as supersymmetry or extra dimensions. Besides these goals, the LHC will also make a substantial number of precision measurements and will offer a new regime to study the strong force via perturbative QCD processes and diffraction. For the full LHC physics programme a good understanding of QCD phenomena and the structure function of the proton is essential. Therefore, in March 2004, a one-year-long workshop started to study the implications of HERA on LHC physics. This included proposing new measurements to be made at HERA, extracting the maximum information from the available data, and developing/improving the theoretical and experimental tools. This report summarizes the results achieved during this workshop.
- Nov 10 2005 hep-ph arXiv:hep-ph/0511119v1. We provide an assessment of the impact of parton distributions on the determination of LHC processes, and of the accuracy with which parton distributions (PDFs) can be extracted from data, in particular from current and forthcoming HERA experiments. We give an overview of reference LHC processes and their associated PDF uncertainties, and study in detail W and Z production at the LHC. We discuss the precision which may be obtained from the analysis of existing HERA data, tests of consistency of HERA data from different experiments, and the combination of these data. We determine further improvements on PDFs which may be obtained from future HERA data (including measurements of $F_L$), and from combining present and future HERA data with present and future hadron collider data. We review the current status of knowledge of higher (NNLO) QCD corrections to perturbative evolution and deep-inelastic scattering, and provide reference results for their impact on parton evolution, and we briefly examine non-perturbative models for parton distributions. We discuss the state of the art in global parton fits, we assess the impact on them of various kinds of data and of theoretical corrections, by providing benchmarks of Alekhin and MRST parton distributions and a CTEQ analysis of parton fit stability, and we briefly present proposals for alternative approaches to parton fitting. We summarize the status of large and small x resummation, by providing estimates of the impact of large x resummation on parton fits, and a comparison of different approaches to small x resummation, for which we also discuss numerical techniques.
- Jan 11 2005 hep-ph arXiv:hep-ph/0501067v2. We construct a parametrization of the deep-inelastic structure function of the proton F_2 based on all available experimental information from charged lepton deep-inelastic scattering experiments. The parametrization effectively provides a bias-free determination of the probability measure in the space of structure functions, which retains information on experimental errors and correlations. The result is obtained in the form of a Monte Carlo sample of neural networks trained on an ensemble of replicas of the experimental data. We discuss in detail the techniques required for the construction of bias-free parametrizations of large amounts of structure function data, in view of future applications to the determination of parton distributions based on the same method.
- We present a model for string breaking based on the existence of chromoelectric flux tubes. We predict the form of the long-range potential, and obtain an estimate of the string breaking length. A prediction is also obtained for the behaviour with temperature of the string breaking length near the deconfinement phase transition. We plan to use this model as a guide for a program of study of string breaking on the lattice.