Estimating the galaxy two-point correlation function using a split random catalog

Permalink

http://hdl.handle.net/10138/310981

Citation

Keihänen, E., Kurki-Suonio, H., Lindholm, V., Viitanen, A., Suur-Uski, A.-S., Allevato, V., Branchini, E., Marulli, F., Norberg, P., Tavagnacco, D., de la Torre, S., Väliviita, J., Viel, M., Bel, J., Frailis, M. & Sánchez, A. G. 2019, 'Estimating the galaxy two-point correlation function using a split random catalog', Astronomy & Astrophysics, vol. 631, A73. https://doi.org/10.1051/0004-6361/201935828

Title: Estimating the galaxy two-point correlation function using a split random catalog
Author: Keihänen, E.; Kurki-Suonio, H.; Lindholm, V.; Viitanen, A.; Suur-Uski, A. -S.; Allevato, V.; Branchini, E.; Marulli, F.; Norberg, P.; Tavagnacco, D.; Torre, S. de la; Väliviita, J.; Viel, M.; Bel, J.; Frailis, M.; Sánchez, A. G.
Contributor: University of Helsinki, Particle Physics and Astrophysics
University of Helsinki, Department of Physics
University of Helsinki, Helsinki Institute of Physics
Date: 2019-10-22
Language: eng
Number of pages: 11
Belongs to series: Astronomy & Astrophysics
ISSN: 0004-6361
URI: http://hdl.handle.net/10138/310981
Abstract: The two-point correlation function of the galaxy distribution is a key cosmological observable that allows us to constrain the dynamical and geometrical state of our Universe. To measure the correlation function we need to know both the galaxy positions and the expected galaxy density field. The expected field is commonly specified using a Monte Carlo sampling of the volume covered by the survey and, to minimize additional sampling errors, this random catalog has to be much larger than the data catalog. Correlation function estimators compare data-data pair counts to data-random and random-random pair counts, where random-random pairs usually dominate the computational cost. Future redshift surveys will deliver spectroscopic catalogs of tens of millions of galaxies. Given the large number of random objects required to guarantee sub-percent accuracy, it is of paramount importance to improve the efficiency of the algorithm without degrading its precision. We show both analytically and numerically that splitting the random catalog into a number of subcatalogs, each the same size as the data catalog, and excluding pairs across different subcatalogs when calculating random-random pairs provides the optimal error at fixed computational cost. For a random catalog fifty times larger than the data catalog, this reduces the computation time by a factor of more than ten without affecting estimator variance or bias.
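The splitting scheme described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the KD-tree pair counter, the function names, and the uniform toy catalog are assumptions made for the example.

```python
import numpy as np
from scipy.spatial import cKDTree  # KD-tree for fast fixed-radius pair counting


def pair_counts(points, r_max):
    """Count distinct pairs within a catalog separated by at most r_max."""
    tree = cKDTree(points)
    # count_neighbors over the tree with itself counts ordered pairs,
    # including the N self-pairs; convert to distinct unordered pairs.
    n_ordered = tree.count_neighbors(tree, r_max)
    return (n_ordered - len(points)) // 2


def rr_split(randoms, n_split, r_max):
    """Split-random RR count: sum the auto-pair counts of n_split
    subcatalogs, skipping all pairs that straddle two subcatalogs."""
    subcatalogs = np.array_split(randoms, n_split)
    return sum(pair_counts(sub, r_max) for sub in subcatalogs)


# Toy example: a uniform random catalog in the unit cube.
rng = np.random.default_rng(42)
randoms = rng.random((300, 3))

rr_full = pair_counts(randoms, r_max=0.1)      # cost ~ (M*Nd)^2 / 2
rr_parts = rr_split(randoms, 3, r_max=0.1)     # cost ~ M * Nd^2 / 2
```

For a random catalog M times the size of the data catalog, counting RR only within M data-sized subcatalogs needs of order M·Nd²/2 distance evaluations instead of (M·Nd)²/2, which is the origin of the order-of-magnitude speed-up quoted in the abstract; the per-subcatalog counts must of course be normalized by the corresponding number of pairs before entering the estimator.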
Subject: astro-ph.CO
115 Astronomy, Space science


Files in this item


Files Size Format
aa35828_19.pdf 380.1Kb PDF
