Please use this identifier to cite or link to this item: http://hdl.handle.net/10261/213477

Observational uncertainty and regional climate model evaluation: A pan-European perspective

Authors: Kotlarski, Sven; Szabó, Péter; Herrera, S.; Räty, Olle; Keuler, Klaus; Soares, Pedro M. M.; Cardoso, Rita M.; Bosshard, Thomas; Pagé, C.; Boberg, Fredrik; Gutiérrez, José M.; Isotta, Francesco A.; Jaczewski, Adam; Kreienkamp, Frank; Liniger, Mark A.; Lussana, Cristian; Pianko-Kluczyńska
Keywords: RCM evaluation
Issue Date: 2019
Publisher: John Wiley & Sons
Citation: International Journal of Climatology 39(9): 3730-3749 (2019)
Abstract: The influence of uncertainties in gridded observational reference data on regional climate model (RCM) evaluation is quantified on a pan-European scale. Three different reference data sets are considered: the coarse-resolution E-OBS data set, a compilation of regional high-resolution gridded products (HR) and the European-scale MESAN reanalysis. Five high-resolution ERA-Interim-driven RCM experiments of the EURO-CORDEX initiative are evaluated against each of these references over eight European sub-regions, considering a range of performance metrics for mean daily temperature and daily precipitation. The spatial scale of the evaluation is 0.22°, that is, the grid spacing of the coarsest data set in the exercise (E-OBS). While the three reference grids agree on the overall mean climatology, differences can be pronounced over individual regions. These differences partly translate into RCM evaluation uncertainty. In most cases observational uncertainty is smaller than RCM uncertainty. Nevertheless, for individual sub-regions and performance metrics observational uncertainty can dominate. This is especially true for precipitation and for metrics targeting the wet-day frequency, the pattern correlation and the distributional similarity. In some cases the spatially averaged mean bias can also be considerably affected. An illustrative ranking exercise highlights the overall effect of observational uncertainty on RCM ranking. Over individual sub-domains, the choice of a specific reference can modify RCM ranks by up to four levels (out of five RCMs). In most cases, however, RCM ranks are stable irrespective of the reference. These results provide a twofold picture: model uncertainty dominates for most regions and performance metrics considered, while observational uncertainty plays a minor role. For individual cases, however, observational uncertainty can be pronounced and definitely needs to be taken into account. Results can, to some extent, also depend on the treatment of precipitation undercatch in the observational reference.
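The evaluation approach summarized above can be illustrated with a minimal sketch: computing the same performance metrics (here, spatially averaged mean bias and pattern correlation) for one model field against several reference grids, so that the spread across references measures observational uncertainty. All field names, grid shapes, and values below are hypothetical placeholders, not the study's data.

```python
# Hypothetical sketch of reference-dependent RCM evaluation.
# Synthetic fields stand in for gridded temperature on a common 0.22-degree grid.
import numpy as np

rng = np.random.default_rng(0)

# A synthetic "true" climatology, one RCM experiment, and three imperfect
# reference grids (names echo the data sets in the abstract; data are fake).
truth = 10 + rng.normal(0, 2, size=(20, 30))
rcm = truth + 0.5 + rng.normal(0, 0.3, size=truth.shape)  # model with +0.5 K bias
references = {
    "E-OBS": truth + rng.normal(0, 0.4, size=truth.shape),
    "HR":    truth + rng.normal(0, 0.2, size=truth.shape),
    "MESAN": truth + rng.normal(0, 0.3, size=truth.shape),
}

def mean_bias(model, ref):
    """Spatially averaged model-minus-reference difference."""
    return float(np.mean(model - ref))

def pattern_corr(model, ref):
    """Pearson correlation between the two spatial patterns."""
    return float(np.corrcoef(model.ravel(), ref.ravel())[0, 1])

# The spread of each metric across references is the observational uncertainty.
for name, ref in references.items():
    print(f"{name}: bias={mean_bias(rcm, ref):+.2f} K, "
          f"r={pattern_corr(rcm, ref):.3f}")
```

In this toy setup the bias estimates cluster around the true +0.5 K offset but differ slightly per reference; ranking several RCMs by such metrics under each reference in turn would reproduce, in miniature, the ranking-stability exercise the abstract describes.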
Publisher version (URL): https://doi.org/10.1002/joc.5249
Identifiers: doi: 10.1002/joc.5249
e-issn: 1097-0088
issn: 0899-8418
Appears in Collections:(IFCA) Artículos
Files in This Item:
File: accesoRestringido.pdf (15,38 kB, Adobe PDF)

WARNING: Items in Digital.CSIC are protected by copyright, with all rights reserved, unless otherwise indicated.