
Enhancing Seasonal Forecast Skills by Optimally Weighting the Ensemble from Fresh Data

Brajard, J., Counillon, F., Wang, Y., Kimmritz, M. 2023: Enhancing Seasonal Forecast Skills by Optimally Weighting the Ensemble from Fresh Data. Weather and Forecasting. https://doi.org/10.1175/WAF-D-22-0166.1

Summary: Dynamical climate predictions are produced by assimilating observations and running ensemble simulations of Earth system models. This process is time consuming, and by the time the forecast is delivered, new observations are already available, making it partly outdated at the release date. Moreover, producing such predictions is computationally demanding, and their production frequency is restricted. We tested the potential of a computationally cheap weighting technique that can continuously adjust such probabilistic forecasts between production intervals using newly available data. The method estimates local positive weights within a Bayesian framework, favoring members that are closer to the observations. We tested the approach with the Norwegian Climate Prediction Model (NorCPM), which assimilates monthly sea surface temperature (SST) and hydrographic profiles with the ensemble Kalman filter (EnKF). By the time the NorCPM forecast is delivered operationally, a week of unused SST data is available. We demonstrate the benefit of our weighting method on retrospective hindcasts. The weighting method greatly enhanced the NorCPM hindcast skill compared to the standard equal-weight approach up to a 2-month lead time (global correlation of 0.71 vs 0.55 at a 1-month lead time and 0.51 vs 0.45 at a 2-month lead time). The skill at a 1-month lead time is comparable to the accuracy of the EnKF analysis. We also show that weights determined using SST data can improve the skill of other quantities, such as the sea ice extent. Our approach can provide a continuous forecast between the intermittent forecast production cycles and can be extended to other independent datasets.
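To make the weighting idea more concrete, the sketch below shows one way Bayesian member weights could be computed from fresh observations. It is a minimal illustration, not the NorCPM implementation: it assumes independent Gaussian observation errors and computes a single set of global weights, whereas the paper estimates spatially local weights. The function name and the example values are hypothetical.

```python
import numpy as np

def bayesian_member_weights(ensemble, obs, obs_var, prior_weights=None):
    """Positive, normalized weights for ensemble members given fresh observations.

    Illustrative sketch: weights are proportional to a Gaussian likelihood of the
    observations given each member, starting from equal (or supplied) prior weights.

    ensemble : array (n_members, n_obs)  member values at the observation points
    obs      : array (n_obs,)            fresh observations (e.g. weekly SST)
    obs_var  : float or array (n_obs,)   observation-error variance
    """
    n_members = ensemble.shape[0]
    if prior_weights is None:
        prior_weights = np.full(n_members, 1.0 / n_members)

    # Log-likelihood of the observations for each member (independent Gaussian errors).
    misfit = ensemble - obs[None, :]
    log_lik = -0.5 * np.sum(misfit**2 / obs_var, axis=1)

    # Bayes: posterior weight is proportional to prior weight times likelihood
    # (handled in log space for numerical stability).
    log_post = np.log(prior_weights) + log_lik
    log_post -= log_post.max()
    weights = np.exp(log_post)
    return weights / weights.sum()      # positive weights summing to one

# Example: weight a 10-member forecast at 50 SST points, then form a weighted mean.
rng = np.random.default_rng(0)
members = rng.normal(size=(10, 50))
observations = rng.normal(size=50)
w = bayesian_member_weights(members, observations, obs_var=0.5)
weighted_forecast = w @ members         # weighted ensemble mean
```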

Link to publication. You are most welcome to contact us or the corresponding author(s) directly, if you have questions.

Super-resolution data assimilation

Barthélémy, S., Brajard, J., Bertino, L., Counillon, F. 2022: Super-resolution data assimilation. Ocean Dynamics. https://doi.org/10.1007/s10236-022-01523-x

Summary: Increasing model resolution can improve the performance of a data assimilation system because it reduces model error, the system can make better use of high-resolution observations, and, with an ensemble data assimilation method, the forecast error covariances are improved. However, increasing the resolution comes with a roughly cubic increase in computational cost. A method that can improve performance more efficiently is introduced here. The novel approach, called “Super-resolution data assimilation” (SRDA), is inspired by super-resolution techniques from image processing and brought to the data assimilation context. Starting from a low-resolution forecast, a neural network (NN) emulates the fields at high resolution, high-resolution observations are assimilated, and the analysis is scaled back to the original resolution to run the next model step. The SRDA is tested with a quasi-geostrophic model in an idealized twin experiment for configurations where the model resolution is two and four times lower than that of the reference solution from which pseudo-observations are extracted. The assimilation is performed with an ensemble Kalman filter. We show that SRDA outperforms both the low-resolution data assimilation approach and a version of SRDA with cubic spline interpolation instead of the NN. The NN’s ability to anticipate the systematic differences between low- and high-resolution model dynamics explains the enhanced performance, in particular by correcting the difference in propagation speed of eddies. With a 25-member ensemble at low resolution, the SRDA computational overhead is 55 percent and the errors are reduced by 40 percent, bringing the performance very close to that of the high-resolution system (52 percent error reduction), which increases the cost by 800 percent. The reliability of the ensemble system is not degraded by SRDA.
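The sketch below illustrates the SRDA cycle under strong simplifications, assuming placeholder components: the low-resolution model step, the NN upscaling operator (here a simple repeat), and the stochastic EnKF update stand in for the quasi-geostrophic model, the trained network, and the filter configuration used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def forecast_lr(x):
    """Placeholder low-resolution model step (stands in for the QG model)."""
    return 0.95 * x + 0.05 * np.roll(x, 1)

def nn_upscale(x_lr, factor=2):
    """Placeholder for the NN emulator mapping low- to high-resolution fields.
    Here a simple repeat; the paper's NN also corrects systematic LR errors."""
    return np.repeat(x_lr, factor)

def downscale(x_hr, factor=2):
    """Subsample the high-resolution analysis back to the model's low resolution."""
    return x_hr[::factor]

def enkf_update(ens_hr, obs, obs_idx, obs_var):
    """Stochastic EnKF analysis on the high-resolution grid (perturbed observations)."""
    n_ens = ens_hr.shape[0]
    H = lambda e: e[:, obs_idx]                       # observation operator
    A = ens_hr - ens_hr.mean(axis=0)                  # state anomalies
    Y = H(ens_hr) - H(ens_hr).mean(axis=0)            # observed anomalies
    P_xy = A.T @ Y / (n_ens - 1)
    P_yy = Y.T @ Y / (n_ens - 1) + obs_var * np.eye(obs_idx.size)
    K = P_xy @ np.linalg.inv(P_yy)                    # Kalman gain
    obs_pert = obs + rng.normal(0.0, np.sqrt(obs_var), size=(n_ens, obs_idx.size))
    return ens_hr + (obs_pert - H(ens_hr)) @ K.T

# Run a few SRDA cycles with a 25-member low-resolution ensemble (LR size 20, HR size 40).
ens_lr = rng.normal(size=(25, 20))
for _ in range(5):
    ens_lr = np.array([forecast_lr(m) for m in ens_lr])    # 1) low-resolution forecast
    ens_hr = np.array([nn_upscale(m) for m in ens_lr])     # 2) NN emulation to high resolution
    obs_idx = np.arange(0, 40, 4)                          # high-resolution observation points
    obs = rng.normal(size=obs_idx.size)                    # synthetic observations
    ens_hr = enkf_update(ens_hr, obs, obs_idx, 0.1)        # 3) assimilate at high resolution
    ens_lr = np.array([downscale(m) for m in ens_hr])      # 4) back to LR for the next step
```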

Link to publication. You are most welcome to contact us or the corresponding author(s) directly, if you have questions.

Twenty-one years of phytoplankton bloom phenology in the Barents, Norwegian and North seas

Silva, E.F.F., Counillon, F., Brajard, J., Korosov, A., Pettersson, L., Samuelsen, A., Keenlyside, N. 2021: Twenty-one years of phytoplankton bloom phenology in the Barents, Norwegian and North seas. Frontiers in Marine Science. https://doi.org/10.3389/fmars.2021.746327

For a great summary in Norwegian, read this article by our partner, Climate Futures.

Summary: Phytoplankton blooms provide biomass to the marine trophic web, contribute to the removal of carbon from the atmosphere, and can be deadly when associated with harmful species. This points to the need to understand the phenology of the blooms in the Barents, Norwegian, and North seas. We use satellite chlorophyll-a from 2000 to 2020 to assess robust climatologies and interannual trends of the spring and summer bloom onset, peak day, duration, and intensity. We also correlate the interannual variability of the blooms with mixed layer depth (MLD), sea surface temperature (SST), wind speed, and suspended particulate matter (SPM) retrieved from models and remote sensing. The climatological spring blooms start on March 10th and end on June 19th. The climatological summer blooms begin on July 13th and end on September 17th. In the Barents Sea, years with a shallower mixed layer (ML), driven by both calmer waters and higher freshwater input, keep the phytoplankton in the euphotic zone, causing the spring bloom to start earlier and reach higher biomass but end sooner owing to the lack of nutrients upwelled from the deep. In the Norwegian Sea, a correlation between SST and the spring blooms is found: warmer waters are associated with earlier and stronger blooms in most regions but with later and weaker blooms in the eastern Norwegian Sea. In the North Sea, years with a shallower ML reduce phytoplankton sinking below the euphotic zone and limit the SPM increase from bed shear stress, creating an ideal environment of stratified, clear waters in which stronger spring blooms develop. Last, the summer bloom onset, peak day, and duration have been rapidly delayed at a rate of 1.25 days per year, but the causes remain inconclusive based on the parameters assessed in this study.
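As an illustration of how such phenology metrics can be derived, the sketch below computes onset, peak day, duration, and intensity from a daily chlorophyll-a series using a simple median-based threshold. The threshold criterion and the synthetic data are assumptions made for illustration and are not necessarily those used in the study.

```python
import numpy as np

def bloom_phenology(chl, threshold_frac=1.05):
    """Derive simple bloom metrics from a daily chlorophyll-a series.

    Illustrative assumption: the bloom spans the days where chl-a exceeds
    `threshold_frac` times the series median.

    chl : array (n_days,) of daily chlorophyll-a concentrations (mg m-3)
    Returns onset day, peak day, duration (days), and intensity (peak chl-a).
    """
    threshold = threshold_frac * np.median(chl)
    above = np.flatnonzero(chl > threshold)
    if above.size == 0:
        return None                      # no bloom detected
    onset, end = above[0], above[-1]
    peak = int(np.argmax(chl))
    return {"onset": int(onset), "peak_day": peak,
            "duration": int(end - onset + 1), "intensity": float(chl[peak])}

# Example: a synthetic spring bloom peaking around day 120.
days = np.arange(365)
chl = 0.5 + 2.0 * np.exp(-((days - 120) / 15.0) ** 2)
print(bloom_phenology(chl))
```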

Link to publication. You are most welcome to contact us or the corresponding author(s) directly, if you have questions.

Combining data assimilation and machine learning to emulate a dynamical model from sparse and noisy observations: A case study with the Lorenz 96 model

Brajard, J., Carrassi, A., Bocquet, M., Bertino, L. 2020: Combining data assimilation and machine learning to emulate a dynamical model from sparse and noisy observations: A case study with the Lorenz 96 model. Journal of Computational Science. https://doi.org/10.1016/j.jocs.2020.101171

Summary: A novel method, based on the combination of data assimilation and machine learning, is introduced. The new hybrid approach is designed for a two-fold scope: (i) emulating hidden, possibly chaotic, dynamics and (ii) predicting their future states. The method consists of iteratively applying a data assimilation step, here an ensemble Kalman filter, and a neural network. Data assimilation is used to optimally combine a surrogate model with sparse, noisy data. The resulting analysis is spatially complete and is used as a training set by the neural network to update the surrogate model. The two steps are then repeated iteratively. Numerical experiments have been carried out using the chaotic 40-variable Lorenz 96 model, proving both the convergence and the statistical skill of the proposed hybrid approach. The surrogate model shows short-term forecast skill up to two Lyapunov times, retrieves the positive Lyapunov exponents, and reproduces the most energetic frequencies of the power density spectrum. The sensitivity of the method to critical setup parameters is also presented: the forecast skill decreases smoothly with increasing observational noise but drops abruptly if less than half of the model domain is observed. The successful synergy between data assimilation and machine learning, proven here with a low-dimensional system, encourages further investigation of such hybrids with more sophisticated dynamics.
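The sketch below illustrates the alternation of the two steps on a toy system: an EnKF analysis produces a spatially complete state estimate from sparse, noisy observations, and a surrogate model is then refitted on the sequence of analyses. For brevity the surrogate is a linear one-step map rather than a neural network, and the "true" dynamics are a simple stand-in rather than Lorenz 96; all names and settings here are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(2)

def true_model(x):
    """Toy 'hidden' dynamics standing in for Lorenz 96 (illustrative only)."""
    return x + 0.05 * (np.roll(x, -1) - np.roll(x, 2)) - 0.02 * x

def enkf_analysis(ens, obs, obs_idx, obs_var):
    """Stochastic EnKF: combine the surrogate forecast ensemble with sparse, noisy obs."""
    n_ens = ens.shape[0]
    A = ens - ens.mean(axis=0)                       # state anomalies
    Y = A[:, obs_idx]                                # anomalies at observed locations
    K = (A.T @ Y / (n_ens - 1)) @ np.linalg.inv(
        Y.T @ Y / (n_ens - 1) + obs_var * np.eye(obs_idx.size))
    perturbed = obs + rng.normal(0, np.sqrt(obs_var), (n_ens, obs_idx.size))
    return ens + (perturbed - ens[:, obs_idx]) @ K.T

def train_surrogate(analyses):
    """Fit a linear one-step surrogate x_{k+1} ~ x_k @ W on consecutive analysis means
    (a stand-in for the paper's neural-network training step)."""
    X, Y = analyses[:-1], analyses[1:]
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return lambda x: x @ W

n, n_ens, n_cycles = 40, 30, 60
obs_idx = np.arange(0, n, 2)                 # observe half of the model domain
truth = rng.normal(size=n)
ens = rng.normal(size=(n_ens, n))
surrogate = lambda x: x                      # initial trivial surrogate: persistence
analysis_means = []

for k in range(n_cycles):
    truth = true_model(truth)                              # hidden true dynamics
    obs = truth[obs_idx] + rng.normal(0, 0.1, obs_idx.size)
    ens = np.array([surrogate(m) for m in ens])            # forecast with current surrogate
    ens = enkf_analysis(ens, obs, obs_idx, 0.01)           # DA step: spatially complete analysis
    analysis_means.append(ens.mean(axis=0))
    if k > 5:                                              # ML step: refit surrogate on analyses
        surrogate = train_surrogate(np.array(analysis_means))
```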

Link to publication. You are most welcome to contact us or the corresponding author(s) directly, if you have questions.