Climate Attribution Flashcards
ESD
climate change attribution -> determining whether observed changes in weather were caused by climate change
more certainty about the impact of humans -> 1995 IPCC = humans had a 'discernible influence' -> 2021 IPCC = human activity is the dominant cause of warming since pre-industrial times
modelling -> can be used when the observational record is shorter than ~17 years (Trenberth, 2012)
attribution studies test for causation between climate variables and observed changes -> described as one of the greatest scientific triumphs of the 21st century
early anthropogenic research -> experiments
1856 - Foote = warmed glass cylinders filled with different gases -> found CO2 trapped the most heat
1861 - Tyndall = measured how GHGs absorb radiant heat
1896 - Arrhenius = first modelled the effect of GHG concentrations on the Earth's surface temperature
1938 - Callendar = modelled anthropogenic impacts on temperature
early climate models prior to the 1990s failed to represent water vapour properly
they were unstable and would keep warming as the oceans evaporated away
climate models = simplification of the processes taking place across the Earth
they are forced and produce outputs in the form of simulations or projections
climate models = gridded = information is processed across these grid cells; processes smaller than the grid scale cannot be resolved directly -> parameterisation takes place (McGuffie and Henderson-Sellers, 2014)
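a toy sketch of the parameterisation idea (not any real model's scheme; the threshold and scaling factor below are invented for illustration):

```python
# Toy illustration of parameterisation: a process too small for the grid
# (here, convective rainfall) is approximated from grid-cell mean values.
# The functional form and numbers are invented, not from a real scheme.

def convective_rain(mean_humidity: float, threshold: float = 0.8) -> float:
    """Crude sub-grid rainfall estimate (mm/day) for one grid cell.

    Real schemes are far more complex; this only shows the idea of
    replacing an unresolved process with a function of resolved variables.
    """
    if mean_humidity <= threshold:
        return 0.0
    # Rainfall scales with how far humidity exceeds the threshold.
    return 50.0 * (mean_humidity - threshold)

print(convective_rain(0.9))  # ~5 mm/day for a 90%-humidity cell
```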
the models are then often forced with variables e.g. GHGs, aerosols
early attribution modelling -> model was forced (pre-industrial CO2 vs current CO2 etc…) and the output was compared to an unforced control run
equilibrium models -> coarse resolution and crude ocean-atmosphere coupling = inaccurate simulations
Mitchell et al., 1995 -> Hadley Centre coupled model (2.5° x 3.75° grid w/ 20 oceanic + 19 atmospheric layers) -> forced with CO2 and aerosol concentrations
took several years to run = the forced climate was correlated with the observation record to see if they matched -> aerosol component not depicted well as the model assumed a lower planetary albedo
in the 1990s -> models became able to simulate the natural variability of the climate
1995 IPCC report -> low-resolution modelling, but the patterns identified warming and cooling
transient models -> coupled models with improved resolution over the equilibrium models
Santer et al., 1996 -> forced aerosol, ozone and CO2 concentrations to determine vertical atmospheric changes
assumed linearity = did not account for interactions between the GHGs and their combined effect on the atmosphere; the forcing was only run once = sample size of 1
hypothesis-testing era -> realised that volcanic aerosols were less impactful across the midlatitudes due to the winds = more impactful across the tropics, where they nullified the impacts of CO2
ozone hole -> input into climate models to capture its effects on the atmosphere
Tett et al., 1999 -> (HadCM2) did the same as Santer but did not assume linearity between the mechanisms = forced separately with (GHG), (GHG + sulphates), (GHG + sulphates + ozone) -> first time natural and anthropogenic variables were split
tested against the observed 'fingerprint' of warming = cooling in the stratosphere, warming in the mid-upper troposphere (intensified most in the tropics), and slower warming at the surface. Tett's forced modelling = sulphates in the forcing led to a cooler stratosphere; ozone exacerbated this cooling and better represented the radiosonde observation record -> later halving the ozone concentration gave an even better representation
Stott et al., 2000 -> HadCM3 = different climate runs to show that the initial conditions were not producing the simulated climate
meant that either human activity was the cause, or the models were not proficient enough at simulation -> e.g. deep-sea currents possibly not depicted
Tett et al., 2000 -> forced the model with natural variables (solar radiation and volcanic aerosols) to answer model/climate sceptics
Tett et al., 2002 -> improved on the early research -> determined how much warming was attributable to human activity = ~0.5°C
model uncertainty is determined by running the model multiple times = analyse whether there is significant change = ensemble models produce the best results as the signal is clearer e.g. CMIP3 and CMIP5 (McGuffie and Henderson-Sellers, 2014)
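a minimal sketch of why ensembles sharpen the signal, using synthetic numbers rather than CMIP output:

```python
import numpy as np

# Averaging independent runs shrinks internal variability (noise) by
# ~1/sqrt(n), leaving the common forced trend. All values are illustrative.
rng = np.random.default_rng(0)
years = np.arange(100)
signal = 0.01 * years                     # imposed warming trend (degC/yr)
n_runs = 20
runs = signal + rng.normal(0, 0.3, size=(n_runs, years.size))  # trend + noise

single_run_noise = runs[0] - signal
ensemble_mean_noise = runs.mean(axis=0) - signal
print(single_run_noise.std())     # ~0.3 degC
print(ensemble_mean_noise.std())  # ~0.3 / sqrt(20) ~ 0.07 degC
```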
improvements in modelling -> more data being incorporated e.g. soil moisture content in attributing the 2015 Ethiopian drought (Philip et al., 2017)
increased computational power = higher resolution and more runs e.g. weather@home (Philip et al., 2018)
attribution of extreme events -> emerged from improvements in modelling
focuses on determining whether an extreme event was in line with natural weather patterns or was made more likely by climate change
EEA -> Technique 1 (risk-based approach) (Shepherd, 2016) -> models used to produce a counterfactual ensemble (many runs of a climate without human influence, started from different initial conditions) and a factual ensemble (runs based on the observed conditions)
statistical analysis to compare the likelihood of the extreme event between the two ensembles = fractional attributable risk (FAR)
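a minimal worked example of FAR, with invented probabilities standing in for the counterfactual/factual ensemble results:

```python
# FAR = 1 - p0/p1, where p0 is the event probability without human
# influence and p1 the probability with it. These values are invented;
# in a real study they come from the two ensembles.
p0 = 0.01   # chance of exceeding the event threshold, counterfactual world
p1 = 0.04   # chance of exceeding it, factual world

far = 1 - p0 / p1          # fraction of the risk attributable to humans
risk_ratio = p1 / p0       # how many times more likely the event became
print(far, risk_ratio)     # 0.75, 4.0 -> 75% of the event's risk attributable
```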
for extreme event attribution there needs to be a strict definition e.g. temporal and spatial boundaries
e.g. overflow rates for flooding or peak days of precipitation e.g. the record-wet England and Wales autumn of 2000 (Pall et al., 2011)
extreme event attribution -> first paper -> Stott et al., 2004 = human contribution to the European heatwave of 2003
paper defined a grid box over central Europe -> average temperature threshold below the 2003 record but above the previous record
compared the different simulations to determine that human activity did contribute
modelling inaccuracies -> some issues with land-atmosphere feedbacks e.g. vegetation die-off would have exacerbated the drought through albedo-induced shifts
EEA -> Technique 2 (storyline approach) -> analyse the atmospheric conditions which led to the climate event -> assess whether these were anthropogenically driven or not -> known as partial attribution as not all dynamical processes are considered
first used -> 2013 to analyse the 2011 Texan drought (Hoerling et al., 2013)
climate attribution = observational record compared to forced variable shifts to determine human contribution to planetary change
event attribution = observations used to define the event to determine the degree of human influence in its formation
Philip et al., 2018 -> 3-day May floods in France 2016
data and models validated for the extreme event (5 climate models simulated April-June precipitation over the River Seine and Loire, analysing 3-day rates so a return period could be determined) -> a return period for the precipitation rates was produced (GEV distribution with multiplicative bias correction) -> a trend was produced = risk ratio (likelihood of a similar event occurring in a climate without human influence) -> an attribution of the trend was conducted = anthropogenic activity made such extreme precipitation 2.2x more likely over the River Seine and 1.9x more likely over the River Loire.
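a sketch of the GEV/return-period step using scipy's genextreme; the data here are synthetic stand-ins for annual maximum 3-day rainfall, and the event size is hypothetical (the real study used five models and observations over the Seine/Loire):

```python
import numpy as np
from scipy.stats import genextreme

# Synthetic stand-ins for annual maxima of 3-day rainfall (mm).
rng = np.random.default_rng(1)
obs_maxima = genextreme.rvs(c=-0.1, loc=50, scale=10, size=60, random_state=rng)
model_maxima = genextreme.rvs(c=-0.1, loc=45, scale=9, size=500, random_state=rng)

# Multiplicative bias correction: rescale the model so its mean matches obs.
model_corrected = model_maxima * (obs_maxima.mean() / model_maxima.mean())

# Fit a GEV and convert an event magnitude into a return period (1/P(exceed)).
c, loc, scale = genextreme.fit(model_corrected)
event = 70.0   # hypothetical event size (mm per 3 days)
p_exceed = genextreme.sf(event, c, loc=loc, scale=scale)
print("return period ~", 1 / p_exceed, "years")
```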
Philip et al., 2018 -> criticisms
a different paper was published 10 days after the event with similar conclusions -> rejected since it was published too soon -> reviewers wanted a more rigorous analysis of the data.
the aim was to determine the contribution of human influence to the floods -> the study was criticised as it failed to incorporate topography, land-use change and ground saturation rates.
Jones et al., 2008 -> NH summertime temperatures
HadGEM1 and CRUTEM3v -> analyse how anthropogenic climate change is impacting NH summertime temperatures = made possible through better simulations of the NH, so human contributions to the warm periods could be determined (Jones et al., 2008).
Focus on using observed data -> simulated an ALL run, an ANTHRO run and a NATURAL run for the NH -> the runs were compared with the observed data = the ALL run aligned best, capturing multidecadal variability -> NATURAL aligned most poorly -> correlations used to determine this = the ALL simulation's correlation of 0.84 could explain ~70% of the variability (0.84² ≈ 0.71).
Applied Stott's method across 14 regions in the NH -> optimal detection to analyse the role of each of the greenhouse gases on the observed and simulated temperatures.
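a simplified sketch of the fingerprint regression behind optimal detection; real studies also whiten by an internal-variability covariance estimated from control runs, which this skips, and all data here are synthetic:

```python
import numpy as np

# Observations y are modelled as a sum of scaled model response patterns
# ("fingerprints", e.g. GHG and natural) plus noise; the fitted scaling
# factors say how strongly each signal is present. Plain OLS stands in
# for the full optimal-detection machinery here.
rng = np.random.default_rng(2)
n = 150                                    # e.g. grid points x time steps
x_ghg = np.linspace(0, 1, n)               # made-up GHG fingerprint
x_nat = np.sin(np.linspace(0, 12, n))      # made-up natural fingerprint
y = 0.9 * x_ghg + 0.3 * x_nat + rng.normal(0, 0.1, n)  # synthetic "obs"

X = np.column_stack([x_ghg, x_nat])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)  # ~[0.9, 0.3]; factors well above zero -> signals "detected"
```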
ANTHRO run reproduced the observed temperatures = anthropogenic forcing is driving warmer NH summers.
Otto et al., 2018 -> 2010 Thailand flooding (monsoon + tropical storm) -> 10th-20th October
observational precipitation data are influenced too heavily by seasonal variability, and spatial coverage is low for the 0.5° x 0.5° gridded precipitation data set (1979-present) = nevertheless calculated a 2.5-year return period for the precipitation rates; relative to pre-industrial (non-anthropogenic warming) = a 4.8x increase in risk.
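a back-of-envelope reading of those two numbers, assuming the 4.8x risk ratio applies directly to the annual exceedance probability (an assumption for illustration, not stated in the paper):

```python
# Rough arithmetic on the quoted figures.
return_period_factual = 2.5               # years, from the gridded obs
p_factual = 1 / return_period_factual     # ~0.4 per year
p_counterfactual = p_factual / 4.8        # ~0.083 per year without warming
print(1 / p_counterfactual)               # ~12-year return period pre-industrial
```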
models struggle to capture convection -> higher-resolution convective modelling has only started being used more recently e.g. the (still coarse) 25 km model within the Pall et al. (2019) paper
overcome this by analysing the weather patterns which produce these precipitation rates e.g. low-pressure systems -> note this only really works over the midlatitudes, given the relationship between low pressure and precipitation rates -> over the tropics, MCCs (mesoscale convective complexes) are too small-scale to be detected by models (McGuffie and Henderson-Sellers, 2014).
models struggle to depict mountains and oceans -> the HadCM3 model in Stott et al. (2000)'s research was unable to simulate over the North Atlantic as it could not incorporate the NAO+ (McGuffie and Henderson-Sellers, 2014)
improvements in the resolution quality of modelling
number of rows and boxes in gridding has increased e.g. 1970s models had a grid of around 500 km and ~10 vertical layers, while more current research has a spatial resolution of around 100 km and 50-60 layers (McGuffie and Henderson-Sellers, 2014).
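rough arithmetic on those quoted figures, treating both grids as uniform over the same domain:

```python
# Finer grid spacing multiplies horizontal cell counts quadratically:
# 500 km -> 100 km is a (500/100)^2 = 25x increase in surface cells,
# and 10 -> 50-60 vertical layers multiplies total grid points further.
horizontal_factor = (500 / 100) ** 2          # 25x more surface cells
vertical_factor = 55 / 10                     # ~5.5x more layers (midpoint)
print(horizontal_factor * vertical_factor)    # ~140x more grid points overall
# Compute cost grows faster still, since the time step must also shrink
# with the grid spacing to keep the simulation numerically stable.
```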
World Weather Attribution (WWA)
real-time modern-day attribution research -> applies attribution techniques as an event takes place to determine the impact of climate change on the event in near real time -> relevant given the politicisation of extreme event attribution in science.
Teleconnections -> ENSO exacerbating climatic conditions
La Niña 2011 -> increased atmospheric water vapour around China, India and Pakistan (Trenberth, 2012).