By Toby Lanser
The scientific method, and by extension scientific research, is founded upon pillars of guesswork and ambiguity. Questions are asked, guesses are made and then tested. Even when a hypothesis holds up, the meaning is still unclear. Can these results even be trusted? Have the investigators used the correct controls? Have they accounted for potential variation within a population? How confident are we in the protocols and techniques used in the study, and do the conclusions stay within what the data can actually support? Skepticism usually carries a negative connotation; in the sciences, however, it is a prized tool for conducting and evaluating good science.
The general public widely accepts that climate models and analyses of historical weather patterns are accurate tools for predicting future weather, even day to day. Given that meteorologists cannot reliably predict precipitation, it is surprising how much credence the projected warming of the next hundred years, estimates of receding ice caps, and anticipated economic costs of increased natural disasters still garner. To appreciate the complexity of climate prediction, consider the stock market: a market-based representation of seven billion people's preferences concentrated in a handful of stock exchanges. Setting aside insider traders and the Oracle of Omaha, it is impossible to predict the direction of a stock price with any certainty; it is glorified guesswork.
Now take the climate: a system of so absurdly many variables, many of them still undiscovered, that it makes predicting the stock market look simple. Even if climatology could borrow the stock market's tools, accounting for every variable that bears on the leading causes of climate change would be nearly impossible. How trustworthy are these predictions, and should we base billion-dollar policy on them? In a paper published this year in Nature Climate Change titled "Challenges and opportunities for improved understanding of regional climate dynamics," Collins et al. discuss the caveats, both conceptual and practical, of climate change projections.
Collins et al. suggest that climatology should move away from its traditional measures of climate change because they fail to "[provide] insight into the dynamical and physical processes that drive variability and change." The reason is that these predictions rely solely on empirical methods rather than quantitative ones. The distinction matters because empirical methods yield only "descriptive indicators," whereas quantitative theories enable "more reliable predictions and projections of climate to inform adaptation." According to the authors, given our limited understanding of even seasonal climate dynamics, and the lack of quantitative data on how storms respond to external factors such as greenhouse gases, empirical methods, these "descriptive indicators," are not reliable enough to model climate patterns and predict changes.
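To make that distinction concrete, here is a minimal, purely illustrative sketch in Python contrasting an empirical trend fit with a toy process-based energy-balance model. The data, parameter values, and model here are invented for demonstration and are not taken from Collins et al.

```python
# Illustrative only: a purely empirical trend fit ("descriptive indicator")
# versus a toy process-based energy-balance model. All numbers are made up.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1950, 2021)

# Hypothetical observed temperature anomalies (deg C): a trend plus noise.
obs = 0.015 * (years - 1950) + rng.normal(0.0, 0.1, years.size)

# Empirical approach: fit a straight line. It summarizes the record but
# says nothing about the physical processes that produced it.
slope, intercept = np.polyfit(years, obs, 1)

# Process-based approach: a zero-dimensional energy balance,
# C dT/dt = F(t) - lambda * T, stepped forward with assumed parameters.
C = 8.0                            # effective heat capacity (assumed)
lam = 1.2                          # climate feedback parameter (assumed)
forcing = 0.03 * (years - 1950)    # idealized, linearly growing forcing

T = np.zeros(years.size)
for i in range(1, years.size):
    dT = (forcing[i - 1] - lam * T[i - 1]) / C   # one-year explicit step
    T[i] = T[i - 1] + dT

print(f"empirical trend: {slope:.3f} C per year")
print(f"toy process model warming by 2020: {T[-1]:.2f} C")
```

The straight line describes what happened; the energy-balance equation, however crude, encodes a hypothesis about why it happened, which is the kind of quantitative theory the authors argue projections should rest on.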
Additionally, the authors use the recent global warming "hiatus" as an example of the disparity between predicted and observed temperature trends. For context, this hiatus lasted from 1998 until around 2013. To visualize it, the authors use a global sea surface temperature (SST) map covering 1979-2012 that shows the gap between projected and observed SST (Fig. 1).
Finally, the authors present several suggestions for producing more accurate climate projections, the first being to include more variables in predictions. As mentioned earlier, many of the variables that influence the climate remain unknown. Here, in an approach called "high-resolution coupled modeling," Collins et al. suggest resolving features such as coastlines and mountains, since these regions are thought to drive regional change that ultimately affects the global climate. Another suggestion, "complex diagnostics and simplified models," takes a more Occam's-razor view of modeling. Because projections must account for an extraordinarily large number of variables (dynamics, physics, biogeochemical cycles, and so on), it may be best to simplify the modeling itself, for example by using a basic radiative-cooling scheme paired with well-established aquaplanet simulations. Though simple, this approach lets scientists take a more interdisciplinary view of projections by borrowing well-observed dynamics of planetary atmospheres, which "provide[s] insight and allow[s] us to develop our theoretical understanding of climate dynamics in the complex Earth."
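As a rough illustration of what such a simplified model can look like, the sketch below relaxes temperature toward a prescribed radiative-equilibrium profile on an idealized latitude grid, in the spirit of Newtonian-cooling aquaplanet setups. The grid, equilibrium profile, and relaxation timescale are assumptions chosen for demonstration, not configurations taken from the paper.

```python
# Illustrative only: a toy Newtonian-cooling scheme of the kind used in
# idealized aquaplanet experiments, relaxing temperature toward a
# prescribed radiative-equilibrium profile. All parameters are assumed.
import numpy as np

lats = np.deg2rad(np.linspace(-90, 90, 73))   # idealized latitude grid
T_eq = 315.0 - 60.0 * np.sin(lats) ** 2       # equilibrium profile (K), assumed
tau_days = 40.0                               # relaxation timescale, assumed
dt_days = 0.25                                # time step

T = np.full_like(lats, 280.0)                 # start from a uniform state
for _ in range(int(400 / dt_days)):           # integrate for roughly 400 days
    # Newtonian cooling: relax T toward T_eq on timescale tau.
    T += dt_days * (T_eq - T) / tau_days

print(f"equator-to-pole contrast after spin-up: {T.max() - T.min():.1f} K")
```

The appeal of toy setups like this is that their behavior can be understood completely, which is the kind of theoretical grounding the authors hope simplified models can lend to full-complexity projections.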
The authors do not question the legitimacy of a changing climate. Rather, they bring much-needed skepticism to the study of its nature and causes, a field that has grown into a matter of public concern over the last thirty years. Hopefully, the models discussed by Collins et al. can be implemented successfully in the future, enabling scientists and the general public alike to make more informed decisions about the causes and effects of a changing climate and what it means for humanity in the latter half of the twenty-first century and beyond.
References:
Collins, M., Minobe, S., Barreiro, M., Bordoni, S., Kaspi, Y., Kuwano-Yoshida, A., … Zolina, O. (2018). Challenges and opportunities for improved understanding of regional climate dynamics. Nature Climate Change, 8(2), 101-108. doi:10.1038/s41558-017-0059-8 https://www.nature.com/articles/s41558-017-0059-8