Roger Pielke Jr., the well-known political scientist from Boulder, Colorado, who has spent much of his career working on climate topics, gave a very interesting talk for the Irish Climate Science Forum titled Climate Scenarios, Weather Attribution – Is There Hope for Scientific Integrity?
I am very familiar with Roger's work, as the Clintel book The Frozen Climate Views of the IPCC discusses many of the controversial topics he is involved with (extreme weather, disaster losses and extreme scenarios). You can order the book in our webshop; versions are available in English, Dutch, German and Danish.

At Clintel we notice that (A) so many interesting talks, podcasts, interviews etc. are being published that it is hard to keep track of them, (B) we suffer from a lack of manpower (partly due to medical issues within our team), and (C) AI tools such as ChatGPT, Grok, Perplexity and Claude are getting better every day.

I am quite active on X (you can also follow our international Clintel account) and therefore use Grok a lot, the AI tool directly available inside X. But the other tools just mentioned are also great. In this case, I took the following steps to produce the article below:

  1. I watched the talk live on the evening it was given.
  2. When it was uploaded to YouTube, I clicked the “show transcript” button. This opens a new field at the right-hand side of the screen showing an automated transcript of the talk. This transcript contains errors, especially in names.
  3. I copied the transcript to Word and cleaned it up, especially the names.
  4. I then uploaded the Word document to Grok and asked: “Could you write a 1000-word summary article of this talk?”
  5. Within seconds it generated the article shown below.
  6. I then went through the article to check both the content and the language and to add weblinks. I only had to do some light editing. (A sketch of how the summarization step could be scripted is shown below.)
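
For readers who want to automate step 4, here is a minimal Python sketch. It assumes a cleaned-up transcript saved as a plain-text file and an OpenAI-compatible API endpoint (xAI's Grok API is OpenAI-compatible); the file name, endpoint and model name are illustrative assumptions, not a fixed recipe.

```python
# Minimal sketch: send a cleaned-up transcript to an OpenAI-compatible
# chat API and ask for a 1000-word summary. Requires the `openai` package.
from openai import OpenAI

# The endpoint and model name are assumptions; substitute whatever
# OpenAI-compatible service and model you actually use.
client = OpenAI(base_url="https://api.x.ai/v1", api_key="YOUR_API_KEY")

# Read the transcript that was cleaned up by hand (step 3).
with open("pielke_talk_transcript.txt", encoding="utf-8") as f:
    transcript = f.read()

response = client.chat.completions.create(
    model="grok-3",  # hypothetical model name
    messages=[{
        "role": "user",
        "content": "Could you write a 1000-word summary article of this talk?\n\n"
                   + transcript,
    }],
)
print(response.choices[0].message.content)
```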

It’s amazing how well this article is written (check for yourself). It gives a very accurate overview of the topics discussed by Roger. Very often, we want to alert the reader to new videos or reports. Grok and other AI tools can be of great help with this, because in such cases it is not necessarily important to give our personal opinion about the content; AI tools are very good at dryly summarizing a report, article or talk. This is not laziness on our side: with the assistance of AI we can simply produce many more articles a week, which leaves more time to write our own articles and reports.

Summary article about Roger Pielke Jr.’s talk, given online on 23 May 2025

In a very interesting lecture titled Climate Scenarios, Weather Attribution – Is There Hope for Scientific Integrity?, Roger Pielke Jr., a political scientist and senior fellow at the American Enterprise Institute, delves into the challenges facing scientific integrity within climate science. Drawing from his extensive work on his blog, The Honest Broker Newsletter, Pielke explores issues surrounding the misuse of data, flawed methodologies, and the politicization of climate research. His talk, rooted in three case studies, underscores the importance of self-correction in science and questions whether the climate research community is upholding this principle. Below is a summary of his key arguments, which highlight systemic issues in climate science and their implications for policy and public trust.

The Importance of Scientific Integrity
Pielke begins by emphasizing that scientific integrity is foundational to the scientific process. Quoting Carl Sagan, he notes that science thrives on being wrong and correcting itself through criticism and testing. The U.S. National Academy of Sciences reinforces this, stating that researchers have an obligation to ensure the integrity of their data. Pielke clarifies that his talk is not about denying human-caused climate change or dismissing the need for mitigation and adaptation policies. Instead, it focuses on how compromises in scientific integrity undermine the credibility of climate science. He argues that the importance of climate change does not justify cutting corners, as this erodes trust and hampers effective policymaking.

Pielke structures his talk around three case studies: the misuse of the ICAT hurricane damage dataset, the problematic NOAA billion-dollar disasters dataset, and the implausible climate scenarios used by the Intergovernmental Panel on Climate Change (IPCC). Each case illustrates a failure of self-correction and raises questions about how scientific research influences policy.

Case Study 1: The ICAT Hurricane Damage Dataset
The first case Pielke examines involves the ICAT hurricane damage dataset, which originated from his work with colleagues at the National Center for Atmospheric Research (NCAR). In the 1990s, Pielke and Chris Landsea identified a paradox: while hurricane activity in the North Atlantic was at a historic low from 1991 to 1994, hurricane-related damages were unprecedentedly high. Their research showed that societal factors, such as population growth and wealth accumulation in coastal areas, drove the increase in damages, not climate trends. To make this data accessible to the insurance industry, Pielke collaborated with ICAT, a local insurance company, to create a damage estimator tool based on their normalized hurricane loss dataset.
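
To make clear what “normalized” means here: a normalization study restates each historical loss in present-day terms. A common general form (a sketch only; the exact adjustment factors vary from study to study) is

$$ D_{\text{norm},y} = D_y \times \frac{I_s}{I_y} \times \frac{W_s}{W_y} \times \frac{P_s}{P_y} $$

where $D_y$ is the damage reported in year $y$, and the three ratios scale inflation ($I$), wealth per capita ($W$) and the population of the affected coastal counties ($P$) from year $y$ up to a common base year $s$.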

However, after Pielke’s involvement with ICAT ended, the dataset was altered without proper documentation. Subsequent studies, including a 2019 paper by Grinsted et al. published in the Proceedings of the National Academy of Sciences (PNAS), used this modified dataset to claim an increase in normalized hurricane damages, contradicting Pielke’s findings. Pielke discovered that the altered dataset incorporated inflated loss estimates from NOAA’s billion-dollar disasters dataset, creating a “Frankenstein dataset” that mixed inconsistent methodologies. Despite its flaws, this study was cited in the IPCC’s Sixth Assessment Report (AR6) and the U.S. Fifth National Climate Assessment, ignoring a broader literature of over 60 normalization studies, only one of which (Grinsted’s) attributed disaster losses to climate change (see also chapter 12 of the Clintel book The Frozen Climate Views of the IPCC, titled “Extreme views on disasters”).

Pielke’s efforts to correct this misuse through peer-reviewed critiques and calls for retraction were met with resistance from journal editors, highlighting a failure of self-correction. A later study by Willoughby et al. repeated the error, further perpetuating the flawed dataset’s influence. Pielke argues that such oversights not only misrepresent hurricane trends but also mislead policymakers relying on these findings.

Pielke recently published a peer-reviewed paper detailing the issues with the ICAT dataset.

Case Study 2: NOAA’s Billion-Dollar Disasters Dataset
The second case focuses on NOAA’s billion-dollar disasters dataset, initially created as a public relations tool to highlight disaster costs. Pielke critiques its transformation into a supposedly rigorous scientific dataset without the corresponding methodological rigor. He notes that the dataset lacks transparency, with no metadata explaining how loss estimates are compiled or by whom. Over time, changes in methodology, such as aggregating disparate events into single “billion-dollar disasters”, inflated the number of reported events, creating a false impression of increasing disaster frequency.

Pielke’s 2023 paper in a Nature journal applied NOAA’s own scientific integrity standards and found the dataset lacking in reproducibility and consistency. For instance, he uncovered 18 different versions of the dataset over four years, with undocumented changes and irregular inflation adjustments. Despite its prominence in policy discussions, including claims by President Biden and the U.S. Department of Treasury linking it to climate change, the dataset’s flaws undermine its reliability.

In a surprising turn, NOAA discontinued the dataset in 2025, shortly before Pielke was set to testify before the House Science Committee. While some attributed this to political interference, Pielke learned that the dataset’s sole maintainer had retired, leaving NOAA without the capacity to continue it. This underscores a critical flaw: the dataset’s dependence on a single individual rather than standardized, replicable methods. Pielke advocates for economists, not climatologists, to oversee disaster cost tracking, suggesting agencies like the Bureau of Economic Analysis could provide more reliable data. He points out that, contrary to the dataset’s narrative, disaster costs relative to U.S. GDP have remained flat or declined, a positive trend obscured by misleading reporting.
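
The metric Pielke points to at the end is simply the ratio of losses to national income. As a worked illustration (the symbols here are assumptions, not his notation):

$$ s_y = \frac{L_y}{Y_y} $$

where $L_y$ is the aggregate disaster loss and $Y_y$ the U.S. GDP in year $y$. Because both numerator and denominator are in current dollars, the ratio automatically corrects for inflation and economic growth, and it is this share, not the nominal dollar total, that has stayed flat or declined.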

Case Study 3: Implausible Climate Scenarios
The third and most significant case concerns the IPCC’s use of extreme climate scenarios, particularly RCP 8.5 and its successor, SSP3-7.0 (the numbers refer to the assumed radiative forcing, in watts per square metre, reached by 2100). Pielke explains that climate scenarios are not predictions but plausible futures based on socioeconomic assumptions. However, he argues that the IPCC’s scenario development process has prioritized radiative forcing and temperature targets over realistic socioeconomic projections, leading to implausible scenarios.

RCP 8.5, labeled as a “business-as-usual” scenario, assumes a massive increase in coal consumption and a global population of 13 billion by 2100, projections Pielke deems unrealistic based on current energy trends and demographic forecasts. His research with colleagues Matt Burgess and Justin Ritchie shows that real-world emissions and energy outlooks align with more moderate scenarios (2–3°C of warming by 2100), not the extreme RCP 8.5 used in thousands of studies and policy discussions. The IPCC’s AR6 further endorsed SSP3-7.0 as a “current policy” scenario, despite its reliance on unlikely assumptions such as the construction of 7,500 new coal plants by 2100.

Pielke criticizes the lack of scrutiny in scenario development, noting that a small, unaccountable group of experts shapes these scenarios, which influence global policy for decades. He argues that this process, which starts with desired temperature outcomes rather than plausible socioeconomic assumptions, is fundamentally flawed and risks misguiding policy.

The Path Forward
Pielke concludes by questioning why a select group of experts controls scenario development and why flawed studies persist in the literature. He recounts a telling anecdote: an IPCC working group chair privately acknowledged the implausibility of RCP 8.5 but felt unable to say so publicly due to institutional constraints. This highlights a broader issue: the climate research community’s reluctance to self-correct, driven by institutional pressures and the politicization of science.

Despite these challenges, Pielke remains supportive of scientific assessments like those of the IPCC, which synthesize vast amounts of research. However, he stresses that integrity must be prioritized to ensure that accurate information informs policy. His recommendations include greater transparency in data handling, rigorous peer review, and involving economists in disaster cost assessments. He also calls for broader participation in scenario development to ensure plausibility and relevance.

Conclusion
Roger Pielke Jr.’s lecture is a clarion call for restoring scientific integrity in climate science. Through detailed case studies, he exposes how flawed datasets and implausible scenarios have skewed perceptions of climate impacts, from hurricane damages to disaster costs and future warming. These missteps, compounded by resistance to correction, risk undermining public trust and policy effectiveness. Pielke’s work underscores the need for transparency, accountability, and a commitment to self-correction to ensure that climate science serves as a reliable foundation for addressing one of the world’s most pressing challenges.