Even as Helene’s high waters and media attention recede, the losses deepen.

As of this writing, it’s been two weeks since Hurricane Helene made landfall in the Big Bend area of the Florida coast and barreled inland, flooding a 500-mile-plus swath, most notably in Appalachian North Carolina. The known death toll, which now exceeds 200, continues to rise. An additional toll will be exacted from injuries, from immediate health hazards of the flooding, and from the devastating long-term psychological and spiritual damage that survivors will continue to experience over the coming months and years.

News coverage has been extensive. Two pieces illustrate the challenges and detail what to expect. The first, from the Washington Post and dated Thursday, October 3, headlines the thousands of uninsured homes in the hurricane’s path. Some excerpts:

…On average, just a tiny fraction of households in the inland counties hit hardest by Hurricane Helene and its remnants had flood insurance, according to a Washington Post analysis of recent data from the National Flood Insurance Program. Across seven affected states, only 0.8 percent of homes in inland counties affected by the storm had flood insurance.

Experts say that lack of insurance will prove deeply damaging for those households in the years to come, adding to the overall toll of the devastating storm.

Available disaster assistance funds are largely intended to pay for temporary shelter, food and water — not to rebuild homes. And thanks to a combination of outdated policies and high prices, most people don’t know they should enroll in flood insurance — or can’t afford it.

Without insurance, people struck by floods have to rely on a network of complicated federal programs or aid from nonprofits to rebuild their lives. The Individual Assistance Program, run by FEMA, can help provide urgent resources but is capped at around $42,500 for housing and $42,500 for other costs. Most recipients get far less.

A second article, by Rachel Young and Solomon Hsiang, published in the journal Nature on Wednesday, October 2, and entitled Mortality caused by tropical cyclones in the United States, looks at the health impacts of such losses. That same day’s New York Times provides a readable summary. From the NYT:

Over the past week, the official death toll from Hurricane Helene has surpassed 100 as the vortex creeping inland from Florida submerged homes and swept away cars. But the full weight of lost lives will be realized only years from now — and it could number in the thousands…  

Looking at 501 events from 1930 to 2015, researchers found that the average tropical storm resulted in an additional 7,000 to 11,000 deaths over the 15 years that followed.

Overall during the study period, tropical storms killed more people than automobile crashes, infectious diseases and combat for U.S. soldiers… 

The pair used a technique that has also provided a more complete understanding of “excess deaths” caused by Covid-19 and heat waves. It works by looking at typical mortality patterns and isolating anomalies that could have been caused only by the variable under study — in this case, a sizable storm.

Prior research analyzed data over shorter periods and focused on disruption of health care for dementia and cancer patients, among others.
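For readers who want a concrete feel for the excess-deaths approach described above, here is a minimal, purely illustrative sketch under simplifying assumptions: build a seasonal baseline of expected deaths from the years before the storm, then sum the gap between observed and expected mortality over a follow-up window. Everything in it – the function, the synthetic county data, the simple monthly-average baseline – is hypothetical; Young and Hsiang’s actual analysis uses a far more sophisticated statistical model applied to hundreds of storms.

import numpy as np

def excess_deaths(monthly_deaths, storm_month, baseline_years=5, horizon_months=60):
    """Cumulative excess deaths after a storm, relative to a seasonal baseline.

    monthly_deaths : 1-D array of monthly death counts for one county.
    storm_month    : index of the month the storm struck.
    """
    # Expected deaths for each calendar month, averaged over the pre-storm years.
    pre = monthly_deaths[storm_month - 12 * baseline_years : storm_month]
    baseline_by_month = pre.reshape(baseline_years, 12).mean(axis=0)

    # Observed deaths in the follow-up window, compared month by month to the baseline.
    post = monthly_deaths[storm_month : storm_month + horizon_months]
    expected = np.tile(baseline_by_month, -(-len(post) // 12))[: len(post)]
    return float((post - expected).sum())

# Hypothetical data: 20 years of monthly counts for one county, with mortality
# elevated by 20 deaths per month for five years after a storm in month 120.
rng = np.random.default_rng(0)
deaths = rng.poisson(lam=100, size=240).astype(float)
deaths[120:180] += 20
print(round(excess_deaths(deaths, storm_month=120)))  # prints an estimate near the injected ~1,200

Even this toy version makes the core point of the quoted passage: the toll is measured against what mortality would have looked like without the storm, which is why the tally keeps growing for years after the waters recede.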

Young and Hsiang, as well as earlier researchers, see the longer-term impacts as understandable in view of home loss, financial stress associated with the spend-down of retirement savings to cover uninsured losses, the deterioration of health and safety services at the community level, increased substance abuse, and more.

Given these realities, it’s not much of a stretch to draw similar conclusions about the financial and health impacts of other disasters such as wildfire and earthquake – and anticipate a future filled with such losses as far as the eye can see.

What is the way out of this downward spiral of repetitive loss and continuing public suffering – of such extent as to compromise the social fabric of the country?

A third article, published just yesterday, provides a hint.

According to the New York Times, North Carolina legislative decisions played a role in the state’s vulnerability. An excerpt:

…The amount of rain that Tropical Storm Helene unleashed over North Carolina was so intense, no amount of preparation could have entirely prevented the destruction that ensued.

But decisions made by state officials in the years leading up to Helene most likely made some of that damage worse, according to experts in building standards and disaster resilience.

Over the past 15 years, North Carolina lawmakers have rejected limits on construction on steep slopes, which might have reduced the number of homes lost to landslides; blocked a rule requiring homes to be elevated above the height of an expected flood; weakened protections for wetlands, increasing the risk of dangerous storm water runoff; and slowed the adoption of updated building codes, making it harder for the state to qualify for federal climate-resilience grants.

Those decisions reflect the influence of North Carolina’s home building industry, which has consistently fought rules forcing its members to construct homes to higher, more expensive standards, according to Kim Wooten, an engineer who serves on the North Carolina Building Code Council, the group that sets home building requirements for the state.

This NYT article is framed in the negative but contains a positive point: breaking the vicious cycle of disaster loss and its pernicious effect on America’s prospects will require policy formulation and legislation at state and national levels. And in a democracy, that in turn requires a public that is on the same page and holding its elected officials accountable – not a public that is bitterly polarized and obfuscating very real challenges with misinformation and falsehoods – all while disaster survivors live in an unending Groundhog-Day nightmare of misery.

One special psychological and spiritual challenge for Hurricane Helene survivors will be the fleeting national attention to their needs. Americans are well-intentioned, but by this time tomorrow attention will have shifted to the impacts of Hurricane Milton. FEMA and other national and state agencies, already stretched thin, will have to accommodate an additional set of urgent needs. In a few weeks, the outcomes of the 2024 federal and state elections and their aftermath will replace hurricanes as the national focus.

As individuals and a nation, we can and must remain mindful of those among us recovering from disasters and provide sustained help, and all the more so as life moves on.


Impact! Another NOAA vignette…

…following up from the previous LOTRW post, recalling an occasion from nearly forty years ago (stop me if you’ve heard this). The event? An annual NOAA/OAR management retreat sometime in the 1980’s. The directors of the NOAA Environmental Research Laboratories (ERL) had gathered in Rockville from across the country, along with DC-based OAR leadership. I was a newly minted and not-quite-mainstream member of the group. The ERL lab directors were all in the Senior Executive Service – they’d been grandfathered in when the SES was established a few years earlier. I was head of a newly established entity by the name of the Environmental Studies Group. It was a stuff-bag of diverse pieces of research that had formerly reported directly to the ERL Director, cluttering his organizational chart and distracting him from larger concerns. ESG comprised a climate research group, busily developing and curating the COADS data set; a weather research program area, studying warm-season mesoscale convective storms; a Weather Modification Program Area (a Congressional earmark); and PROFS, an R&D program actively shaping development of the 1980’s NWS Modernization and Associated Restructuring (and now embedded within NOAA’s Global Systems Laboratory). In aggregate, ESG was larger than any of the formally titled laboratories, in terms of both budget and staff. However, I was not SES. I was a lowly GM-15[1].

The laboratory directors and OAR management had spent the first full day hearing from NOAA management on agency priorities; highlighting ongoing research programs and budgets; scheduling the mandated periodic reviews of the individual laboratory activities; considering possible budget initiatives to propose for the coming year; and more. At the end of the formal day, they adjourned to another conference room in the meeting hotel, there to enjoy a dinner, followed by an evening session. The latter was more relaxed, contemplative, and forward-looking – discussion of other-agency R&D, the national and international political scene, etc.

One topic that came up was the abiding challenge of maintaining and improving coordination and cooperation with the other, more operational NOAA Line Offices (NWS, NOS, NMFS, and NESDIS – dealing with weather, ocean, fisheries, and satellite services respectively). The problem was framed as the challenge of balancing the long-range, basic-research mission of the laboratories against the short-term, often crisis-driven priorities and concerns of the service organizations. A real conundrum.

As a distinctly junior member of the team, I’d been silent most of the day, but at this moment I thought I had something to offer. Taking a deep breath, I spoke up: “You know, when we are considering promotion of one of our individual researchers, we ask for six letters of reference from academic researchers in the same field. We seek feedback on creativity and originality, the presence or absence of guidelines on their work, their national and international reputation, etc. One key question the academics are asked is this: ‘At your university, would you consider this government researcher to be at the level of full professor (if we were considering a promotion to GS-15)? An associate professor (if we were considering promotion to GS-14)? Or assistant professor (GS-13)?’”

I continued: “We know in our hearts most academics look down on government researchers anyway (at least this was true in the 1980’s). And how much new information does that sixth faculty letter supply on the margins? How about we replace that sixth letter with the requirement that the researcher in question get one letter of similar assessment and support from someone in one of the service line offices of NOAA? We could ask: has this person’s work been useful to their services, past and present? Do they consider its continuation critical to Line Office goals and efforts? Just one letter out of the six! The effect of this on the work and priorities of our researchers would be essentially instantaneous. In the same way they’d been cultivating a support group among academics, they’d start to build a following somewhere in or across NOAA. And this change in behavior would be accomplished at zero cost.”

As I said, the laboratory directors were a senior, strong, fiercely independent bunch. Some were members of the NAS or the NAE. Some had honorary degrees. They weren’t shy or timid. They had the strength of their convictions. As a result, whatever the topic throughout the day, they had been arguing – no, let’s say “animatedly debating”.

Maybe it was the end of the day. Perhaps the convivial dinner conversation had had its effect. But, whatever the reason, my humble suggestion brought them together, created unity, for the first time since the sun had come up that morning.

They all quickly agreed it was the worst idea they’d ever heard.

________________________________________

As you might guess, my idea had no impact, at least at the time. Today, with emphasis on application readiness levels, as adopted not only by NOAA but by other agencies, and on additional metrics, the problem may have taken care of itself. But to me, the idea still seems sound.

And speaking of impact (the subject of this post), as you can also tell, the incident didn’t have a negative impact on me personally. I wasn’t wounded. I’m not still talking about it forty years later. I’m not still nursing a grudge…  😊


[1] Management blamed the ceiling on SES slots; initially I chafed at this, but it turned out to be a huge positive. During those years, when it came time for annual performance reviews, my leadership was compared against that of the SES’ers, and my performance was always deemed pretty good for a GM-15.


Boom! David Guston reflects on impact.

“Everybody talks about impact, but nobody does anything about it.”[1]

Have admired David Guston and his political science/policy research from afar. You should too – but probably from way closer-up.

By way of helping you decide whether you want to do just that, consider one of his recent publications – a short article in the Summer 2024 edition of NASEM’s Issues in Science and Technology, entitled What We Talk About When We Talk About Impact. This is by no means his most significant work (his c.v. lists half a dozen books, myriad book chapters, over thirty peer-reviewed publications, and an extensive body of editorial pieces, spanning a broad range of topics). But like his other perspectives, this is robustly structured, meticulously crafted, a fascinating read – and addresses an important subject – impact. We all want our work to matter – and matter more. David Guston shows us how.

Impact has a long history, and even a prehistory (before there was the word, there was impact – think Chicxulub). With great oversimplification, “impact” became a thing for scientists when science became expensive and started depending on financial support from the public – mostly non-scientists. Scientists had to start explaining the value of their work to people who by and large were making less money than scientists and struggling to make ends meet. Immediately after World War II, the value of science and technology was obvious. The atomic bomb, radar, and penicillin had helped the Allies win the war. There was a honeymoon period. Scientists said, “Give us lots of money and don’t ask too many questions or interfere, and someday you’ll be glad you did.” Remarkably, both sides of this social contract kept their part of the bargain (and benefited commensurately) for decades.

But as costs have mounted and competing claims for public funds have emerged, the bloom has come off this rose. One milestone: in the late 1990’s the National Science Foundation began requiring proposals to address not only the intellectual merit of the proposed work but also its broader impacts. Scientists had long accepted the need to justify the former but at first resisted and have never really embraced the latter.

You could see that problem coming. An on-the-ground experience from the 1970’s: I was a branch chief in NOAA’s Wave Propagation Laboratory. We developed remote-sensing techniques for observation and study of the atmosphere. We were good at what we did and knew it. We had swagger. We had a great laboratory director – C. Gordon Little, a man of scientific acumen, integrity, and vision. Morale was generally high, but there were a few rumblings about rank and promotion and fairness. In response, Gordon decided that each of us (130 strong) should anonymously rank every lab employee, including ourselves, on “their net positive impact (otherwise undefined!) on the laboratory.” He decided further that the employee survey results and the branch chief results (there were seven of us) should be separately tabulated.

We all (especially the handful of us in management) thought his idea insane. The lab groups were diverse (acoustic, optical, radar, and radiometric atmospheric studies, etc.). The work ranged from remote-sensing theory to technique development to application in research to tech transfer to the service-providing elements of NOAA or other federal agencies. Some WPL groups and employees were largely NOAA-funded; others were funded primarily by other agencies. Absent any specificity to the “impact” criterion, the rankings would be entirely subjective. The results would surely prove divisive and tear the laboratory apart! But Gordon persisted.

The results were amazing – almost miraculously so. The 130 employees, including the managers, were ranked. The variances in the rankings were minuscule. No one’s position varied by more than one spot up or down. The managers were at or near the top of the heap, and there was no detectable difference between the managers’ rankings of the staff and the overall peer ranking of the staff. The mood at the debriefing was remarkably celebratory[2].

But – germane to the broader-impacts discussion here – in the Q&A near the end of the meeting, Gordon was asked how he would evaluate work on the tech transfer element of the laboratory’s mission. He casually remarked, “Well, by definition, we would wait until there had been actual uptake of a technology by some NOAA service line office or another federal agency.”

Perhaps a dozen members of the lab were immediately dismayed. They knew full well that this process took years, not months, if it happened at all – and that it was subject to the vicissitudes of politics and funding, wholly outside WPL’s control. Gordon quickly walked back his statement, but some damage was done. At least one of our rising stars in such work left the lab not long after to take a job at another institution.

To use a football analogy (after all, it’s that season), this was a fumble on the five-yard line.

Back to David Guston.

Guston provides much-needed help. He lays out a thumbnail history of impact. He identifies ways research has impact: directly shaping policy goals and language; changing general thinking; educating and training professionals; interacting with lay knowledge (he offers explanatory detail on each of these).

But it is the evaluation – the measurement – of impact to which he gives the most attention. He points out the many difficulties of evaluation, but instead of throwing up his hands in dismay he provides a framework for approaching the challenge with the same discipline scientists apply to their core research areas. He gives attention to the pathway between the research in question and societal outcomes (a so-called knowledge value connective, or KVC). And he provides an impact catechism (and links and references to a universe of proto-catechisms, themselves of interest) – a set of questions to be applied to the research:

  • What kind(s) of impacts (category/type) are you aiming at?
  • What scope (extensivity) and depth (intensivity) of impact are you planning for?
  • What specific audience(s) are you addressing or constructing?
  • What (causal) model do you have in mind for creating impact?
  • How are you creating opportunities for impact?
  • Who or what (KVC) connects your outputs to impacts and outcomes?
  • How are you participating in, researching, or keeping track of (intermediate) impacts along the way?
  • How will you tell the story of the impact that you have with humility and accuracy?

And speaking of humility (as in the last bullet), Guston doesn’t present these questions as prescriptive, let alone holy writ. He’s merely saying that if, as a starting point, researchers ponder these questions as applied to their work, and if faculties and other research groups discuss them, out of those conversations will flow a more robust understanding of “impact,” its worth as a concept, and the worth of research in light of that concept[3].

A kind of DIY approach to building and articulating your impact in a way that will allow you to stand tall rather than hem and haw.

The idea bowls you over, doesn’t it?


[1] With apologies to Charles Dudley Warner, who made the same observation about the weather (and Mark Twain, who usually gets the credit for it).

[2] In hindsight, I think this positive result had a lot to do with the attention lab management gave to promotions and career development – unmatched by any other group in my professional experience. We dealt with promotions at least quarterly. Branch chiefs would prepare promotion packages, promoting, say, someone from GS-12 to GS-13. There would be a group discussion of the individual case. Then two further discussions would follow: is this the GS-12 from the lab we would choose to be promoting at this time? And where does this person rank relative to the current GS-13’s? The process put a spotlight on branch chiefs who were either too slow or overeager to recognize their staff’s talent. Branch chiefs out of line with the consensus would be encouraged (sometimes told) either to hold back an action for a quarter or to put forward an overdue action at the next quarter.

[3] And perhaps an improved catechism.


Science diplomacy. A forecast

Mention science diplomacy to a geoscientist or a social scientist focusing on Earthly matters, and you’ll likely call to mind the Intergovernmental Panel on Climate Change, or IPCC (consider, e.g., the reader’s comment on the recent LOTRW post revisiting this topic[1]).

And justly so. Since 1988 this United Nations body has produced “regular assessments of the scientific basis of climate change, its impacts and future risks, and options for adaptation and mitigation.” The multiple efforts have involved thousands of scientists, diplomats, and national leaders as authors and reviewers. They’ve galvanized world action on climate change – and garnered the 2007 Nobel Peace Prize along the way.

But we are likely on the threshold of far larger diplomatic and global efforts stemming from advances in science. Consider just a single example: the United Nations’ Global Digital Compact. Here’s some background, taken verbatim from that website:

Following the political declaration adopted at the occasion of the United Nations’ 75th anniversary in September 2020, the Secretary-General in September 2021 released his report Our Common Agenda. The Common Agenda proposes a Global Digital Compact to be agreed at the Summit of the Future in September 2024 through a technology track involving all stakeholders: governments, the United Nations system, the private sector (including tech companies), civil society, grass-roots organizations, academia, and individuals, including youth.

The Global Digital Compact is expected to “outline shared principles for an open, free and secure digital future for all”. The Common Agenda report suggests issues that it might cover, including digital connectivity, avoiding Internet fragmentation, providing people with options as to how their data is used, application of human rights online, and promoting a trustworthy Internet by introducing accountability criteria for discrimination and misleading content. Find out more here.

(That final link is to a PDF file that offers a bit more detail. Tellingly, it makes a direct comparison to the UN work on climate change, referring to the two issues as seismic shifts that will shape the 21st century.) The Wikipedia article on the Global Digital Compact provides additional context, including a list of key aspects:

  1. Connectivity: Ensuring that all people, including schools, have access to the internet and digital tools for connectivity and socio-economic prosperity.
  2. Internet Fragmentation: Preventing the division and fragmentation of the internet to maintain a unified global digital space.
  3. Data Protection: Providing individuals with options for how their data is used and ensuring their privacy is respected.
  4. Human Rights Online: Applying human rights principles in the digital sphere, including freedom of expression, privacy, and protection from discrimination and misleading content.
  5. Artificial Intelligence Regulation: Promoting the ethical development and use of artificial intelligence in alignment with shared global values.
  6. Digital Commons: Recognizing digital technologies as a global public good and encouraging their development and use for the benefit of all.

Whew! Giving the attention needed to any individual topic, considered on its own, constitutes a heavy lift. In aggregate, the work is truly daunting. And note that the Compact is limited entirely to the peaceful use of digital science and technology. Diplomatic activity with respect to digital threats is conducted under the label of cyber security. UN activity here is the province of a separate Office of Counter-Terrorism.

Bottom line? It doesn’t take much imagination to see that the task of sorting out all the ramifications of digital science and technology for diplomacy requires urgent attention from large numbers of scientists and technologists of every stripe, from every country, from governments, private sector, and civil society.

This is a two-edged sword. For scientists who love the discipline of their science but wish the work offered more and deeper interpersonal contact and relationships on a daily basis, a host of fulfilling, meaningful careers beckon. Science diplomacy is an opportunity.

But for scientists and engineers of the more traditional, discipline-focused, academic sort, the need for international attention to and regulation of digital science and technology imposes an additional overhead. Researchers already spend too much of their time on academic red tape, constant proposal writing, the special problems of foreign students, and more. For researchers, science diplomacy is a burden.  

And climate change and digital science don’t by any means exhaust the need for science diplomacy. It overlays every global endeavor and aspiration: food-, water-, and energy resources; public health; global commerce; etc.

Science diplomacy, much like science itself, appears to be an endless frontier.


[1] You can find some of the LOTRW posts on this topic from previous years here.


Science Diplomacy “…and furthermore…”

The previous LOTRW post reflecting on science diplomacy triggered two responses that prompt this brief postscript.

The first came from a social-scientist/Facebook friend via Messenger. My colleague opined:

The concept of what is valid academic scholarship and research must change first. At [a certain academic institution], my chapters and monographs for the IPCC, WMO, and WHO were not considered academic enough for promotion. I saw this also in national scientific meetings. I believe all scientists should take a course in the philosophy of science.

Possibly I’m misinterpreting the message, but it clearly came from a place of deep pain. I think my colleague was saying that to expect academic scientists to enter the diplomacy arena in a serious and sustained way, an appropriate recognition-and-reward structure for such work must first be put in place within the academic community.

From his lips to God’s ears!

His observation was eye-opening, like so many of the thoughts he’s shared over past decades. It triggered a rush of thoughts. To start, university failures to acknowledge and adequately reward an academician’s broader impacts indeed frustrate faculty, especially those whose research is fertile ground for such practical use. This problem is universal – not confined to any one institution. It surfaces as well at the NSF level – in the form of the requirement that proposals address not only intellectual merit but also broader impacts. This has come in for its own share of criticism (here’s one example). But this particular problem may be low on any fix-it list for university leaders, who are struggling to garner research dollars in the face of larger financial uncertainties, including constrained or declining state funding; to reduce barriers to recruitment and retention of international students; to ferret out and remove racism in curricula and on campus; and more. All this while they’re attempting to stem the bloating of university administration – and at the same time retain their own jobs.

Not to mention the hollowing-out of entire academic departments – notably computer science, data science, and AI. Faculty in these disciplines find the salaries and reward structure for such work far more attractive in the private sector, even as incoming students clamor for cutting-edge curricula.

This latter challenge prompts a clarification. In writing the post, I’d been thinking of science diplomacy as a career for Ph.D. scientists of every stripe – positions embedded in the U.S. State Department, in the myriad international offices of federal- and state-level government agencies, or in the private sector – rather than as a sideline to academic work. My apologies for failing to make that clear.

The second comment on the post came from John Plodinec. Quoting him here:

Another definition: Ambrose Bierce defined diplomacy as “the patriotic art of lying for one’s country.”

While I grant that we have encouraged people to specialize in STEM topics, I don’t agree that that is why “public trust in Science is declining.” To me, it is much more likely that the arrogance of many scientists AND the increasing slide from objectivity to activism has soured the Public. Too many of us overly appreciate our islands of knowledge while ignoring the vast oceans of our ignorance.

Well said, John! Regular readers of the blog may recognize this isn’t his first comment over LOTRW’s fourteen years. His thoughts are insightful and at the same time refreshingly blunt. My earlier Walter Trumbull quote suggests I spent perhaps more time than was prudent caroming down the rabbit hole of diplomacy quotes, and yet that piquant Ambrose Bierce line escaped my notice. It really captures the idea! As for personalizing the reason for declining public trust in science, that’s not a matter of either-or. If many scientists (particularly of his and my generation) are arrogant, that “stems” in part from the Sputnik-era idea that STEM needn’t be for everyone; the public education system could focus attention on a limited (and therefore in some sense elite) few students of the right inclination and some natural ability. Events since have highlighted the dangers and inequities of such a policy approach.

As for the slide from objectivity to activism, this represents a real and abiding problem for science and scientists. I blogged on this back in August of 2018, and have touched on the fringes of the subject repeatedly before and since. For example, as I noted in EOS in 2015,

In Earth sciences, our proposed social contract sounds dangerously close to this: “We’re in the business of documenting human failure. But lately, the speed, complexity, and magnitude of that failure has picked up—with respect to management of natural resources, environmental stewardship, and hazard risk. If our documentation is to keep pace, we need more funding.”

To a beset, struggling general public this can easily look unhelpful, even arrogant. In today’s polarized and beleaguered society, that’s dangerous.

But if we “slide into activism” (and by the way, this phrasing itself contains a whiff of arrogance) and are artless and ham-fisted about it, we deserve all the criticism we get. The best we can hope for is that most of us, and the majority of our work, focus on the science – and that at least a few of us, and some of our work (clearly labeled as non-science, but in some way science-informed), attempt to make the connection to real-world needs.

In closing, I’d like to call everyone’s attention to a really interesting column by David Brooks, writing in the New York Times, entitled You’re Only as Smart as Your Emotions. His perspective deserves a read and some reflection on the part of academics – both faculty and administrators – and of scientists of every stripe contemplating engagement with the larger society. His opening, to whet your interest:

If I were asked to list the major intellectual breakthroughs of the last half-century, I would certainly include the revolution in our understanding of emotion.

For thousands of years, it was common in Western thought to imagine that there was an eternal war between reason and our emotions. In this way of thinking, reason is cool, rational and sophisticated. Emotions are primitive, impulsive and likely to lead you astray. A wise person uses reason to override and control the primitive passions. A scientist, business executive or any good thinker should try to be objective and emotionally detached, kind of like a walking computer that cautiously weighs evidence and calculates the smartest way forward.

Modern neuroscience has delivered a body blow to this way of thinking. If people thought before that passions were primitive and destructive, now we understand that they are often wise. Most of the time emotions guide reason and make us more rational. It’s an exaggeration, but maybe a forgivable one, to say that this is a turnabout to rival the Copernican Revolution in astronomy.

You know you want to read more! And you should do so.


A few reflections on science diplomacy.

Then he said to his disciples, “The harvest is plentiful, but the laborers are few; therefore pray earnestly to the Lord of the harvest to send out laborers into his harvest.” – Jesus (Matthew 9:37-38 ESV)

Diplomacy frequently consists in soothingly saying “Nice doggie” until you have a chance to pick up a rock – Walter Trumbull[1].

A recent AGU announcement of an upcoming (August 20th; that’s this Tuesday!) webinar on science policy skills caught my eye. Turned out it’s part of a Global Policy Webinar Series they’re offering. Scientists might want to give the topic some thought. Here’s a definition of diplomacy more broadly, to get things going:

  • the profession, activity, or skill of managing international relations, typically by a country’s representatives abroad.
  • the art of dealing with people in a sensitive and effective way.

It’s but a short step to zero in on science diplomacy. Googling this term provided this material courtesy of generative AI:

Science diplomacy is the use of scientific collaboration and exchanges to help achieve diplomatic goals in international relations. It can involve scientists and scientific organizations working together across borders, or nations coming together to negotiate agreements. Science diplomacy can take many forms, including: 

  • Science for diplomacy

Using science as a soft power to build goodwill between nations and advance diplomatic goals. For example, scientists might collaborate on multi-national projects in physics or astronomy, and their nations would then negotiate agreements on financing and management. 

  • Science in diplomacy

Using science to directly support diplomatic processes. For example, scientists might provide evidence and advice to inform decision-making in foreign and security policies. This can help ensure that global policy efforts and foreign policies are informed by scientific evidence. 

  • Diplomacy for science

Using science to help in times of political strain. For example, joint research efforts can help nations keep talking and build trust when their political relations are strained. 

Science diplomacy has occupied the minds of scientists and national political leaders for quite a while – dating back, say, at least to Leonardo da Vinci. Wikipedia’s article on the man provides a fair amount of detail. In 2019, Australian historians Susan Broomhall and Joy Damousi provided a sparkling little piece entitled How Leonardo da Vinci made a living from killing machines. Da Vinci made his science and inventions for war available to multiple city states over his lifetime. All these transactions were accompanied by diplomacy.

Of course, the roots of diplomacy itself go back much further. An account, one of many, from the Bible: Early in King David’s reign, he reached out to a king of the Ammonites whose father had just died. The diplomatic act was misinterpreted (or was it?) by that king’s advisers, with tragic results  for the Ammonites, and for their allies, the Arameans (2 Samuel 10).

Some reflections:

The harvest is plentiful. Science diplomacy became a thing during the Cold War because science and technology (notably the atomic bomb, radar, and penicillin) played a pivotal role in the outcome of World War II. It’s even more consequential today. Nations are racing to advance and harness AI; energy, agricultural, and water technologies; and biological and medical science. The outcomes will shape not only geopolitical security and the overall welfare and future prospects of humanity as a whole, but also the allocation and distribution of those benefits across the eight billion of us. The National Academies have been asked several times to provide advice on the topic – resulting, for example, in the 2015 study Diplomacy for the 21st Century: Embedding a Culture of Science and Technology Throughout the Department of State.

The laborers are few. If science diplomacy is so consequential, why do so few scientists and engineers enter the field? Some possible explanations (you may readily form your own superior list). Start with the superficial: the U.S. Foreign Service at one time had an S&T career track but then did away with it. On a deeper level, the reward structure (salaries, promotion, reputation, and more) for scientists and engineers favors staying within discipline. And though scientists and diplomats are both in the business of problem solving (and therefore both rely on observation, critical thinking, and logic), they approach problem solving in quite different ways. With enormous oversimplification: scientists rely on experiment, curiosity, logic, creativity, skepticism, objectivity, and conflict resolution through peer review. Diplomats need these same qualities, but place greater emphasis on adaptability, empathy, patience, and conflict resolution through compromise. Loyalty to country also matters.

Add this: scientists can make a hundred mistakes, but so long as none of these make it into print, even a much more limited number of successes can make their careers golden. By contrast, diplomats can make a single mistake and (like those Ammonites and Arameans) never get another chance to put things right. “Move fast and break things” doesn’t work at the State Department.

Therefore pray earnestly to the Lord of the harvest to send out laborers into his harvest. But it’s not a simple matter of disparate skill sets or approaches. Da Vinci’s life experiences and the Walter Trumbull quote remind us that diplomacy lies at the razor’s edge separating the most high-minded of human ideals (the welfare of another; cooperation; working together to build a better world) from naked power. Diplomacy is a quite different matter for have and have-not nations. For the former, diplomacy is a choice (we’d win if we went to war over this, but we’d rather talk it through). For the latter, diplomacy looks different (we know you’d win if we went to war over this, but it’s a moral issue, and after winning the war you wouldn’t be able to look yourself in the mirror, or hold your head high in the assembly of other nations).

Scientists and engineers might be forgiven for feeling uncomfortable near this precipice. Bad enough to see nations’ ravenous appetite for applying S&T to the tools and practice of war.

Science diplomacy should therefore not be taken lightly. But after deep thought (including, perhaps, prayer?) some might find themselves called to enter the arena.

To close: science diplomacy is not simply an international matter. Scientists (and engineers) have a diplomatic problem at home, in-country. Since Sputnik, the U.S. educational system has fostered STEM education for a favored few rather than vigorously encouraging it for all. To our detriment, this elitism (a reality, even though not an avowed policy) has over time created an artificial but visible divide between scientists and the public at large – a we-they culture rather than one in which we are one and all “scientists.” At the same time (and partially as a result), public trust in scientists is declining. Going forward, we scientists need a more empathetic, considerate, respectful outreach to others – a more diplomatic outreach – in many cases starting with our own families and neighbors.


[1] This quote is usually attributed to Will Rogers, but Quote Investigator tells us one Walter Trumbull, a sportswriter, is the more likely source. Sigh. Is no attribution sacred? (Well, perhaps, literally, the quote attributed to Jesus might be correctly so.)


Focus, people!

Whistler’s Arrangement in Grey and Black No. 1 (you may know it by another name).

A week ago a bout of insomnia found me web-browsing and coming across this challenge in the New York Times: Test Your Focus: Can You Spend 10 Minutes with Just One Painting?

The authors, Francesca Paris and Larry Buchanan, started out in this vein:

OUR ATTENTION SPANS may be fried, but they don’t have to stay that way.

In a modest attempt to sharpen your focus, we’d like you to consider looking at a single painting for 10 minutes, uninterrupted.

Our exercise is based on an assignment that Jennifer Roberts, an art history professor at Harvard, gives to her students. She asks them to go to a museum, pick one work of art, and look at only that for three full hours.

We are not asking for hours. But will you try 10 minutes?

At Roberts’ suggestion, the authors chose Nocturne in Blue and Silver, by James McNeill Whistler – a painting that happens to be in the Harvard Art Museums.

From my point of view, they made a felicitous choice. I’ve long been slightly familiar with Whistler, for a couple of reasons. To start, Whistler was for a brief time employed by the U.S. Survey of the Coast, the progenitor of NOAA’s National Ocean Service. His tenure had once been the subject of an article in an in-house NOAA publication (decades ago). Then, a few years after I’d moved from Boulder to DC, the Freer Gallery of the Smithsonian mounted a Whistler exhibition that I attended. The Wikipedia article provides more material on Whistler and his life (much of it colorful): his childhood, how he washed out of West Point, his unsatisfying Coast Survey stint, Charles Lang Freer’s patronage and the Freer Gallery’s Peacock Room (one of Whistler’s creations), his extensive series of Nocturne paintings (many of which were displayed during the exhibition), etc. Incidentally, the Nocturnes are amazing! (Click here to see an array.) It’s a shame he’s remembered primarily for his Arrangement in Grey and Black No. 1.

(A word to the wise. Taking a few minutes to read the Wikipedia article will help you retain focus on the painting for the full ten minutes and enrich the entire exercise.)

Focus is not just a choice but an important life skill – even in the tumultuous, chaotic 21st century.


GMU’s Department of Atmospheric, Oceanic, and Earth Sciences fixes the PhD qualifying exam.

Physics Today’s July 2024 issue provides the happy news, in an article authored by Timothy DelSole and Paul A. Dirmeyer. They begin in this (excerpted) vein:

As senior scientists, we have navigated the challenging waters of the PhD qualifying exam—both as students taking it and as professors administering it. As students, both of us excelled academically, yet we anticipated the qualifying exam with anxiety and dread. How could we not? The professors judging us could inquire about any aspect on which they were expert…

We recall preparing diligently for specific topics about which we were never queried, and thus we were unable to showcase our extensive preparation. We recall knowing the answers to some exam questions in retrospect, but in the pressure of the moment, we couldn’t remember them. What’s more, passing the qualifying exam left us no closer to defining our thesis research direction.

Later we discovered that professors also approach the qualifying exam with anxiety and dread. The consequences of the exam put immense pressure on professors to craft questions that can accurately gauge a student’s potential

And the fact that those judgments usually rest with a few faculty members raises concerns about fairness and the inclusion of diverse viewpoints

They identify holes in the common justifications for the traditional process, then raise some counterarguments, before concluding that some kind of testing is needed:

Despite those shortcomings, there remains a compelling necessity for qualifying exams: Experience shows that some students, despite passing their courses, struggle to complete a dissertation within the typical five-year doctoral program. Identifying those students early allows all parties to move forward without investing years of effort into a PhD journey that may ultimately be unsuccessful

Sound familiar? If you are or ever were a graduate student in the physical sciences, you bet it does. Merely reading this and reflecting on those earlier days might even trigger a PTSD episode.

DelSole and Dirmeyer capture the novel, semester-long GMU process (as much a course as it is an exam) in a single flowchart:

Physics Today, July 2024, page 34

They provide a detailed description in their article (which merits careful study in its entirety). To oversimplify: beginning at the end of the spring semester of their first year, students work under an advisor’s guidance to conduct research aimed at the qualifying course itself (held in the spring semester of the second year). From that point on, students’ second-year course load is shaped around the qualifying course, which leads them through an iterative sequence of research, punctuated by frequent presentations to and feedback from faculty and fellow students, and culminating in the development and submission of a written document (one that, in the best-case scenario, for some truly exceptional students, will be suitable for publication in a peer-reviewed journal).

DelSole and Dirmeyer observe that

The new qualifying process has several advantages over the traditional format. First, instead of assessing a student’s knowledge, faculty members evaluate the student’s ability to perform the activities critical to scientific inquiry: identifying a scientific problem, devising solutions, and engaging in discourse. Second, the process spans an entire semester, so decisions on a student’s performance are not based on a singular moment. Third, each student chooses their own research topic, affording them the opportunity to showcase their creativity.

Furthermore, a student receives questions tailored to their chosen topic…

…The new qualifying process also offers each student multiple opportunities to succeed. (Two independent faculty panels; an additional written submission)…

…A…student has ample opportunity to revise their work…

…Unavoidably, subjective judgments affect the final decision, and they may be influenced by biases tied to race, gender, sexual orientation, or disability… In the new format, the entire faculty openly participate in the decision-making process, which brings a wider range of perspectives into the discussion.

By distributing responsibility across all faculty members, the new process also lightens the burden on individual advisers, who often hesitate to single out their own struggling students. When a student is redirected, their adviser usually appreciates the collective intervention.

Although the new format requires a greater investment of time from faculty, productive scientists are accustomed to allocating time for conferences and peer-review duties. And the new qualifying process calls for minimal preparation by faculty, with only modest tasks required post-meeting, such as filling out evaluation forms. When it comes to peer-review services, the question is how to best manage one’s time reviewing others’ work. Allocating a portion of that time to assisting students in one’s own department proves to be a sound investment in upholding the quality and integrity of the qualifying process. Ultimately, the efforts produce better student outcomes, which, in turn, cast a positive light on the faculty and the department.

They acknowledge:

Fellow scientists who hear about our qualifying process are often doubtful about its feasibility in their own departments. They cite factors such as a large student population. We are confident, however, that the new process can be tailored to any department. Our PhD program at George Mason has a dozen faculty members and admits three to six candidates per year. For larger departments, splitting students and faculty into smaller cohorts operating in parallel is a feasible solution.

Another concern has been the perceived inefficiency of involving faculty who lack expertise in a student’s chosen topic. But we have found the opposite to be true: Observing how the student articulates their research to nonspecialists, who nonetheless possess broad scientific knowledge, has several advantages. Incorporating diverse expertise in faculty panels, for instance, ensures that a mix of technical and foundational questions will be addressed, which makes the evaluation more thorough.

The new process also encourages faculty to engage with each student out of genuine interest, thus fostering a less adversarial interaction than the traditional approach. The reversal of the conventional roles of teacher and student mirrors what a student will encounter in advanced doctoral research. Furthermore, the shift in dynamic creates opportunities for a student to demonstrate creativity in handling conflicting criticisms that arise from reviewers with different knowledge backgrounds.

One issue that has generated considerable debate among our faculty is the grading policy. Currently, a student who passes the qualifying course receives either an A or a B. The A grade, however, is reserved for students who submit a manuscript that the faculty believes can be refined into a publishable paper after a few months of revision. That’s a high standard, and not all exceptional students meet it.

We believe that a significant distinction exists between a student who develops a nearly publishable paper in their second year and one who does not, and the grade assigned to each one is intended to reflect and reward that difference. Moreover, the standard is attainable: One or more students achieve it each year.

The authors and their GMU department have accomplished something truly extraordinary. They haven’t merely “fixed the qualifying exam”; they’ve transformed the graduate experience for both students and faculty. They’ve created (restored?) a new balance between research and teaching in the department. They’ve built true community – among the individual researchers, as well as between faculty and students. Issues of fairness and diversity have been addressed, not as artificial add-ons, but as an integral, natural part of the process. And they’re not merely proposing a novel but untried idea; they’re reporting on its successful implementation.

And they’ve shared all this in a concise, comprehensive paper that is a model of clarity and compelling in its logic.

Bravo!

This novel GMU approach deserves to be widely emulated by other geosciences faculty (and students!) on other campuses – and by other disciplines more broadly.

_______________________________________________

An aside on my personal graduate-school experience at the University of Chicago in the 1960’s: After graduating from college with a degree in physics in 1964, I started work at Chicago’s Institute for the Study of Metals for the summer, and then entered the University’s graduate physics program in the fall. After one year of physics I transferred to the department of geophysical sciences in the fall of 1965 and graduated with a PhD in the summer quarter of 1967. That trajectory was enabled greatly by my thesis advisor, Colin Hines (I’ve thanked him profusely and inadequately in a 2020 LOTRW post, on the occasion of his passing).

But the other major factor was the difference between the physics qualifying exam and its counterpart in the geophysical sciences. Back then, the physics qualifying exam demanded that students master that vast field in its entirety. By contrast (and I didn’t know this when I transferred – due diligence about such things was not one of my strong suits), the department of geophysical sciences, though dealing with a far more limited literature and history, required only that students demonstrate their ability to learn facets of the science when and if they had to. Students worked with faculty to negotiate three fields of study on which they would be examined in several months’ time. The fields were chosen to be of some relevance to the student’s intended Ph.D. research. Faculty assumed that if students could demonstrate a certain level of proficiency in those areas, they could attain similar levels of mastery over other subjects as needed during their Ph.D. research or over their subsequent careers. The qualifying exam was linked to the student’s development of a thesis prospectus and to standing for examination on that prospectus. As I recall, I was examined on ionospheric physics and chemistry, upper atmospheric dynamics (general circulation, atmospheric tides, gravity waves, etc.), and hydromagnetics (the general dynamics of conducting fluids).

Not by any stretch the polished, robust GMU approach, but containing early hints of something similar.


AI impact on global energy demand. Further incentive to think like the Wright brothers.

On July 11 my inbox contained this contribution from a New York Times (subscription) service: a thoughtful piece by David Gelles entitled A.I.’s insatiable appetite for energy. He cited an interview that he and other reporters had held some weeks prior with Bill Gates on aspects of climate change. He mentioned that Gates was “surprisingly sanguine” when it came to AI’s energy use, while noting that Gates has billions of dollars of vested interest in the issue. Gelles provided a nice overview of that particular AI challenge, covering the amounts of energy involved, the sources of energy available or contemplated, moves made by the tech firms themselves to minimize this growing energy impact, and more. Along the way he cited an estimate by Goldman Sachs to the effect that data centers would constitute 8% of US energy use by 2030, compared with only 3% today.

The Goldman Sachs piece saw the increasing energy demand as an unappreciated opportunity for investors in the energy sector (fossil fuels as well as renewables), noting: “While investor interest in the AI revolution theme is not new, we believe downstream investment opportunities in utilities, renewable generation and industrials whose investment and products will be needed to support this growth are underappreciated.”

The Times is not alone in devoting (virtual) ink and directing eyeballs to the problem. Back in April, The Economist ran a piece entitled Generative AI has a clean-energy problem, concluding:

If shortages of renewable energy occur, it will come at a cost. No one knows yet how generative ai will make money. What people do know is that the cost of acquiring gpus [graphics processing units] is rocketing. If the energy costs of running them soar, too, it could put the brakes on expansion. In addition, the electrification of the rest of the economy is highly cost-dependent; an ai v ev scramble for clean power would push up prices and serve neither industry well. By all means keep your fingers crossed for Mr Altman’s [Sam Altman, the OpenAI entrepreneur] rose-tinted fusion dream to become reality. But don’t count on it.

Other examples abound. A check on your favorite news source, whatever it may be, will almost certainly surface similar material.

A serious problem? Or not so much? I decided to go to the source.

I asked the ChatOnAI app on my cellphone: will AI be just another drain on the world’s electrical energy? Or will it help us save more energy than it uses?

ChatOnAI was on the fence: AI has the potential to both consume and save energy, depending on how it is utilized. While AI systems require a significant amount of computational power and electricity to function, they can also help optimize energy usage and reduce waste in various industries. For example, AI can be used to improve energy efficiency in manufacturing processes, optimize transportation routes to reduce fuel consumption, and enable smart grids to better manage and distribute electricity. With careful implementation and sustainable practices, AI has the potential to help us save more energy than it uses.

Hmm. I followed up: Since AI is using so much energy, should we slow down its implementation?

ChatOnAI didn’t exactly rise to the bait: The energy consumption of AI is indeed a concern, as it requires significant computational power. However, there are also efforts being made to develop more energy-efficient AI models and infrastructure. Slowing down its implementation entirely may not be necessary, but it is important to continue researching and adopting sustainable practices in AI development to minimize its environmental impact.

I tried to push it a bit further: Wouldn’t that also reduce or slow the stream of ai benefits?

ChatOnAI was bland but soothing: While adopting sustainable practices in AI development may require some adjustments and considerations, it does not necessarily mean a reduction or slowdown in the stream of AI benefits. By prioritizing energy efficiency and exploring alternative energy sources, we can continue to leverage the benefits of AI while minimizing its environmental impact. Additionally, advancements in AI technology can also contribute to finding innovative solutions for energy efficiency and sustainability in various industries. It is about finding a balance between reaping the benefits of AI and being conscious of its environmental implications. [1]

(Hmm. From my perspective, my cellphone AI has all the emotional range of Leonard Nimoy’s Spock in the old Star Trek series. On the other hand, I get the sense my cellphone AI doesn’t exactly see me as the most stimulating conversationalist either. Need serious work on my prompting.)

Bottom line? The AI energy-use problem comes down to a matter of control: whether we control AI, or AI controls us – bringing us back to the Wright brothers. This issue of control matters at several levels. At the individual level, do we get so absorbed in generative AI that we spend inordinate time merely playing with it? Or do we harness it to our day job? At the societal level, AI uptake by some sectors – tech, education, finance, healthcare, and retail – probably represents net consumption of energy. AI application to the manufacturing, transportation, and agriculture sectors may yield varying degrees of energy conservation. Will the power sector, with its potential for energy savings, be an early adopter in this race? Or a slow plodder? And how rapidly can energy-saving AI ideas be put into practice across the gargantuan, high-inertia energy sector? It feels as though the energy impact of AI should ultimately be a net savings (the Gates-Altman view), but in the short term AI use will add to the emissions problem. It would therefore seem to be to global benefit to accelerate the business-as-usual pace of investment in AI applications to the energy sector.
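To make that short-term-versus-long-term intuition a bit more concrete, here is a minimal back-of-the-envelope sketch (in Python). The 3%-to-8% data-center shares are the Goldman Sachs figures cited above, read here as shares of US electricity use; the round-number total for US electricity consumption and the assumed AI-enabled savings rates are purely illustrative placeholders, not estimates.

```python
# Back-of-the-envelope sketch of the "net energy impact of AI" framing above.
# The 3% -> 8% data-center shares are the Goldman Sachs estimate cited earlier;
# everything else (total US electricity use, assumed AI-enabled savings) is an
# illustrative assumption, not a forecast.

US_ELECTRICITY_TWH = 4000          # rough annual US electricity use (assumption)
DATA_CENTER_SHARE_TODAY = 0.03     # ~3% today (Goldman Sachs)
DATA_CENTER_SHARE_2030 = 0.08      # ~8% by 2030 (Goldman Sachs)

added_demand_twh = (DATA_CENTER_SHARE_2030 - DATA_CENTER_SHARE_TODAY) * US_ELECTRICITY_TWH

# Hypothetical AI-enabled efficiency savings across other sectors, expressed as
# a fraction of total consumption -- the key unknown in the paragraph above.
for assumed_savings_fraction in (0.01, 0.05, 0.10):
    savings_twh = assumed_savings_fraction * US_ELECTRICITY_TWH
    net_twh = added_demand_twh - savings_twh
    verdict = "net drain" if net_twh > 0 else "net savings (or break-even)"
    print(f"Assumed savings of {assumed_savings_fraction:.0%}: "
          f"added {added_demand_twh:.0f} TWh, saved {savings_twh:.0f} TWh -> "
          f"net {net_twh:+.0f} TWh ({verdict})")
```

The point of the toy numbers is simply that the verdict flips on the assumed savings rate – which is exactly the quantity no one yet knows.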

In any event, expect the AI impacts to shape long- and short-term emissions scenarios for future IPCC assessments.    


[1] Those interested in putting some flesh on these bones might take a look at the International Energy Agency (IEA) Electricity 2024 report or this IEA 50 piece explaining Why AI and energy are the new power couple. An excerpt:

Power systems are becoming vastly more complex as demand for electricity grows and decarbonisation efforts ramp up. In the past, grids directed energy from centralised power stations. Now, power systems increasingly need to support multi-directional flows of electricity between distributed generators, the grid and users. The rising number of grid-connected devices, from electric vehicle (EV) charging stations to residential solar installations, makes flows less predictable. Meanwhile, links are deepening between the power system and the transportation, buildings and industrial sectors. The result is a vastly greater need for information exchange – and more powerful tools to plan and operate power systems as they keep evolving.

This need arrives just as the capabilities of artificial intelligence (AI) applications are rapidly progressing. As machine learning models have become more advanced, the computational power required to develop them has doubled every five to six months since 2010. AI models can now reliably provide language or image recognition, transform audio sounds into analysable data, power chatbots and automate simple tasks. AI mimics aspects of human intelligence by analysing data and inputs – generating outputs more quickly and at greater volume than a human operator could. Some AI algorithms are even able to self-programme and modify their own code.

It is therefore unsurprising that the energy sector is taking early steps to harness the power of AI to boost efficiency and accelerate innovation. The technology is uniquely placed to support the simultaneous growth of smart grids and the massive quantities of data they generate. Smart meters produce and send several thousand times more data points to utilities than their analogue predecessors. New devices for monitoring grid power flows funnel more than an order of magnitude more data to operators than the technologies they are replacing. And the global fleet of wind turbines is estimated to produce more than 400 billion data points per year.

This volume is a key reason energy firms see AI as an increasingly critical resource. A recent estimate suggests that AI already serves more than 50 different uses in the energy system, and that the market for the technology in the sector could be worth up to USD 13 billion.
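To give a flavor of what one such application looks like in miniature, here is a toy sketch in Python – synthetic numbers only, not any utility’s actual code or data. It fits a simple autoregressive model that forecasts the next hour’s electricity load from the preceding 24 hours of smart-meter-style readings. Real grid forecasting is far more sophisticated, folding in weather, market, and operational data; this is just the skeleton of the idea.

```python
# Toy sketch: forecast the next hour's electricity load from the preceding
# 24 hours of (synthetic) smart-meter readings, using a simple autoregressive
# model fit by least squares. Illustrative only -- not an operational method.

import numpy as np

rng = np.random.default_rng(42)

# Synthetic hourly load (MW): a daily cycle plus noise, standing in for the
# smart-meter data streams described in the IEA excerpt.
hours = np.arange(24 * 30)                      # thirty days of hourly readings
load = 1000 + 200 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 20, hours.size)

LAGS = 24  # predict the next hour from the preceding 24 hours

# Lagged design matrix: each row holds one 24-hour window of load;
# the target is the load in the hour that follows that window.
X = np.column_stack([load[i:i - LAGS] for i in range(LAGS)])
y = load[LAGS:]

# Fit the autoregressive coefficients (plus an intercept) by least squares.
A = np.column_stack([np.ones(len(X)), X])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)

# One-step-ahead forecast from the most recent 24 hours of data.
latest_window = load[-LAGS:]
forecast = coeffs[0] + latest_window @ coeffs[1:]
print(f"Forecast load for the next hour: {forecast:.0f} MW")
```

Multiply that single forecast by the thousands of feeders, substations, and balancing decisions a real grid operator juggles, and the IEA’s case for AI as “an increasingly critical resource” becomes easier to picture.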


NASEM’s inaugural State of the Science Address

On June 26, Marcia McNutt, President of the National Academy of Sciences and Chair of the National Research Council, presented what NASEM billed as an inaugural State of the Science Address.

A February press release had publicized the event this way:

“Just as the annual State of the Union address gives Americans a sense of how the nation is doing on key priorities, I hope that the State of the Science address will provide policymakers and the public with a clear picture of the overall direction of the U.S. research enterprise — including its strengths, potential shortcomings, and possible pathways for the future,” McNutt said. “Science has provided the foundation for decades of American prosperity and improved quality of life and well-being. By taking stock of where we are now, we will be better able to steer efforts toward ensuring that our research community can maximize its contributions to our nation and to all Americans.”

Bravo! The event more than lived up to this ambitious billing.

Dr. McNutt, a distinguished geoscientist, gave a crisp, refreshingly data-driven talk. Her summary was followed by a second hour of panel discussion, moderated by Harvey Fineberg, a former President of the National Academy of Medicine and now President of the Gordon and Betty Moore Foundation. The live event was also webcast and is available online here.

Dr. McNutt began with an overview of societal benefits of science – for the nation’s economy, safety, security, health, and quality of life. She also enumerated the benefits of science leadership versus mere uptake: national security, economic growth and stability, the ability to frame global ethical standards for the advance and use of science, and the accompanying contributions to soft power and diplomacy.

She then worked her way through several metrics revealing clear US worldwide scientific leadership since World War II (her word was “dominance”), juxtaposed with data showing that in recent years the US lead has been declining. China and the rest of the world are catching up – even overtaking the US in some areas.

The last half of her talk laid out six challenges the United States faces if it is to retain (or regain) leadership and/or simply match pace with the rest of the world going forward. She also offered six corresponding opportunities – the flip side of each challenge (given here in parentheses):

  • Build the domestic scientific workforce of the future (improve K-12 education)
  • Continue to attract the best talent internationally (reduce the red tape facing would-be foreign students – and the red-tape burden carried by US academic researchers more generally)
  • Coordinate existing resources for greater impact – especially across public-private-philanthropic lines, as the relative funding across these sectors is shifting (create a truly national research strategy)
  • Modernize university-industry partnerships. This is not simply a matter of funding: industry profit-seeking objectives differ from the public-good focus of government agencies, and industry is luring away university and government talent, to the long-term detriment of all, especially in the AI space. (Find new forms of university-private sector engagement.)
  • Provide access to major science facilities (inter alia, increase US participation in international science facilities)
  • Build public support for science (cultivate public trust in science).

She put flesh on the bones of each of these. Dr. Fineberg then led a lively conversation among the panelists, who brought diverse perspectives (different academic disciplines, the private sector, philanthropy, science communication) to the issues. One name familiar to geoscientists is Marshall Shepherd, who made insightful contributions throughout – in themselves worth the listen.

Some takeaways.

  • NASEM and Dr. McNutt gave the impression that this presentation was not a one-off but instead the first of a series. Though I didn’t see anywhere a commitment to doing this annually, that seemed implied by the comparison to the State of the Union address and is devoutly to be wished. One issue for such a series? The six challenges are inherently multi-year. It will be important to maintain that long-range, big-picture focus and yet show tangible progress (or regress) year-on-year, while keeping the annual occasions fresh.
  • There was a welcome emphasis throughout on improving American public education – at all levels, and for all students – as the foundation. In the 1950s I was a high-school student when Sputnik awakened American interest in prioritizing STEM education. I got to see and benefit from that firsthand. But as Rush Holt and others have since pointed out, those Cold War initiatives had the fatal flaw of focusing on select students deemed to have potential rather than making critical thinking and a scientific perspective a feature of American culture and values. There’s the opportunity to correct that failing this time around.
  • A national strategy for science is a superb idea, perhaps even a sine qua non, but it’s vital that it be made truly national, including not just government but also academia and the private sector (as well as philanthropy and the NGOs) in its formulation and implementation. And, as Dr. McNutt emphasized, it’s important that such a strategy flexibly accommodate serendipity. (Another lesson of the Cold War was that Soviet science suffered because it was monolithic – a top-down, command-and-control straitjacket dictated by a single entity, its Academy of Sciences. Scientists who had the poor fortune to run afoul of that Academy’s politics and powerful personalities were cut off from funding or worse. By contrast, the United States funded science through a diversity of sources, allowing scientists who incurred disfavor at one to move to another.)
  • Partnerships work best when they’re not one-sided – when partners pursue common goals, and each brings to the table resources and assets the other partner(s) need. Case in point: it’s not sufficient to note that the private sector is hoovering up the country’s AI workforce, leaving government agencies and academia behind. It’s essential to make a strong case (as Dr. McNutt did) that academic and government work in AI is vital to sustaining private-sector work and is focused on that very practical goal – and to provide the incentives needed to bring that about.
  • In a similar way, US scientists can expect access to major science facilities worldwide only to the extent that the US itself has invested in such facilities that attract comparable international interest in coming here.
  • Finally, trust in science is most likely to be achieved to the extent that science is regarded as an open and participatory nationwide activity and not some province reserved for a select few. Ideally, every American would self-identify as a participant in science and innovation, not as a mere spectator. Dr. McNutt captured some of this idea when she gave statistics showing that job growth was most rapid in scientific fields – exceeding the total birthrate (!). In the panel discussion, Marshall Shepherd spoke eloquently about busting up the idea of a workforce “pipeline,” which implies one way in, and one way out, in favor of far more open access to science-based careers for a far broader spectrum of the population. Only to the extent that Americans feel real ownership of science will they truly trust it.

In his benediction to the event, Harvey Fineberg complimented Marcia McNutt for her admirable discipline in “barely asking for more money.” He went on to say, “We do, however, need more money invested in science,” adding that any national research strategy “would be an empty promise if it does not have the fuel of resources to make it a reality.”

Amen.
