Two weeks later, I’m still wrapping my head around that initial LOTRW “interview” with a simple AI chatbot[1].
Some impressions, beginning with the general but then focusing on climate-change science education:
Regarding the big picture: in the case of generative AI, the expression “game-changer” comes to mind. For me personally, the enthusiasm and sense of wonder evoke memories of several other transitional moments in the same evolutionary tree: switching from handwriting and typing to word processing; from hand-drawn transparencies for professional presentations to PowerPoint; from library- and book-based study to largely online search, enabled by search engines (eventually and most notably Google). Each of these arguably made me more productive and effective, but also changed the direction of my work and thought. AI promises to do the same (most likely, for both better and worse).
As in those instances, I recognize I’ll never begin to master this new technology but will only harness the merest fraction of its capabilities. I also dimly grasp that even that level of effort will require a significant reconfiguring of my brain and the way I operate.
Such an overhaul isn’t getting any easier with age! A recent Economist article on aging demographics worldwide (in the print edition ominously entitled “The Old and the Zestless,” and worth reading in its entirety) notes that “Younger people have more of what psychologists call ‘fluid intelligence’, meaning the ability to solve new problems and engage with new ideas. Older people have more ‘crystallised intelligence’—a stock of knowledge about how things work built up over time.” But the potential rewards make it worth the effort and dwarf any attendant personal risks. After all, the Roman Cato undertook to learn Greek when he was eighty, and when asked about it supposedly replied that it was the only age he had left.
The vanilla flavor of the current AI answers leaves no doubt as to their robotic DNA. But today’s rudimentary capability is destined to grow more powerful with time – “learning” to handle and respond to more comprehensive queries on more complex matters. The future clearly belongs to the societies, nations, private- and public-sector institutions, and individuals most adept at harnessing AI’s benefits while mitigating its risks. The power of AI and its potential for disruption worldwide could mean the reworking of every aspect of global society – a realignment of geopolitics, economies, and institutions, all the way down to personal prospects.
In such a future, information technology poses the two greatest challenges/opportunities for American K-12 public education: (1) preparing young people for careers/lifetimes in an IT/AI-dominated world; (2) harnessing IT/AI to accelerate and augment learning in every discipline.
But here’s the thing: it’s not just that we don’t want to miss such a singular upside opportunity. America’s students succumb daily to IT’s downside risks. On average, today’s teenagers spend almost nine hours a day online. They’re enticed into the seductive virtual world of gaming, with its absorbing pleasures and near-instant gratification. Their curiosity takes them down countless labyrinthine internet rabbit holes. Social media can transport them to ecstasy or plunge them into the depths of despair, depending on the latest posts from their peers. Each key click can distract them – loosen their focus on the pressing and near-intransigent real-world problems – inequity, poverty, polarization, and yes, climate change – that their generation must address. Taken as a whole, these pulls constitute a mental-health threat to the young.
And all this before we get to artificial intelligence (AI) – which looks set to take IT’s pull to an entirely new level. Perhaps most worrisome, the people who know the most about AI seem to be the wariest, even the most alarmed, about where AI may take us. We haven’t seen this level of angst from scientists and engineers since Robert Oppenheimer and his crowd fretted about the development and use of atomic weapons in World War II.
Unsurprisingly, the Educational Enterprise sees reason to fear. But attempts to ban AI from the classroom are unlikely to bring teachers any joy (references to King Canute and the tide come to mind). Perhaps, then, it’s worth exploring the opposite approach.
Which brings us to the climate-change bits. Let’s imagine, for example, that in the classroom of the future pupils will be expected to demonstrate the ability to use IT and AI in the same way that students in the past were expected to be facile with reading, writing, and arithmetic. In such a world, a science-class module on climate change could ask students to submit individual capstone reports along lines notionally like the following:
- What are the causes and signs of climate change in our local area? What is the future outlook?
- What are the present and possible future impacts of those changes on our community?
- What coping strategies might be necessary, doable, or desirable?
- Which do you favor and why?
- Explain how you used IT/AI to develop your report and inform your conclusions.
- (For extra credit) Where would a bit more understanding of climate science and climate impacts be helpful? [2]
I tried out these questions on ChatBox, using a simple, ham-fisted approach: particularizing them to three counties – Fairfax County in the DC suburbs, where I live today; Boulder County in Colorado, where I worked for twenty years; and Franklin County in rural Tennessee, where I lived as a child between the ages of three and eight. Generally, the AI answers seemed factual, nicely organized, coherent, and crisp — providing a useful portal to further study. The exception, unsurprisingly, was Franklin County. But when I instead asked about the University of the South, which is situated there (and where my father had taught mathematics), material turned up.
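For the curious, the mechanics of such an experiment are simple enough to script. The sketch below is a hedged illustration, not my actual point-and-click session with ChatBox; the template wording and names are my own. It shows how the capstone questions might be particularized to each county; each resulting prompt would then be pasted into, or sent programmatically to, a chat model.

```python
# A minimal sketch (illustrative only, not the author's actual workflow):
# particularize the capstone question templates to each county.

QUESTION_TEMPLATES = [
    "What are the causes and signs of climate change in {place}? "
    "What is the future outlook?",
    "What are the present and possible future impacts of those changes on {place}?",
    "What coping strategies might be necessary, doable, or desirable for {place}?",
]

COUNTIES = [
    "Fairfax County, Virginia",
    "Boulder County, Colorado",
    "Franklin County, Tennessee",
]

def build_prompts(places, templates):
    """Fill each question template in with each place name."""
    return [(place, t.format(place=place)) for place in places for t in templates]

prompts = build_prompts(COUNTIES, QUESTION_TEMPLATES)
print(len(prompts))  # 3 counties x 3 questions = 9 prompts
```

From there, each prompt could be submitted to whatever chat interface or API one has at hand; the point is simply that the “particularizing” step is mechanical, while judging the answers is not.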
For a young technology, the AI performed well.
Seductively so. The results were so clearly and simply expressed, and so polished, that it was easy for me to take them at face value rather than question them or build on them to reach deeper insight. And (forgive the snarky comment) in some ways they call to mind Captain Obvious. (I’ve subsequently asked the AI for self-criticism — to characterize the shortcomings in its answers. It offers bland acknowledgments of its limitations, but then occasionally confesses additional bits of useful detail.)
Three closing thoughts. First, climate-change science was once (and still is) considered a controversial, polarizing issue. From time to time, individuals and groups, even some state governments, have made efforts to keep it out of the classroom. Today there’s more controversy in the classroom than ever, but the battleground has shifted to other, more visceral topics. Climate-change science looks tame by comparison – it’s losing its power to inflame minds. Ironically, it may over time prove to be a relatively “safe” place to test and develop new educational ideas.
Second, in the future AI-enabled world, the educational challenge is not how to keep IT/AI from tainting the educational process, but rather how to ensure that students of every income class and social circumstance have equal access to the tools they’ll need to master for their future lives and careers. A tough nut to crack. In the old days, I’d have just thrown up my hands. But today, I asked my long-time search-engine buddy:
How can we make the world more fair?
Got this answer, which came from a single link: “You can help ensure everyone has access to education, health care, housing, and other basic needs. You can also help vulnerable people worldwide by working on issues like climate change or gender equality, or refugee rights.”
My new friend ChatBox, true to form, was more voluble (again, quoting verbatim):
As an AI language model, I do not have personal beliefs or opinions, but here are some ways that people can work towards making the world more fair:
1. Education: Providing access to education for all individuals, regardless of their socioeconomic status, can help level the playing field and provide opportunities for people to improve their lives.
2. Economic equality: Addressing income inequality and ensuring that everyone has access to basic necessities such as food, shelter, and healthcare can help reduce poverty and create a more fair society.
3. Social justice: Addressing systemic issues such as racism, sexism, and discrimination can help create a more just and equitable society.
4. Political reform: Ensuring that political systems are fair and transparent can help prevent corruption and ensure that everyone’s voice is heard.
5. Environmental sustainability: Addressing climate change and protecting the environment can help ensure that future generations have access to the same resources and opportunities as we do today.
Hmm. Well said… and interesting, given that I made no mention of climate change, that it saw fit to bring in that dimension.
Returning to The Economist article quoted above for the final word:
…Technology can also provide ways to cope with demographic change, from telemedicine to the increased use of robots in service industries. Such innovations can undoubtedly ease the difficulties presented by an ageing society, whether by directly contributing to the care of the elderly or by automating the roles of young workers. But the shrinking number of innovative young thinkers will, ironically, reduce the number of such valuable new ideas.
A shortfall in human innovation may also be less damaging if offset by new ideas conceived by artificial intelligence. There seems no doubt that machines will soon be working out how to make incremental improvements in existing processes—indeed, in some spheres, they already are. Whether machines will ever learn how to generate disruptive new ideas, however, remains a matter of debate…
[1] First, a point of clarification. The quoted ChatBox material was just that: actual, verbatim, unedited responses from the app. I didn’t ghostwrite them, contrary to some reader feedback suggesting otherwise. (That feedback hints at something vaguely disturbing about my writing style, or lack thereof; pretty sure I don’t want to go there…)
[2] Teachers might well ask: what about the stupefying task of grading/evaluating the individual student outputs? In the imagined future here, teachers would solve that problem of tomorrow using tomorrow’s tools; they wouldn’t be limited to the tools of today. They would presumably enjoy commensurate AI/IT assistance in monitoring students’ work – for example, extensions of the kinds of tools currently in use and under development to enable flipped learning. Roles and expectations would be different; teachers would likely be needed less as the sage-on-the-stage and more as the guide-on-the-side.