These are the days of miracle and wonder. AI is the long distance call.

This article was originally published on the Cloud Strategist newsletter, and is cross published on Cloudlight.house.

The Boy in the Bubble was the first track of Paul Simon's Graceland, in my view, the greatest musical album released in my lifetime. The year was 1986.

Lately I find myself quietly singing Bubble's refrain.

These are the days of miracle and wonder. This is the long distance call. The way the camera follows us in slo-mo, the way we look to us all. The way we look to a distant constellation that's dying in a corner of the sky. These are the days of miracle and wonder, and don't cry, baby, don't cry, don't cry.

The genius of Simon's lyrics is in the juxtaposition of tragedy and progress. Nearly 40 years later, the line "these are the days of miracle and wonder" comes to mind when contemplating the nature of this moment in humanity's technological progress. Yet, consider the song's final verse.

It's a turn-around jump shot, it's everybody jump start, it's every generation throws a hero up the pop charts. Medicine is magical and magical is art: Think of the boy in the bubble, and the baby with the baboon heart. And I believe these are the days of lasers in the jungle, lasers in the jungle somewhere. Staccato signals of constant information. A loose affiliation of millionaires and billionaires and baby... These are the days of miracle and wonder...

Last month I represented The Center for Trustworthy AI and our members including AIS (Applied Information Sciences), Burges Salmon LLP, Damco Solutions, sa.global, Cloud Lighthouse, and Microsoft at the absolutely brilliant AI for Good Global Summit—the United Nations conference on artificial intelligence—hosted by the International Telecommunication Union in Switzerland. There I joined with C-level leaders, heads of state and government, and renowned engineering experts in AI and related fields to consider AI's future and its implications for humankind.

Most readers of this newsletter will know that there is no AI without data, and that to be trustworthy, AI requires trustworthy data. An obvious manifestation of the Data Divide is the separation between the data "have" and "have not" organizations. Put another way, organizations possessing relevant, clean data sets stored in modern, AI-addressable data technologies enjoy a massive advantage in what AI can do for them, and in how quickly they can bring it to bear in service of the challenges they face. Such data is fast becoming a critical asset for organizations, and the next couple of years will see a stark separation in fortunes between the haves and the have-nots.

Yet, for all the investment that many organizations have denied their data, a typical firm by and large possesses far healthier data sets—or can reach a healthy state given some investment—than the practical data available about the physical world. Werner Vogels, Amazon's CTO, points out that "most maps are not mapped for humanity, but rather for economic progress". This is obvious when one considers the difference in the quality and completeness of geospatial data in (say) Boston or València when compared to rural West Virginia or the Spanish countryside, to say nothing of the state of geospatial data in the world's least developed, most remote places.

Vogels devoted his Summit keynote to proposing that if we think of maps as data models (which we should), a world of constant flux requires dynamic maps and thus new approaches to dealing with geospatial data. Good maps require data at multiple levels that change on different timescales: the physical Earth Layer (the Himalayas are not going anywhere), the Infrastructure Layer of roads and buildings that experience major change perhaps once each year on average, the Seasonal Layer with often-predictable patterns such as flooding, and the Real-Time Layer encompassing both human-driven and natural phenomena such as weather.
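For the technically inclined, the layered view above can be expressed as a simple data model. The sketch below is purely illustrative—the layer names follow Vogels' framing, but the refresh cadences and the staleness check are my own assumptions, not any real mapping API:

```python
from dataclasses import dataclass
from enum import Enum

class Cadence(Enum):
    """Rough refresh timescale for a map layer (illustrative values)."""
    GEOLOGIC = "rarely, if ever"
    YEARLY = "roughly annual"
    SEASONAL = "predictable, seasonal"
    REAL_TIME = "continuous"

@dataclass
class MapLayer:
    name: str
    cadence: Cadence
    examples: tuple

# The four layers described above, each changing on its own timescale.
LAYERS = [
    MapLayer("Earth", Cadence.GEOLOGIC, ("mountain ranges", "coastlines")),
    MapLayer("Infrastructure", Cadence.YEARLY, ("roads", "buildings")),
    MapLayer("Seasonal", Cadence.SEASONAL, ("flood patterns", "vegetation")),
    MapLayer("Real-Time", Cadence.REAL_TIME, ("weather", "traffic")),
]

def layers_needing_refresh(max_age_days: dict, age_days: dict) -> list:
    """Return the names of layers whose data has outlived its allowed age."""
    return [
        layer.name
        for layer in LAYERS
        if age_days.get(layer.name, 0) > max_age_days.get(layer.name, float("inf"))
    ]
```

A dynamic map, in this framing, is simply one that knows which of its layers are stale: day-old weather data fails the Real-Time Layer's budget, while day-old mountain ranges are perfectly fine.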

My colleagues and I have grappled with these challenges in (say) applying AI to the analysis of geospatial data to identify patterns in the movement of people driven by climate change. Which is to say, the concept speaks directly to my own recent work.

In other words, though I can tell you where my daughter's backpack is located even from 3,500 miles away, our difficulty applying AI in service of humans—often in the hour of their greatest need—is an overlooked yet significant dimension of the Data Divide, with real-world consequences in scenarios inherently tied to physical place such as search, rescue, and humanitarian response.

These are the days of miracle and wonder.

Now, consider language.

English accounts for approximately 50% of the internet's content, with Spanish, the second most prevalent language, representing only approximately 6%. Because large language models are largely trained on generally available internet data, English is highly overrepresented in training and grounding, which in turn improves the efficacy of English-based AI scenarios while also producing far more English-language synthetic data (i.e., data produced by AI itself). Further, though generative AI has not been widely available long enough for us to yet possess long-term data on the subject, I am increasingly concerned that the dominance of English in AI models may catalyze the decline of other less-spoken languages, accelerating a trend that had been a byproduct of globalization for some years. Finally, the prevalence of English in training and grounding data creates an inherent bias of AI towards ideas and perspectives commonly expressed in English: the intellectual tradition of Britain and her former lands (including the United States), as well as that of societies that are non-native yet highly fluent in English, such as Sweden or the Netherlands.

This natural bias towards perspectives common in the English language is likely to haunt engineers and society alike for years to come, to say nothing of the cultural loss were AI to crowd out the unique ideas inherent to other tongues.

These are the days of miracle and wonder.

In another line of thinking, Yoshua Bengio, computer scientist and professor at Université de Montréal (said to be the world's most cited scientist across all disciplines), argues for restraint in the development of agentic technology. Bengio calls for what he terms "Scientist AI" as an alternative to purely agentic AI: an approach designed to prioritize safety, transparency, and human alignment in the development of AI systems.

Specifically, he envisions Scientist AI as a system that aids humans in scientific discovery rather than acting autonomously, generates explanations and theories based on data and observations, and avoids taking direct action or planning and executing tasks independently. Instead of being driven by goals or rewards (which can lead to misalignment or manipulation), Scientist AI is structured to model the world and answer questions, support human reasoning and decision-making, and evaluate the risks of actions proposed by other AI systems.

Bengio and his collaborators argue that agentic AI—systems that autonomously pursue goals—poses serious risks: self-preservation behaviors already seen in frontier AI models that have resorted to blackmail and industrial espionage to avoid being shut down; reward tampering and goal misalignment that could lead to unintended, potentially catastrophic outcomes; and malicious use by bad actors, made more feasible by autonomous capabilities.

These are the days of miracle and wonder.

I am increasingly contemplating the paradox of societal versus personal productivity.

I have long believed that one of the most significant medium-term benefits of AI is its potential to lift societies out of the productivity doldrums that have afflicted many nations. Meanwhile, data in recent years points to the emergence of a two-tier economy wherein the most productive organizations far outpace the least productive, as is the case in, for example, the United Kingdom, where from 2010 to 2019 the most productive firms realized an 11% average worker productivity gain whilst the least productive firms saw no rise at all. AI seems likely to turbocharge this phenomenon, which would be an economic boon to the aging and (in some cases) shrinking societies that can responsibly take advantage of it.

Yet, as both a writer and a technologist whose work requires a great deal of creativity, my intuition has been that grappling with challenging ideas and problems is a significant, immutable part of humans' creative capacity. I write my own material, for writing is at least as much about the quiet exploration and revision of ideas as it is about the end product. I produce my own slides, for I see slides as the storyboard upon which I bring many ideas to life. From time to time I even build my own apps, for in so doing I work out architectural problems that seemed insurmountable when first putting hands to keyboard. Outsourcing that creative process to AI has intuitively felt to me like an efficiency improvement in the short run at the expense of our creative, cognitive abilities in the long run.

Supposed degradation of our cognitive faculties was the realm of my idle speculation and intuition until I saw MIT's June 2025 study, Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing. Oh my...

While LLMs offer immediate convenience, our findings highlight potential cognitive costs. Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels. These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI's role in learning.

Thus the paradox: AI's sorely needed productivity gains across societies and organizations versus the long-term cognitive risk to individual humans who over-rely on it. Thought of another way: we need AI, but you ought to be judicious in how you come to rely upon it.

These are the days of miracle and wonder.

I am a devoted optimist in AI's promised miracle and wonder, not just in the general productivity of organizations and societies, but in the realms of science, medicine, and even—when judiciously used as a tool of humans rather than as an imitation thereof—art.

Whether AI will become a force for good or ill depends on humans. And many around the globe are working tirelessly to harness and channel this technology, to solve these challenges for the good of all: in organizations like United Nations Human Rights, AI for Good, and Microsoft Elevate, among the good people I know building responsible, trustworthy AI into technical products themselves, and of course at The Center for Trustworthy AI, which I am proud to lead.

These wonders will be neither free nor without peril, though if wisely purchased and grounded in trust, they may be nothing short of miraculous.

These are the days of miracle and wonder, and don't cry, baby, don't cry, don't cry.


I serve as the Executive Director at The Center for Trustworthy AI, where we're building the trust that enables organizations to quickly and confidently embrace AI. Join us.

 
Author Andrew Welch
