Amy Zalman

Summary

Amy Zalman is an internationally recognized futurist, author, and educator who advises government and business leaders on anticipating and navigating future change.

Current Roles:

  • Advisory Specialist Leader, Defense, Security and Justice at Deloitte: She helps defense and civilian clients develop strategies to thrive in the evolving global information environment.
  • Founder and CEO of Prescient LLC: A foresight consultancy that assists Fortune 500 companies, governments, and non-profit organizations in preparing for the future.
  • Founder and Director of the Foresight Sandbox: An executive education program providing strategic foresight training.
  • Part-time Professor of Strategic Foresight at Georgetown University.

Source: Gemini

OnAir Post: Amy Zalman

About

IN HER WORDS

Strategist, Futurist, Speaker

Amy Zalman helps leaders and organizations anticipate and plan for transformational change

“Although I have had some very different professional roles in my career, I have heard in each a common refrain. Everywhere I have worked, people have felt that technology and the societies in which we live, work, fight and play are changing more quickly than the institutions they are working in.

That was true in 2005, when I founded Oryx Communications, a boutique consultancy that created communications products for US defense clients. The 9/11 attacks had shocked our clients and they were surprised by the ways the world was changing — technologically, culturally and geopolitically.

Later, as the Chair of Information Integration at the National War College, it was my job to introduce new ways of understanding “information” to future senior leaders.

At the time, it seemed a little crazy to view information as simultaneously tangible (like cables, and code) and ephemeral, located in the invisible, but crucial, social ether, where people communicate meaning to each other. Today, in the wake of election meddling and fake news, it doesn’t seem crazy to see how complex information is.

In 2014, I became the CEO and President of the World Future Society, which was the world’s first and largest membership organization for futurists when it was founded in the 1960s.  Like many organizations of its age, it was in financial trouble and losing members. I led its transformation from a publishing model to a modern membership ecosystem, and left it with a positive bank balance and positioned for renewed global impact.

In 2017, I decided to gather the insights I had learned along the way and founded Prescient (called the Strategic Narrative Institute for its first year). Prescient is a foresight firm, and we provide executive education, strategic retreats and other services to firms seeking to transform in the face of uncertainty and dramatic change.

If you are also interested in the challenging, troubling, exciting ways the world is in flux, I’d love to hear from you.”

Source: Website

Web Links

ITDF Essay, April 2025

‘We Must Have the Courage to Establish Human Values in Code, Ethical Precepts, Policy and Regulation’

Source: ITDF Webpage

“Because the current wealth and income gap is dramatic and widening, I do not believe it is possible to generalize a common human experience in response to AI advances in the next 10 years. Those with wealth, health, education, other versions of privilege and the ability to sidestep the grossest effects of technological unemployment, surveillance and algorithmic bias, may feel they are enjoying a beneficial integration with algorithm-driven technology. This sense of benefit could include their ability to take advantage of tools and insights to extend health and longevity, innovate and create, find efficiencies in daily life and feel that technology is a force for advancement and good.

“For those who have limited or no access to the benefits of AI (or even good broadband), or who are unable to sidestep potential technological unemployment or surveillance or are members of groups more likely to be objects of algorithmic bias, life as a human may be incrementally to substantially worse. These are generalizations. A good education has not saved any of us from the corrosive effects of widespread mis- and disinformation, and we can all be vulnerable to bad actors empowered with AI tools and methods.

“On the flip side, living life at a distance from fast-paced AI development may also come to be seen as having benefits. At the least, people living outside the grid of algorithmic logic will escape the discombobulation that comes with having to organize one’s own needs and rhythms around those of a rigidly rule-bound machine. Think of the way that industrialization and mass production required that former rhythms of agrarian life be reformulated to accommodate the needs of a factory, from working during precise and fixed numbers of hours, to performing repetitive, piecemeal work, to new forms of supervision. One result was a romantic nostalgia for pastoral life.

“As AI reshapes society, it seems plausible that we will replicate that habit of the early industrial age and begin to romanticize those who have been left behind by AI as earlier, simpler, more grounded and more human versions of us. It will be tempting to indulge in this kind of nostalgia – it lets us enjoy our AI-enabled privileges while pretending to be critical. But even better will be to be curious about our elegiac feelings and willing to use them as a pathway to discovering what we believe is our human essence in the age of AI.

“Then, we need to have the courage to establish those human values in code, ethical precepts, policy and regulation. One of the most pernicious losses already is the idea that we actually do have influence over how we develop AI capabilities. I hear a sense of loss of control in conversations around me almost daily, the idea and the fear (and a bit of excitement?) that AI might overwhelm us, that ‘it’ is coming for us – whether to replace us or to help us – and that its force is inevitable.

“AI isn’t a tidal wave or force of nature beyond our control, it’s a tool that we can direct to perform in particular ways.”


This essay was written in January 2025 in reply to the question: Over the next decade, what is likely to be the impact of AI advances on the experience of being human? How might the expanding interactions between humans and AI affect what many people view today as ‘core human traits and behaviors’? This and nearly 200 additional essay responses are included in the 2025 report Being Human in 2035.