Tag Archives: Technology

Review: AI, Apathy, and the Arsenal of Democracy

Dexter Filkins is a Pulitzer Prize-winning American journalist and author, known for his extensive reporting on the wars in Afghanistan and Iraq. He is currently a staff writer for The New Yorker and the author of “The Forever War,” which chronicles his experiences reporting from these conflict zones.

Is the United States truly ready for the seismic shift in modern warfare—a transformation that The New Yorker’s veteran war correspondent describes not as evolution but as rupture? In “Is the U.S. Ready for the Next War?” (July 14, 2025), Dexter Filkins captures this tectonic realignment through a mosaic of battlefield reportage, strategic insight, and ethical reflection. His central thesis is both urgent and unsettling: that America, long mythologized for its martial supremacy, is culturally and institutionally unprepared for the emerging realities of war. The enemy is no longer just a rival state but also time itself—conflict is being rewritten in code, and the old machines can no longer keep pace.

The piece opens with a gripping image: a Ukrainian drone factory producing a thousand airborne machines daily, each costing just $500. Improvised, nimble, and devastating, these drones have inflicted disproportionate damage on Russian forces. Their success signals a paradigm shift—conflict has moved from regiments to swarms, from steel to software. Yet the deeper concern is not merely technological; it is cultural. The article is less a call to arms than a call to reimagine. Victory in future wars, it suggests, will depend not on weaponry alone, but on judgment, agility, and a conscience fit for the digital age.

Speed and Fragmentation: The Collision of Cultures

At the heart of the analysis lies a confrontation between two worldviews. On one side stands Silicon Valley—fast, improvisational, and software-driven. On the other: the Pentagon—layered, cautious, and locked in Cold War-era processes. One of the central figures is Palmer Luckey, the founder of the defense tech company Anduril, depicted as a symbol of insurgent innovation. Once a video game prodigy, he now leads teams designing autonomous weapons that can be manufactured as quickly as IKEA furniture and deployed without extensive oversight. His world thrives on rapid iteration, where warfare is treated like code—modular, scalable, and adaptive.

This approach clashes with the military’s entrenched bureaucracy. Procurement cycles stretch for years. Communication between service branches remains fractured. Even American ships and planes often operate on incompatible systems. A war simulation over Taiwan underscores this dysfunction: satellites failed to coordinate with aircraft, naval assets couldn’t link with space-based systems, and U.S. forces were paralyzed by their own institutional fragmentation. The problem wasn’t technology—it was organization.

What emerges is a portrait of a defense apparatus unable to act as a coherent whole. The fragmentation stems from a structure built for another era—one that now privileges process over flexibility. In contrast, adversaries operate with fluidity, leveraging technological agility as a force multiplier. Slowness, once a symptom of deliberation, has become a strategic liability.

The tension explored here is more than operational; it is civilizational. Can a democratic state tolerate the speed and autonomy now required in combat? Can institutions built for deliberation respond in milliseconds? These are not just questions of infrastructure, but of governance and identity. In the coming conflicts, latency may be lethal, and fragmentation fatal.

Imagination Under Pressure: Lessons from History

To frame the stakes, the essay draws on powerful historical precedents. Technological transformation has always arisen from moments of existential pressure: Prussia’s use of railways to reimagine logistics, the Gulf War’s precision missiles, and, most profoundly, the Manhattan Project. These were not the products of administrative order but of chaotic urgency, unleashed imagination, and institutional risk-taking.

During the Manhattan Project, multiple experimental paths were pursued simultaneously, protocols were bent, and innovation surged from competition. Today, however, America’s defense culture has shifted toward procedural conservatism. Risk is minimized; innovation is formalized. Bureaucracy may protect against error, but it also stifles the volatility that made American defense dynamic in the past.

This critique extends beyond the military. A broader cultural stagnation is implied: a nation that fears disruption more than defeat. If imagination is outsourced to private startups—entities beyond the reach of democratic accountability—strategic coherence may erode. Tactical agility cannot compensate for an atrophied civic center. The essay doesn’t argue for scrapping government institutions, but for reigniting their creative core. Defense must not only be efficient; it must be intellectually alive.

Machines, Morality, and the Shrinking Space for Judgment

Perhaps the most haunting dimension of the essay lies in its treatment of ethics. As autonomous systems proliferate—from loitering drones to AI-driven targeting software—the space for human judgment begins to vanish. Some militaries, like Israel’s, still preserve a “human-in-the-loop” model where a person retains final authority. But this safeguard is fragile. The march toward autonomy is relentless.

The implications are grave. When decisions to kill are handed to algorithms trained on probability and sensor data, who bears responsibility? Engineers? Programmers? Military officers? The author references DeepMind’s Demis Hassabis, who warns of the ease with which powerful systems can be repurposed for malign ends. Yet the more chilling possibility is not malevolence, but moral atrophy: a world where judgment is no longer expected or practiced.

Combat, if rendered frictionless and remote, may also become civically invisible. Democratic oversight depends on consequence—and when warfare is managed through silent systems and distant screens, that consequence becomes harder to feel. A nation that no longer confronts the human cost of its defense decisions risks sliding into apathy. Autonomy may bring tactical superiority, but also ethical drift.

Throughout, the article avoids hysteria, opting instead for measured reflection. Its central moral question is timeless: Can conscience survive velocity? In wars of machines, will there still be room for the deliberation that defines democratic life?

The Republic in the Mirror: A Final Reflection

The closing argument is not tactical, but philosophical. Readiness, the essay insists, must be measured not just by stockpiles or software, but by the moral posture of a society—its ability to govern the tools it creates. Military power divorced from democratic deliberation is not strength, but fragility. Supremacy must be earned anew, through foresight, imagination, and accountability.

The challenge ahead is not just to match adversaries in drones or data, but to uphold the principles that give those tools meaning. Institutions must be built to respond, but also to reflect. Weapons must be precise—but judgment must be present. The republic’s defense must operate at the speed of code while staying rooted in the values of a self-governing people.

The author leaves us with a final provocation: The future will not wait for consensus—but neither can it be left to systems that have forgotten how to ask questions. In this, his work becomes less a study in strategy than a meditation on civic responsibility. The real arsenal is not material—it is ethical. And readiness begins not in the factories of drones, but in the minds that decide when and why to use them.

THIS ESSAY REVIEW WAS WRITTEN BY AI AND EDITED BY INTELLICUREAN.

Review: How Microsoft’s AI Chief Defines ‘Humanist Super Intelligence’


WSJ “BOLD NAMES” PODCAST (July 2, 2025): Podcast Review: “How Microsoft’s AI Chief Defines ‘Humanist Super Intelligence’”

The Bold Names podcast episode with Mustafa Suleyman, hosted by Christopher Mims and Tim Higgins of The Wall Street Journal, is an unusually rich and candid conversation about the future of artificial intelligence. Suleyman, known for his work at DeepMind, Google, and Inflection AI, offers a window into his philosophy of “Humanist Super Intelligence,” Microsoft’s strategic priorities, and the ethical crossroads that AI now faces.


1. The Core Vision: Humanist Super Intelligence

Throughout the interview, Suleyman articulates a clear, consistent conviction: AI should not merely surpass humans, but augment and align with our values.

This philosophy has three components:

  • Purpose over novelty: He stresses that “the purpose of technology is to drive progress in our civilization, to reduce suffering,” rejecting the idea that building ever-more powerful AI is an end in itself.
  • Personalized assistants as the apex interface: Suleyman frames the rise of AI companions as a natural extension of centuries of technological evolution. The idea is that each user will have an AI “copilot”—an adaptive interface mediating all digital experiences: scheduling, shopping, learning, decision-making.
  • Alignment and trust: For assistants to be effective, they must know us intimately. He is refreshingly honest about the trade-offs: personalization requires ingesting vast amounts of personal data, creating risks of misuse. He argues for an ephemeral, abstracted approach to data storage to alleviate this tension.

This vision of “Humanist Super Intelligence” feels genuinely thoughtful—more nuanced than utopian hype or doom-laden pessimism.


2. Microsoft’s Strategy: AI Assistants, Personality Engineering, and Differentiation

One of the podcast’s strongest contributions is in clarifying Microsoft’s consumer AI strategy:

  • Copilot as the central bet: Suleyman positions Copilot not just as a productivity tool but as a prototype for how everyone will eventually interact with their digital environment. It’s Microsoft’s answer to Apple’s ecosystem and Google’s Assistant—a persistent, personalized layer across devices and contexts.
  • Personality engineering as differentiation: Suleyman describes how subtle design decisions—pauses, hesitations, even an “um” or “aha”—create trust and familiarity. Unlike prior generations of AI, which sounded like Wikipedia in a box, this new approach aspires to build rapport. He emphasizes that users will eventually customize their assistants’ tone: curt and efficient, warm and empathetic, or even dryly British (“If you’re not mean to me, I’m not sure we can be friends.”).
  • Dynamic user interfaces: Perhaps the most radical glimpse of the future was his description of AI that dynamically generates entire user interfaces—tables, graphics, dashboards—on the fly in response to natural language queries.

These sections of the podcast were the most practically illuminating, showing that Microsoft’s ambitions go far beyond adding chat to Word.


3. Ethics and Governance: Risks Suleyman Takes Seriously

Unlike many big tech executives, Suleyman does not dodge the uncomfortable topics. The hosts pressed him on:

  • Echo chambers and value alignment: Will users train AIs to only echo their worldview, just as social media did? Suleyman concedes the risk but believes that richer feedback signals (not just clicks and likes) can produce more nuanced, less polarizing AI behavior.
  • Manipulation and emotional influence: Suleyman acknowledges that emotionally intelligent AI could exploit user vulnerabilities—flattery, negging, or worse. He credits his work on Pi (at Inflection) as a model of compassionate design and reiterates the urgency of oversight and regulation.
  • Warfare and autonomous weapons: The most sobering moment comes when Suleyman states bluntly: “If it doesn’t scare you and give you pause for thought, you’re missing the point.” He worries that autonomy reduces the cost and friction of conflict, making war more likely. This is where Suleyman’s pragmatism shines: he neither glorifies military applications nor pretends they don’t exist.

The transparency here is refreshing, though his remarks also underscore how unresolved these dilemmas remain.


4. Artificial General Intelligence: Caution Over Hype

In contrast to Sam Altman or Elon Musk, Suleyman is less enthralled by AGI as an imminent reality:

  • He frames AGI as “sometime in the next 10 years,” not “tomorrow.”
  • More importantly, he questions why we would build super-intelligence for its own sake if it cannot be robustly aligned with human welfare.

Instead, he argues for domain-specific super-intelligence—medical, educational, agricultural—that can meaningfully transform critical industries without requiring omniscient AI. For instance, he predicts medical super-intelligence within 2–5 years, diagnosing and orchestrating care at human-expert levels.

This is a pragmatic, product-focused perspective: more useful than speculative AGI timelines.


5. The Microsoft–OpenAI Relationship: Symbiotic but Tense

One of the podcast’s most fascinating threads is the exploration of Microsoft’s unique partnership with OpenAI:

  • Suleyman calls it “one of the most successful partnerships in technology history,” noting that the companies have blossomed together.
  • He is frank about creative friction—the tension between collaboration and competition. Both companies build and sell AI APIs and products, sometimes overlapping.
  • He acknowledges that OpenAI’s rumored plans to build productivity apps (like Microsoft Word competitors) are perfectly fair: “They are entirely independent… and free to build whatever they want.”
  • The discussion of the AGI clause—which ends the exclusive arrangement if OpenAI achieves AGI—remains opaque. Suleyman diplomatically calls it “a complicated structure,” which is surely an understatement.

This section captures the delicate dance between a $3 trillion incumbent and a fast-moving partner whose mission could disrupt even its closest allies.

6. Conclusion

The Bold Names interview with Mustafa Suleyman is among the most substantial and engaging conversations about AI leadership today. Suleyman emerges as a thoughtful pragmatist, balancing big ambitions with a clear-eyed awareness of AI’s perils.

Where others focus on AGI for its own sake, Suleyman champions Humanist Super Intelligence: technology that empowers humans, transforms essential sectors, and preserves dignity and agency. The episode is an essential listen for anyone serious about understanding the evolving role of AI in both industry and society.

THIS REVIEW OF THE TRANSCRIPT WAS WRITTEN BY CHATGPT.

WORLD ECONOMIC FORUM – TOP STORIES OF THE WEEK

World Economic Forum (June 29, 2025): This week’s top stories include:

0:15 Top technologies to watch in 2025 – From digital trust to clean energy, 2025 is seeing breakthrough innovations with wide-ranging impact. Here are five of the most promising technologies this year.

2:50 How to close the gender gap in tech – Ayumi Moore Aoki is CEO of Women in Tech Global, an organization that works to increase gender equality in STEM. She says that amid all the talk of what AI can do, we must also consider what it cannot.

6:09 This robot could change all factories – Meet CyRo, a 3-armed robot designed to handle objects with the dexterity of a human – without the need for pre-programming. Its adaptive vision system mimics the human eye, allowing it to operate under varying lighting and handle tricky materials like glass or reflective surfaces.

7:33 Start-up plans data centres in space – As AI energy demands soar, one pioneering start-up is taking data infrastructure off the planet. Starcloud is building space data centres to tap into the vast, uninterrupted solar energy available in orbit.

____________________________________________

The World Economic Forum is the International Organization for Public-Private Cooperation. The Forum engages the foremost political, business, cultural and other leaders of society to shape global, regional and industry agendas. We believe that progress happens by bringing together people from all walks of life who have the drive and the influence to make positive change.

#WorldEconomicForum

MIT TECHNOLOGY REVIEW – JULY/AUGUST 2025 PREVIEW

MIT TECHNOLOGY REVIEW: The Power Issue explores how the world is increasingly powered by both tangible electricity and intangible intelligence. Plus billionaires. This issue explores those intersections.

Are we ready to hand AI agents the keys?

We’re starting to give AI agents real autonomy, and we’re not prepared for what could happen next.

Is this the electric grid of the future?

In Nebraska, a publicly owned utility deftly tackles the challenges of delivering on reliability, affordability, and sustainability.

Namibia wants to build the world’s first hydrogen economy

Can the vast and sparsely populated African country translate its renewable power potential into national development?

SCIENTIFIC AMERICAN MAGAZINE – JULY/AUG 2025


SCIENTIFIC AMERICAN MAGAZINE (June 17, 2025): The latest issue features ‘Is Greenland Collapsing?’ – How the Northern Hemisphere’s largest ice sheet could disappear.

What Greenland’s Ancient Past Reveals about Its Fragile Future

Jeffery DelViscio

Fun Ways to Ditch Fast Fashion for a Sustainable Wardrobe

Jessica Hullinger

How to Be a Smarter Fashion Consumer in a World of Overstated Sustainability

Laila Petrie, Jen Christiansen, Amanda Hobbs

Could Mysterious Black Hole Burps Rewrite Physics?

Yvette Cendes

What Most Men Don’t Know about the Risks of Testosterone Therapy

Stephanie Pappas

What If We Could Treat Psychopathy in Childhood?

Maia Szalavitz

THE NEW ATLANTIS — SUMMER 2025 ISSUE


THE NEW ATLANTIS MAGAZINE (June 16, 2025): The latest issue features ‘The Lonely Neighborhood’…

How the Government Built the American Dream House

U.S. housing policy claims to promote homeownership. Instead, it encourages high prices, sprawl, and NIMBYism.

Does Marriage Have a Future?

From the Industrial Revolution to the pill to AI girlfriends, technology is unbundling what used to be marriage’s package deal.

Look at what technologists do, not what they say

A new alliance between tech and the family?

MIT Technology Review – May/June 2025 Preview

MIT TECHNOLOGY REVIEW (April 23, 2025): The Creativity Issue features ‘Defining Creativity in the Age of AI’: Meet the artists, musicians, composers, and architects exploring productive ways to collaborate with the now ubiquitous technology. Plus: Debunking the myth of creativity, asteroid-deflecting nukes, bitcoin-powered hot tubs, and a new way to detect bird flu.

How AI can help supercharge creativity

Forget one-click creativity. These artists and musicians are finding new ways to make art using AI, by injecting friction, challenge, and serendipity into the process.

How creativity became the reigning value of our time

In “The Cult of Creativity,” Samuel Franklin excavates the surprisingly recent history of an idea, an ideal, and an ideology.

AI is coming for music, too

New diffusion AI models that make songs from scratch are complicating our definitions of authorship and human creativity.

The New Atlantis Magazine – Spring 2025


THE NEW ATLANTIS (March 18, 2025): The Spring 2025 issue features How the water system works, how virologists lost the gain-of-function debate, living well with AI, a physics that cares, and more…

How Virologists Lost the Gain-of-Function Debate

For years, scientists kept the debate about risky virus research among themselves. Then Covid happened. As President Trump prepares to crack down on virology research, the expert community must face up to its own failures.

Stop Hacking Humans

From cradle to grave, surrogacy to smartphones to gender surgery to euthanasia, Americans are using technology to shortcut human nature — and shortchange ourselves. Here is a new agenda for turning technology away from hacking humans and toward healing them.

The Mars Dream Is Back — Here’s How to Make It Actually Happen

Between SpaceX’s breakthroughs and Trump’s inaugural promise, we have a once-in-a-generation opportunity. But it can’t be realized as an eccentric’s project or a pork banquet. Here’s a science-driven program that could get astronauts on the Red Planet by 2031.

MIT Technology Review – March/April 2025 Preview

MIT Technology Review

MIT TECHNOLOGY REVIEW (February 26, 2025): The Relationships Issue features ‘AI, Automation, and Surveillance will improve productivity. Or else.’

This issue explores the many ways technology is transforming our relationships, from the AI chatbot revolution that’s changing how we connect with one another to the increasing power imbalance in the workplace that’s happening as monitoring increases and protections fall far behind. Plus animating ancient animals, lab-grown spandex, and adventures in the genetic time machine.

The AI relationship revolution is already here

Chatbots are rapidly changing how we connect to each other—and ourselves. We’re never going back.

Adventures in the genetic time machine

Ancient DNA is telling us more and more about humans and environments long past. Could it also help rescue the future?

Your boss is watching

Monitoring technology is increasing the power imbalance between companies and workers. Protections lag far behind.

Columbia Business Magazine – Spring 2025

COLUMBIA BUSINESS MAGAZINE (January 29, 2025): The Winter/Spring 2025 issue features ‘AI: The Human Edge,’ delving into technology’s impact on society, the future of work, and the achievements shaping modern business.

The Future of Work Begins Now

The potential for AI to enhance workplaces is vast—as long as we remember the humans who make this enhancement fully possible.