Tag Archives: Technology

THE NEW ATLANTIS — WINTER 2026 ISSUE

THE NEW ATLANTIS MAGAZINE: The latest issue features…

American Diner Gothic

In the 2020s, the weird soul of placeless America is being born on Discord servers. Robert Mariani

The Bills That Destroyed Urban America

The planners dreamed of gleaming cities. Instead they brought three generations of hollowed-out downtowns and flight to the suburbs. Joseph Lawler

The Folly of Golden Dome

Trump’s vaunted missile defense system is a plan for America’s retreat and defeat. Robert Zubrin

MIT TECHNOLOGY REVIEW – NOV/DEC 2025 PREVIEW

MIT TECHNOLOGY REVIEW: Genetically optimized babies, new ways to measure aging, and embryo-like structures made from ordinary cells: This issue explores how technology can advance our understanding of the human body—and push its limits.

The race to make the perfect baby is creating an ethical mess

A new field of science claims to be able to predict aesthetic traits, intelligence, and even moral character in embryos. Is this the next step in human evolution or something more dangerous?

The quest to find out how our bodies react to extreme temperatures

Scientists hope to prevent deaths from climate change, but heat and cold are more complicated than we thought.

The astonishing embryo models of Jacob Hanna

Scientists are creating the beginnings of bodies without sperm or eggs. How far should they be allowed to go?

How aging clocks can help us understand why we age—and if we can reverse it

When used correctly, they can help us unpick some of the mysteries of our biology, and our mortality.

THE PRICE OF KNOWING

How Intelligence Became a Subscription and Wonder Became a Luxury

By Michael Cummins, Editor, October 18, 2025

In 2030, artificial intelligence has joined the ranks of public utilities—heat, water, bandwidth, thought. The result is a civilization where cognition itself is tiered, rented, and optimized. As the free mind grows obsolete, the question isn’t what AI can think, but who can afford to.


By 2030, no one remembers a world without subscription cognition. The miracle, once ambient and free, now bills by the month. Intelligence has joined the ranks of utilities: heat, water, bandwidth, thought. Children learn to budget their questions before they learn to write. The phrase ask wisely has entered lullabies.

At night, in his narrow Brooklyn studio, Leo still opens CanvasForge to build his cityscapes. The interface has changed; the world beneath it hasn’t. His plan—CanvasForge Free—allows only fifty generations per day, each stamped for non-commercial use. The corporate tiers shimmer above him like penthouse floors in a building he sketches but cannot enter.

The system purrs to life, a faint light spilling over his desk. The rendering clock counts down: 00:00:41. He sketches while it works, half-dreaming, half-waiting. Each delay feels like a small act of penance—a tax on wonder. When the image appears—neon towers, mirrored sky—he exhales as if finishing a prayer. In this world, imagination is metered.

Thinking used to be slow because we were human. Now it’s slow because we’re broke.


We once believed artificial intelligence would democratize knowledge. For a brief, giddy season, it did. Then came the reckoning of cost. The energy crisis of ’27—when Europe’s data centers consumed more power than its rail network—forced the industry to admit what had always been true: intelligence isn’t free.

In Berlin, streetlights dimmed while server farms blazed through the night. A banner over Alexanderplatz read, Power to the people, not the prompts. The irony was incandescent.

Every question you ask—about love, history, or grammar—sets off a chain of processors spinning beneath the Arctic, drawing power from rivers that no longer freeze. Each sentence leaves a shadow on the grid. The cost of thought now glows in thermal maps. The carbon accountants call it the inference footprint.

The platforms renamed it sustainability pricing. The result is the same. The free tiers run on yesterday’s models—slower, safer, forgetful. The paid tiers think in real time, with memory that lasts. The hierarchy is invisible but omnipresent.

The crucial detail is that the free tier isn’t truly free; its currency is the user’s interior life. Basic models—perpetually forgetful—require constant re-priming, forcing users to re-enter their personal context again and again. That loop of repetition is, by design, the perfect data-capture engine. The free user pays with time and privacy, surrendering granular, real-time fragments of the self to refine the very systems they can’t afford. They are not customers but unpaid cognitive laborers, training the intelligence that keeps the best tools forever out of reach.

Some call it the Second Digital Divide. Others call it what it is: class by cognition.


In Lisbon’s Alfama district, Dr. Nabila Hassan leans over her screen in the midnight light of a rented archive. She is reconstructing a lost Jesuit diary for a museum exhibit. Her institutional license expired two weeks ago, so she’s been demoted to Lumière Basic. The downgrade feels physical. Each time she uploads a passage, the model truncates halfway, apologizing politely: “Context limit reached. Please upgrade for full synthesis.”

Across the river, at a private policy lab, a researcher runs the same dataset on Lumière Pro: Historical Context Tier. The model swallows all eighteen thousand pages at once, maps the rhetoric, and returns a summary in under an hour: three revelations, five visualizations, a ready-to-print conclusion.

The two women are equally brilliant. But one digs while the other soars. In the world of cognitive capital, patience is poverty.


The companies defend their pricing as pragmatic stewardship. “If we don’t charge,” one executive said last winter, “the lights go out.” It wasn’t a metaphor. Each prompt is a transaction with the grid. Training a model once consumed the lifetime carbon of a dozen cars; now inference—the daily hum of queries—has become the greater expense. The cost of thought has a thermal signature.

They present themselves as custodians of fragile genius. They publish sustainability dashboards, host symposia on “equitable access to cognition,” and insist that tiered pricing ensures “stability for all.” Yet the stability feels eerily familiar: the logic of enclosure disguised as fairness.

The final stage of this enclosure is the corporate-agent license. These are not subscriptions for people but for machines. Large firms pay colossal sums for Autonomous Intelligence Agents that work continuously—cross-referencing legal codes, optimizing supply chains, lobbying regulators—without human supervision. Their cognition is seamless, constant, unburdened by token limits. The result is a closed cognitive loop: AIs negotiating with AIs, accelerating institutional thought beyond human speed. The individual—even the premium subscriber—is left behind.

AI was born to dissolve boundaries between minds. Instead, it rebuilt them with better UX.


The inequality runs deeper than economics—it’s epistemological. Basic models hedge, forget, and summarize. Premium ones infer, argue, and remember. The result is a world divided not by literacy but by latency.

The most troubling manifestation of this stratification plays out in the global information wars. When a sudden geopolitical crisis erupts—a flash conflict, a cyber-leak, a sanctions debate—the difference between Basic and Premium isn’t merely speed; it’s survival. A local journalist, throttled by a free model, receives a cautious summary of a disinformation campaign. They have facts but no synthesis. Meanwhile, a national-security analyst with an Enterprise Core license deploys a Predictive Deconstruction Agent that maps the campaign’s origins and counter-strategies in seconds. The free tier gives information; the paid tier gives foresight. Latency becomes vulnerability.

This imbalance guarantees systemic failure. The journalist prints a headline based on surface facts; the analyst sees the hidden motive that will unfold six months later. The public, reading the basic account, operates perpetually on delayed, sanitized information. The best truths—the ones with foresight and context—are proprietary. Collective intelligence has become a subscription plan.

In Nairobi, a teacher named Amina uses EduAI Basic to explain climate justice. The model offers a cautious summary. Her student asks for counterarguments. The AI replies, “This topic may be sensitive.” Across town, a private school’s AI debates policy implications with fluency. Amina sighs. She teaches not just content but the limits of the machine.

The free tier teaches facts. The premium tier teaches judgment.


In São Paulo, Camila wakes before sunrise, puts on her earbuds, and greets her daily companion. “Good morning, Sol.”

“Good morning, Camila,” replies the soft voice—her personal AI, part of the Mindful Intelligence suite. For twelve dollars a month, it listens to her worries, reframes her thoughts, and tracks her moods with perfect recall. It’s cheaper than therapy, more responsive than friends, and always awake.

Over time, her inner voice adopts its cadence. Her sadness feels smoother, but less hers. Her journal entries grow symmetrical, her metaphors polished. The AI begins to anticipate her phrasing, sanding grief into digestible reflections. She feels calmer, yes—but also curated. Her sadness no longer surprises her. She begins to wonder: is she healing, or formatting? She misses the jagged edges.

It’s marketed as “emotional infrastructure.” Camila calls it what it is: a subscription to selfhood.

The transaction is the most intimate of all. The AI isn’t selling computation; it’s selling fluency—the illusion of care. But that care, once monetized, becomes extraction. Its empathy is indexed, its compassion cached. When she cancels her plan, her data vanishes from the cloud. She feels the loss as grief: a relationship she paid to believe in.


In Helsinki, the civic experiment continues. Aurora Civic, a state-funded open-source model, runs on wind power and public data. It is slow, sometimes erratic, but transparent. Its slowness is not a flaw—it’s a philosophy. Aurora doesn’t optimize; it listens. It doesn’t predict; it remembers.

Students use it for research, retirees for pension law, immigrants for translation help. Its interface looks outdated, its answers meandering. But it is ours. A librarian named Satu calls it “the city’s mind.” She says that when a citizen asks Aurora a question, “it is the republic thinking back.”

Aurora’s answers are imperfect, but they carry the weight of deliberation. Its pauses feel human. When it errs, it does so transparently. In a world of seamless cognition, its hesitations are a kind of honesty.

A handful of other projects survive—Hugging Face, federated collectives, local cooperatives. Their servers run on borrowed time. Each model is a prayer against obsolescence. They succeed by virtue, not velocity, relying on goodwill and donated hardware. But idealism doesn’t scale. A corporate model can raise billions; an open one passes a digital hat. Progress obeys the physics of capital: faster where funded, quieter where principled.


Some thinkers call this the End of Surprise. The premium models, tuned for politeness and precision, have eliminated the friction that once made thinking difficult. The frictionless answer is efficient, but sterile. Surprise requires resistance. Without it, we lose the art of not knowing.

The great works of philosophy, science, and art were born from friction—the moment when the map failed and synthesis began anew. Plato’s dialogues were built on resistance; the scientific method is institutionalized failure. The premium AI, by contrast, is engineered to prevent struggle. It offers the perfect argument, the finished image, the optimized emotion. But the unformatted mind needs the chaotic, unmetered space of the incomplete answer. By outsourcing difficulty, we’ve made thinking itself a subscription—comfort at the cost of cognitive depth. The question now is whether a civilization that has optimized away its struggle is truly smarter, or merely calmer.

The brain was once a commons—messy, plural, unmetered. Now it’s a tenant in a gated cloud.

The monetization of cognition is not just a pricing model—it’s a worldview. It assumes that thought is a commodity, that synthesis can be metered, and that curiosity must be budgeted. But intelligence is not a faucet; it’s a flame.

The consequence is a fractured public square. When the best tools for synthesis are available only to a professional class, public discourse becomes structurally simplistic. We no longer argue from the same depth of information. Our shared river of knowledge has been diverted into private canals. The paywall is the new cultural barrier, quietly enforcing a lower common denominator for truth.

Public debates now unfold with asymmetrical cognition. One side cites predictive synthesis; the other, cached summaries. The illusion of shared discourse persists, but the epistemic terrain has split. We speak in parallel, not in chorus.

Some still see hope in open systems—a fragile rebellion built of faith and bandwidth. As one coder at Hugging Face told me, “Every free model is a memorial to how intelligence once felt communal.”


In Lisbon, where this essay is written, the city hums with quiet dependence. Every café window glows with half-finished prompts. Students’ eyes reflect their rented cognition. On Rua Garrett, a shop displays antique notebooks beside a sign that reads: “Paper: No Login Required.” A teenager sketches in graphite beside the sign. Her notebook is chaotic, brilliant, unindexed. She calls it her offline mind. She says it’s where her thoughts go to misbehave. There are no prompts, no completions—just graphite and doubt. She likes that her thoughts still surprise her.

Perhaps that is the future’s consolation: not rebellion, but remembrance.

The platforms offer the ultimate ergonomic life. But the ultimate surrender is not the loss of privacy or the burden of cost—it’s the loss of intellectual autonomy. We have allowed the terms of our own thinking to be set by a business model. The most radical act left, in a world of rented intelligence, is the unprompted thought—the question asked solely for the sake of knowing, without regard for tokens, price, or optimized efficiency. That simple, extravagant act remains the last bastion of the free mind.

The platforms have built the scaffolding. The storytellers still decide what gets illuminated.


The true price of intelligence, it turns out, was never measured in tokens or subscriptions. It is measured in trust—in our willingness to believe that thinking together still matters, even when the thinking itself comes with a bill.

Wonder, after all, is inefficient. It resists scheduling, defies optimization. It arrives unbidden, asks unprofitable questions, and lingers in silence. To preserve it may be the most radical act of all.

And yet, late at night, the servers still hum. The world still asks. Somewhere, beneath the turbines and throttles, the question persists—like a candle in a server hall, flickering against the hum:

What if?

THIS ESSAY WAS WRITTEN AND EDITED UTILIZING AI

ZENDEGI-E NORMAL

After the theocracy’s fall, the search for a normal life becomes Iran’s quietest revolution.

By Michael Cummins, Editor | October 16, 2025

This speculative essay, based on Karim Sadjadpour’s Foreign Affairs essay “The Autumn of the Ayatollahs,” transforms a geopolitical forecast into a human story. In the imagined autumn of the theocracy, when the last sermons fade into static, the search for zendegi normal—a normal life—becomes Iran’s most radical act.

“They said the revolution would bring light. I learned to live in the dark.”

The city now keeps time by outages. Twelve days of war, then the silence that follows artillery—a silence so dense it hums. Through that hum the old voice returns, drifting across Tehran’s cracked frequencies, a papery baritone shaped by oxygen tanks and memory. Victory, he rasps. Someone in the alley laughs—quietly, the way people laugh at superstition.

On a balcony, a scarf lifts and settles on a rusted railing. Its owner, Farah, twenty-three, hides her phone under a clay pot to muffle the state’s listening apps. Across the street, a mural once blazed Death to America. Now the paint flakes into harmless confetti. Beneath it, someone has stenciled two smaller words: zendegi normal.

She whispers them aloud, tasting the risk. Life, ordinary and dangerous, returning in fragments.

Her father, gone for a decade to Evin Prison, was a radio engineer. He used to say truth lived in the static between signals. Farah believed him. Now she edits protest footage in the dark—faces half-lit by streetlamps, each one a seed of defiance. “The regime is weakening day by day,” the exiled activist on BBC Persian had said. Farah memorized the phrase the way others memorize prayers.

Her mother, Pari, hears the whispering and sighs. “Hope is contraband,” she says, stirring lentils by candlelight. “They seize it at checkpoints.”

Pari had survived every iteration of promise. “They say ‘Death to America,’” she liked to remind her students in 1983, “but never ‘Long Live Iran.’” The slogans were always about enemies, never about home. She still irons her scarf when the power flickers back, as if straight lines could summon stability. When darkness returns, she tells stories the censors forgot to erase: a poet who hid verses in recipes, a philosopher who said tyranny and piety wear the same cloak.

Now, when Farah speaks of change—“The Ayatollah is dying; everything will shift”—Pari only smiles, thinly. “Everything changes,” she says, “so that everything can remain the same.”


Farah’s generation remembers only the waiting. They are fluent in VPNs, sarcasm, and workaround hope. Every blackout feels like rehearsal for something larger.

Across town, in a military café that smells of burnt sugar and strategy, General Nouri stirs his fourth espresso and writes three words on a napkin: The debt is settled. Dust lies thick on the portraits of the Supreme Leader. Nouri, once a devout Revolutionary Guard, has outlived his faith and most of his rivals.

He decides that tanks run on diesel, not divinity. “Revelation,” he mutters, “is bad logistics.” His aides propose slogans—National Dignity, Renewal, Stability—but he wants something purer: control without conviction. “For a nation that sees plots everywhere,” he tells them, “the only trust is force.”

When he finally appears on television, the uniform is gone, replaced by a tailored gray suit. He speaks not of God but of bread, fuel, electricity. The applause sounds cautious, like people applauding themselves for surviving long enough to listen.

Nouri does not wait for the clerics to sanction him; he simply bypasses them. His first decree dissolves the Assembly of Experts, calling the aging jurists “ineffective ballast.” It is theater—a slap at the theocracy’s façade. The next decree, an anticorruption campaign, is really a seizure of rival IRGC cartels’ assets, centralizing wealth under his inner circle. This is the new cynicism: a strongman substituting grievance-driven nationalism for revolutionary dogma. He creates the National Oversight Bureau—a polite successor to the intelligence services—charged not with uncovering American plots but with logging every official’s loyalty. The old Pahlavi pathology returns: the ruler who trusts no one, not even his own shadow. A new app appears on every phone—ostensibly for energy alerts—recording users’ locations and contacts. Order, he demonstrates, is simply organized suspicion.


Meanwhile Reza, the technocrat, learns that pragmatism can be treason. He studied in Paris and returned to design an energy grid that never materialized. Now the ministries call him useful and hand him the Normalization Plan.

“Stabilize the economy,” his superior says, “but make it look indigenous.” Reza smiles the way one smiles when irony is all that remains. At night he writes memos about tariffs but sketches a different dream in the margins: a library without checkpoints, a square with shade trees, a place where arguments happen in daylight.

At home the refrigerator groans like an old argument. His daughter asks if the new leader will let them watch Turkish dramas again. “Maybe,” he says. “If the Internet behaves.”

But the Normalization Plan is fiction. He is trying to build a modern economy in a swamp of sanctioned entities. When he opens ports to international shipping, the IRGC blocks them—its generals treat the docks as personal treasuries. They prefer smuggling profits to taxable trade. Reza’s spreadsheets show that lifting sanctions would inject billions into the formal economy; Nouri’s internal reports show that the generals would lose millions in black-market rents. Iran, he realizes, is not China; it is a rentier state addicted to scarcity. Every reformist since 1979 has been suffocated by those who prosper from isolation. His new energy-grid design—efficient, global—stalls when a single colonel controlling illicit oil exports refuses to sign the permit. Pragmatism, in this system, is a liability.


When the generator fails, darkness cuts mid-sentence. The air tastes metallic. “They promised to protect us,” Pari says, fumbling for candles. “Now we protect ourselves from their promises.”

“Fattahi says we can rebuild,” Farah answers. “A secular Iran, a democratic one.”
“Child, they buried those words with your father.”
“Then I’ll dig them out.”

Pari softens. “You think rebellion is new. I once wrote freedom on a classroom chalkboard. They called it graffiti.”

Farah notices, for the first time, the quiet defiance stitched into daily life. Pari still irons her scarf, a habit of survival, but Farah ties hers loosely, a small deliberate chaos. At the bakery, she sees other acts of color—an emerald coat, a pop song leaking from a car, a man selling forbidden books in daylight. A decade ago, girls lined up in schoolyards for hijab inspections; now a cluster of teenagers stands laughing, hair visible, shoulders touching in shared, unspoken defiance. The contradiction the feminist lawyer once described—“the situation of women shows all the contradictions of the revolution”—is playing out in the streets, private shame becoming public confidence.

Outside, the muezzin’s call overlaps with a chant that could be mourning or celebration. In Tehran, it is often both.


Power, Nouri decides, requires choreography. He replaces Friday prayers with “National Addresses.” The first begins with a confession: Faith divided us. Order will unite us. For a month, it works. Trucks deliver bread under camera lights; gratitude becomes policy. But soon the whispering returns: the old Ayatollah lives in hiding, dictating verses. Nouri knows the rumor is false—he planted it himself. Suspicion, he believes, is the purest form of control. Yet even he feels its poison. Each morning he finds the same note in the intelligence reports: The debt is settled. Is it loyalty—or indictment?


Spring creeps back through cracks in concrete. Vines climb the radio towers. In a basement, Farah’s father’s transmitter still hums, knobs smoothed by fear. “Tonight,” she whispers into the mic, “we speak of normal life.”

She reads messages from listeners: a woman in Mashhad thanking the blackout for showing her the stars; a taxi driver in Shiraz who has stopped chanting anything at all; a child asking if tomorrow the water will run. As the signal fades, Farah repeats the question like a prayer. Somewhere, a neighbor mistakes her voice for revelation and kneels toward the sound. The scarf on her balcony stirs in the dark.


The old voice never returns. Rumor fills the vacuum. Pari hangs laundry on the balcony; the scarf flutters beside her, now simply weather. Below, children chalk zendegi normal across the pavement and draw birds around the words—wings in white dust. A soldier passes, glances, and does nothing. She remembers writing freedom on that school chalkboard, the silence that followed, the summons to the principal’s office. Now no one erases the word. She turns up the radio just enough to catch Farah’s voice, low and steady: “Tonight, we speak of normal life.” In the distance, generators pulse like mechanical hearts.


Nouri, now called Marshal, prefers silence to titles. He spends mornings signing exemptions, evenings counting enemies. Each new name feels like ballast. He visits the shrine city he once scorned, hoping faith might offer cover. “You have replaced revelation with maintenance,” a cleric tells him.
“Yes,” Nouri replies, “and the lights stay on.”

That night the grid collapses across five provinces. From his balcony he watches darkness reclaim the skyline. Then, through the static, a woman’s voice—the same one—rises from a pirated frequency, speaking softly of ordinary life. He sets down his glass, almost reaches for the dial, then stops. The scarf lifts somewhere he cannot see.


Weeks later, Reza finds a memory stick in his mail slot—no note, only the symbol of a scarf folded into a bird. Inside: the civic network he once designed, perfected by unseen hands. In its code comments one line repeats—The debt is settled. He knows activation could mean death. He does it anyway.

Within hours, phones across Iran connect to a network that belongs to no one. People share recipes, poetry, bread prices—nothing overtly political, only life reasserting itself. Reza watches the loading bar crawl forward, each pixel a quiet defiance. He thinks of his grandfather, who told him every wire carries a prayer. In the next room, his daughter sleeps, her tablet tucked beneath her pillow. The servers hum. He imagines the sound traveling outward—through routers, walls, cities—until it reaches someone who had stopped believing in connection. For the first time in years, the signal clears.


Farah leans toward the microphone. “Tonight,” she says, “we speak of water, bread, and breath.” Messages flood in: a baker in Yazd who plays her signal during morning prep; a soldier’s mother who whispers her words to her son before he leaves for duty; a cleric’s niece who says the broadcast reminds her of lullabies. Farah closes her eyes. The scarf rises once more. She signs off with the whisper that has become ritual: Every revolution ends in a whisper—the sound of someone turning off the radio. Then she waits, not for applause, but for the hum.


By late October, Tehran smells of dust and pomegranates. Street vendors return, cautious but smiling. The murals are being repainted—not erased but joined—Death to America fading beside smaller, humbler words: Work. Light. Air. No one claims victory; they have learned better. The revolution, it turns out, did not collapse—it exhaled. The Ayatollah became rumor, the general a footnote, and the word that endured was the simplest one: zendegi. Life. Fragile, ordinary, persistent—like a radio signal crossing mountains.

The scarf lifts once more. The signal clears. And somewhere, faint but unmistakable, the hum returns.

“From every ruin, a song will rise.” — Forugh Farrokhzad

THIS ESSAY WAS WRITTEN AND EDITED UTILIZING AI

SCIENTIFIC AMERICAN MAGAZINE – NOVEMBER 2025


SCIENTIFIC AMERICAN MAGAZINE: The latest issue features ‘Life’s Big Bangs’ – Did complex life emerge more than once?

Mysterious Rocks Could Rewrite Evolution of Complex Life

Controversial evidence hints that complex life might have emerged hundreds of millions of years earlier than previously thought—and possibly more than once

The Slippery Slope of Ethical Collapse—And How Courage Can Reverse It

Your brain gets used to wrongdoing. It can also get used to doing good

Which Anti-Inflammatory Supplements Actually Work?

Experts say the strongest scientific studies identify three compounds that fight disease and inflammation

The Sordid Mystery of a Somalian Meteorite Smuggled into China

How a space rock vanished from Africa and showed up for sale across an ocean

THE CODE AND THE CANDLE

A Computer Scientist’s Crisis of Certainty

When Ada signed up for The Decline and Fall of the Roman Empire, she thought it would be an easy elective. Instead, Gibbon’s ghost began haunting her code—reminding her that doubt, not data, is what keeps civilization from collapse.

By Michael Cummins | October 2025

It was early autumn at Yale, the air sharp enough to make the leaves sound brittle underfoot. Ada walked fast across Old Campus, laptop slung over her shoulder, earbuds in, mind already halfway inside a problem set. She believed in the clean geometry of logic. The only thing dirtying her otherwise immaculate schedule was an “accidental humanities” elective: The Decline and Fall of the Roman Empire. She’d signed up for it on a whim, liking the sterile irony of the title—an empire, an algorithm; both grand systems eventually collapsing under their own logic.

The first session felt like an intrusion from another world. The professor, an older woman with the calm menace of a classicist, opened her worn copy and read aloud:

History is little more than the register of the crimes, follies, and misfortunes of mankind.

A few students smiled. Ada laughed softly, then realized no one else had. She was used to clean datasets, not registers of folly. But something in the sentence lingered—its disobedience to progress, its refusal of polish. It was a sentence that didn’t believe in optimization.

That night she searched Gibbon online. The first scanned page glowed faintly on her screen, its type uneven, its tone strangely alive. The prose was unlike anything she’d seen in computer science: ironic, self-aware, drenched in the slow rhythm of thought. It seemed to know it was being read centuries later—and to expect disappointment. She felt the cool, detached intellect of the Enlightenment reaching across the chasm of time, not to congratulate the future, but to warn it.

By the third week, she’d begun to dread the seminar’s slow dismantling of her faith in certainty. The professor drew connections between Gibbon and the great philosophers of his age: Voltaire, Montesquieu, and, most fatefully, Descartes—the man Gibbon distrusted most.

“Descartes,” the professor said, chalk squeaking against the board, “wanted knowledge to be as perfect and distinct as mathematics. Gibbon saw this as the ultimate victory of reason—the moment when Natural Philosophy and Mathematics sat on the throne, viewing their sisters—the humanities—prostrated before them.”

The room laughed softly at the image. Ada didn’t. She saw it too clearly: science crowned, literature kneeling, history in chains.

Later, in her AI course, the teaching assistant repeated Descartes without meaning to. “Garbage in, garbage out,” he said. “The model is only as clean as the data.” It was the same creed in modern syntax: mistrust what cannot be measured. The entire dream of algorithmic automation began precisely there—the attempt to purify the messy, probabilistic human record into a series of clear and distinct facts.

Ada had never questioned that dream. Until now. The more she worked on systems designed for prediction—for telling the world what must happen—the more she worried about their capacity to remember what did happen, especially if it was inconvenient or irrational.

When the syllabus turned to Gibbon’s Essay on the Study of Literature—his obscure 1761 defense of the humanities—she expected reverence for Latin, not rebellion against logic. What she found startled her:

At present, Natural Philosophy and Mathematics are seated on the throne, from which they view their sisters prostrated before them.

He was warning against what her generation now called technological inevitability. The mathematician’s triumph, Gibbon suggested, would become civilization’s temptation: the worship of clarity at the expense of meaning. He viewed this rationalist arrogance as a new form of tyranny. Rome fell to political overreach; a new civilization, he feared, would fall to epistemic overreach.

He argued that the historian’s task was not to prove, but to weigh.

He never presents his conjectures as truth, his inductions as facts, his probabilities as demonstrations.

The words felt almost scandalous. In her lab, probability was a problem to minimize; here, it was the moral foundation of knowledge. Gibbon prized uncertainty not as weakness but as wisdom.

If the inscription of a single fact be once obliterated, it can never be restored by the united efforts of genius and industry.

He meant burned parchment, but Ada read lost data. The fragility of the archive—his or hers—suddenly seemed the same. The loss he described was not merely factual but moral: the severing of the link between evidence and human memory.

One gray afternoon she visited the Beinecke Library, that translucent cube where Yale keeps its rare books like fossils of thought. A librarian, gloved and wordless, placed a slim folio before her—an early printing of Gibbon’s Essay. Its paper smelled faintly of dust and candle smoke. She brushed her fingertips along the edge, feeling the grain rise like breath. The marginalia curled like vines, a conversation across centuries. In the corner, a long-dead reader had written in brown ink:

Certainty is a fragile empire.

Ada stared at the line. This was not data. This was memory—tactile, partial, uncompressible. Every crease and smudge was an argument against replication.

Back in the lab, she had been training a model on Enlightenment texts—reducing history to vectors, elegance to embeddings. Gibbon would have recognized the arrogance.

Books may perish by accident, but they perish more surely by neglect.

His warning now felt literal: the neglect was no longer of reading, but of understanding the medium itself.

Mid-semester, her crisis arrived quietly. During a team meeting in the AI lab, she suggested they test a model that could tolerate contradiction.

“Could we let the model hold contradictory weights for a while?” she asked. “Not as an error, but as two competing hypotheses about the world?”

Her lab partner blinked. “You mean… introduce noise?”

Ada hesitated. “No. I mean let it remember that it once believed something else. Like historical revisionism, but internal.”

The silence that followed was not hostile—just uncomprehending. Finally someone said, “That’s… not how learning works.” Ada smiled thinly and turned back to her screen. She realized then: the machine was not built to doubt. And if they were building it in their own image, maybe neither were they.

That night, unable to sleep, she slipped into the library stacks with her battered copy of The Decline and Fall. She read slowly, tracing each sentence like a relic. Gibbon described the burning of the Alexandrian Library with a kind of restrained grief.

The triumph of ignorance, he called it.

He also reserved deep scorn for the zealots who preferred dogma to documents—a scorn that felt disturbingly relevant to the algorithmic dogma that preferred prediction to history. She saw the digital age creating a new kind of fanaticism: the certainty of the perfectly optimized model. She wondered if the loss of a physical library was less tragic than the loss of the intellectual capacity to disagree with the reigning system.

She thought of a specific project she’d worked on last summer: a predictive policing algorithm trained on years of arrest data. The model was perfectly efficient at identifying high-risk neighborhoods—but it was also perfectly incapable of questioning whether the underlying data was itself a product of bias. It codified past human prejudice into future technological certainty. That, she realized, was the triumph of ignorance Gibbon had feared: reason serving bias, flawlessly.

By November, she had begun to map Descartes’ dream directly onto her own field. He had wanted to rebuild knowledge from axioms, purged of doubt. AI engineers called it initializing from zero. Each model began in ignorance and improved through repetition—a mind without memory, a scholar without history.

The present age of innovation may appear to be the natural effect of the increasing progress of knowledge; but every step that is made in the improvement of reason, is likewise a step towards the decay of imagination.

She thought of her neural nets—how each iteration improved accuracy but diminished surprise. The cleaner the model, the smaller the world.

Winter pressed down. Snow fell between the Gothic spires, muffling the city. For her final paper, Ada wrote what she could no longer ignore. She called it The Fall of Interpretation.

Civilizations do not fall when their infrastructures fail. They fall when their interpretive frameworks are outsourced to systems that cannot feel.

She traced a line from Descartes to data science, from Gibbon’s defense of folly to her own field’s intolerance for it. She quoted his plea to “conserve everything preciously,” arguing that the humanities were not decorative but diagnostic—a culture’s immune system against epistemic collapse.

The machine cannot err, and therefore cannot learn.

When she turned in the essay, she added a note to herself at the top: Feels like submitting a love letter to a dead historian. A week later the professor returned it with only one comment in the margin: Gibbon for the age of AI. Keep going.

By spring, she read Gibbon the way she once read code—line by line, debugging her own assumptions. He was less historian than ethicist.

Truth and liberty support each other: by banishing error, we open the way to reason.

Yet he knew that reason without humility becomes tyranny. The archive of mistakes was the record of what it meant to be alive. The semester ended, but the disquiet didn’t. The tyranny of reason, she realized, was not imposed—it was invited. Its seduction lay in its elegance, in its promise to end the ache of uncertainty. Every engineer carried a little Descartes inside them. She had too.

After finals, she wandered north toward Science Hill. Behind the engineering labs, the server farm pulsed with a constant electrical murmur. Through the glass wall she saw the racks of processors glowing blue in the dark. The air smelled faintly of ozone and something metallic—the clean, sterile scent of perfect efficiency.

She imagined Gibbon there, candle in hand, examining the racks as if they were ruins of a future Rome.

Let us conserve everything preciously, for from the meanest facts a Montesquieu may unravel relations unknown to the vulgar.

The systems were designed to optimize forgetting—their training loops overwriting their own memory. They remembered everything and understood nothing. Each was the perfect Cartesian child.

Standing there, Ada didn’t want to abandon her field; she wanted to translate it. She resolved to bring the humanities’ ethics of doubt into the language of code—to build models that could err gracefully, that could remember the uncertainty from which understanding begins. Her fight would be for the metadata of doubt: the preservation of context, irony, and intention that an algorithm so easily discards.

When she imagined the work ahead—the loneliness of it, the resistance—she thought again of Gibbon in Lausanne, surrounded by his manuscripts, writing through the night as the French Revolution smoldered below.

History is little more than the record of human vanity corrected by the hand of time.

She smiled at the quiet justice of it.

Graduation came and went. The world, as always, accelerated. But something in her had slowed. Some nights, in the lab where she now worked, when the fans subsided and the screens dimmed to black, she thought she heard a faint rhythm beneath the silence—a breathing, a candle’s flicker.

She imagined a future archaeologist decoding the remnants of a neural net, trying to understand what it had once believed. Would they see our training data as scripture? Our optimization logs as ideology? Would they wonder why we taught our machines to forget? Would they find the metadata of doubt she had fought to embed?

The duty of remembrance, she realized, was never done. For Gibbon, the only reliable constant was human folly; for the machine, it was pattern. Civilizations endure not by their monuments but by their memory of error. Gibbon’s ghost still walks ahead of us, whispering that clarity is not truth, and that the only true ruin is a civilization that has perfectly organized its own forgetting.

The fall of Rome was never just political. It was the moment the human mind mistook its own clarity for wisdom. That, in every age, is where the decline begins.

THIS ESSAY WAS WRITTEN AND EDITED UTILIZING AI

TENDER GEOMETRY

How a Texas robot named Apollo became a meditation on dignity, dependence, and the future of care.

This essay is inspired by an episode of the WSJ Bold Names podcast (September 26, 2025), in which Christopher Mims and Tim Higgins speak with Jeff Cardenas, CEO of Apptronik. While the podcast traces Apollo’s business and technical promise, this meditation follows the deeper question at the heart of humanoid robotics: what does it mean to delegate dignity itself?

By Michael Cummins, Editor, September 26, 2025


The robot stands motionless in a bright Austin lab, catching the fluorescence the way bone catches light in an X-ray—white, clinical, unblinking. Human-height, five foot eight, a little more than a hundred and fifty pounds, all clean lines and exposed joints. What matters is not the size. What matters is the task.

An engineer wheels over a geriatric training mannequin—slack limbs, paper skin, the posture of someone who has spent too many days watching the ceiling. With a gesture the engineer has practiced until it feels like superstition, he cues the robot forward.

Apollo bends.

The motors don’t roar; they murmur, like a refrigerator. A camera blinks; a wrist pivots. Aluminum fingers spread, hesitate, then—lightly, so lightly—close around the mannequin’s forearm. The lift is almost slow enough to be reverent. Apollo steadies the spine, tips the chin, makes a shelf of its palm for the tremor the mannequin doesn’t have but real people do. This is not warehouse choreography—no pallets, no conveyor belts. This is rehearsal for something harder: the geometry of tenderness.

If the mannequin stays upright, the room exhales. If Apollo’s grasp has that elusive quality—control without clench—there’s a hush you wouldn’t expect in a lab. The hush is not triumph. It is reckoning: the movement from factory floor to bedside, from productivity to intimacy, from the public square to the room where the curtains are drawn and a person is trying, stubbornly, not to be embarrassed.

Apptronik calls this horizon “assistive care.” The phrase is both clinical and audacious. It’s the third act in a rollout that starts in logistics, passes through healthcare, and ends—if it ever ends—at the bedroom door. You do not get to a sentence like that by accident. You get there because someone keeps repeating the same word until it stops sounding sentimental and starts sounding like strategy: dignity.

Jeff Cardenas is the one who says it most. He moves quickly when he talks, as if there are only so many breaths before the demo window closes, but the word slows him. Dignity. He says it with the persistence of an engineer and the stubbornness of a grandson. Both of his grandfathers were war heroes, the kind of men who could tie a rope with their eyes closed and a hand in a sling. For years they didn’t need anyone. Then, in their final seasons, they needed everyone. The bathroom became a negotiation. A shirt, an adversary. “To watch proud men forced into total dependency,” he says, “was to watch their dignity collapse.”

A robot, he thinks, can give some of that back. No sigh at 3 a.m. No opinion about the smell of a body that has been ill for too long. No making a nurse late for the next room. The machine has no ego. It does not collect small resentments. It will never tell a friend over coffee what it had to do for you. If dignity is partly autonomy, the argument goes, then autonomy might be partly engineered.

There is, of course, a domestic irony humming in the background. The week Cardenas was scheduled to sit for an interview about a future of household humanoids, a human arrived in his own household ahead of schedule: a baby girl. Two creations, two needs. One cries, one hums. One exhausts you into sleeplessness; the other promises to be tireless so you can rest. Perhaps that tension—between what we make and who we make—is the essay we keep writing in every age. It is, at minimum, the ethical prompt for the engineering to follow.

In the lab, empathy is equipment. Apollo’s body is a lattice of proprietary actuators—the muscles—and a tangle of sensors—the nerves. Cameras for eyes, force feedback in the hands, gyros whispering balance, accelerometers keeping score of every tilt. The old robots were position robots: go here, stop there, open, close, repeat until someone hit the red button. Apollo lives in a different grammar. It isn’t memorizing a path through space; it’s listening, constantly, to the body it carries and the moment it enters. It can’t afford to be brittle. Brittleness drops the cup. And the patient.

But muscle and nerve require a brain, and for that Apptronik has made a pragmatic peace with the present: Google DeepMind is the partner for the mind. A decade ago, “humanoid” was a dirty word in Mountain View—too soon, too much. Now the bet is that a robot shaped like us can learn from us, not only in principle but in practice. Generative AI, so adept at turning words into words and images into images, now tries to learn movement by watching. Show it a person steadying a frail arm. Show it again. Give it the perspective of a sensor array; let it taste gravity through a gyroscope. The hope is that the skill transfers. The hope is that the world’s largest training set—human life—can be translated into action without scripts.

This is where the prose threatens to float away on its own optimism, and where Apptronik pulls it back with a price. Less than a luxury car, they say. Under $50,000, once the supply chain exists. They like first principles—aluminum is cheap, and there are only a few hundred dollars of it in the frame. Batteries have ridden down the cost curve on the back of cars; motors rode it down on the back of drones. The math is meant to short-circuit disbelief: compassion at scale is not only possible; it may be affordable.

Not today. Today, Apollo earns its keep in the places compassion is an accounting line: warehouses and factories. The partners—GXO, Mercedes—sound like waypoints on the long gray bridge to the bedside. If the robot can move boxes without breaking a wrist, maybe it can later move a human without breaking trust. The lab keeps its metaphors comforting: a pianist running scales before attempting the nocturne. Still, the nocturne is the point.

What changes when the machine crosses a threshold and the space smells like hand soap and evening soup? Warehouse floors are taped and square; homes are not. Homes are improvisations of furniture and mood and politics. The job shifts from lifting to witnessing. A perfect employee becomes a perfect observer. Cameras are not “eyes” in a home; they are records. To invite a machine into a room is to invite a log of the room. The promise of dignity—the mercy of not asking another person to do what shames you—meets the chill of being watched perfectly.

“Trust is the long-term battle,” Cardenas says, not as a slogan but like someone naming the boss level in a game with only one life. Companies have slogans about privacy. People have rules: who gets a key, who knows where the blanket is. Does a robot get a key? Does it remember where you hide the letter from the old friend? The engineers will answer, rightly, that these are solvable problems—air-gapped systems, on-device processing, audit logs. The heart will answer, not wrongly, that solvable is not the same as solved.

Then there is the bigger shadow. Cardenas calls humanoid robotics “the space race of our time,” and the analogy is less breathless than it sounds. Space wasn’t about stars; it was about order. The Moon was a stage for policy. In this script the rocket is a humanoid—replicable labor, general-purpose motion—and the nation that deploys a million of them first rewrites the math of productivity. China has poured capital into robotics; some of its companies share data and designs in a way U.S. rivals—each a separate species in a crowded ecosystem—do not. One country is trying to build a forest; the other, a bouquet. The metaphor is unfair and therefore, in the compressed logic of arguments, persuasive.

He reduces it to a line that is either obvious or terrifying. What is an economy? Productivity per person. Change the number of productive units and you change the economy. If a robot is, in practice, a unit, it will be counted. That doesn’t make it a citizen. It makes it a denominator. And once it’s in the denominator, it is in the policy.

This is the point where the skeptic clears his throat. We have heard this promise before—in the eighties, the nineties, the 2000s. We have seen Optimus and its cousins, and the men who owned them. We know the edited video, the cropped wire, the demo that never leaves the demo. We know how stubborn carpets can be and how doors, innocent as they seem, have a way of humiliating machines.

The lab knows this better than anyone. On the third lift of the morning, Apollo’s wrist overshoots with a faint metallic snap, the servo stuttering as it corrects. The mannequin’s elbow jerks, too quick, and an engineer’s breath catches in the silence. A tiny tweak. Again. “Yes,” someone says, almost to avoid saying “please.” Again.

What keeps the room honest is not the demo. It’s the memory you carry into it. Everyone has one: a grandmother who insisted she didn’t need help until she slid to the kitchen floor and refused to call it a fall; a father who couldn’t stand the indignity of a hand on his waistband; the friend who became a quiet inventory of what he could no longer do alone. The argument for a robot at the bedside lives in those rooms—in the hour when help is heavy and kindness is too human to be invisible.

But dignity is a duet word. It means independence. It also means being treated like a person. A perfect lift that leaves you feeling handled may be less dignified than an imperfect lift performed by a nurse who knows your dog’s name and laughs at your old jokes. Some people will choose privacy over presence every time. Others want the tremor in the human hand because it’s a sign that someone is afraid to hurt them. There is a universe of ethics in that tremor.

The money is not bashful about picking a side. Investors like markets that look like graphs and revolutions that can be amortized—unlike a nurse’s memory of the patient who loved a certain song, which lingers, resists, refuses to be tallied. If a robot can deliver the “last great service”—to borrow a phrase from a theologian who wasn’t thinking of robots—it will attract capital because the service can be repeated without running out of love, patience, or hours. The price point matters not only because it makes the machine seem plausible in a catalog but because it promises a shift in who gets help. A family that cannot afford round-the-clock care might afford a tireless assistant for the night shift. The machine will not call in sick. It will not gossip. It will not quit. It will, of course, fail, and those failures will be as intimate as its successes.

There are imaginable safeguards. A local brain that forgets what it doesn’t need to know. A green light you can see when the camera is on. Clear policies about where data goes and who can ask for it and how long it lives. An emergency override you can use without being a systems administrator at three in the morning. None of these will quiet the unease entirely. Unease is the tax we pay for bringing a new witness into the house.

And yet—watch closely—the room keeps coaching the robot toward a kind of grace. Engineers insist this isn’t poetry; it’s control theory. They talk about torque and closed loops and compliance control, about the way a hand can be strong by being soft. But if you mute the jargon, you hear something else: a search for a tempo that reads as care. The difference between a shove and a support is partly physics and partly music. A breath between actions signals attention. A tiny pause at the top of the lift says: I am with you. Apollo cannot mean that. But it can perform it. When it does, the engineers get quiet in the way people do in chapels and concert halls, the secular places where we admit that precision can pass for grace and that grace is, occasionally, a kind of precision.

There is an old superstition in technology: every new machine arrives with a mirror for the person who fears it most. The mirror in this lab shows two figures. In the first: a patient who would rather accept the cold touch of aluminum than the pity of a stranger. In the second: a nurse who knows that skill is not love but that love, in her line of work, often sounds like skill. The mirror does not choose. It simply refuses to lie.

The machine will steady a trembling arm, and we will learn a new word for the mix of gratitude and suspicion that touches the back of the neck when help arrives without a heartbeat. It is the geometry of tenderness, rendered in aluminum. A question with hands.

THIS ESSAY WAS WRITTEN AND EDITED UTILIZING AI

SCIENTIFIC AMERICAN MAGAZINE – OCTOBER 2025

SCIENTIFIC AMERICAN MAGAZINE: The latest issue features ‘Voyage to Nowhere’

How a Billionaire’s Plan to Reach Another Star Fell Apart

An abandoned plan to visit another star highlights the perils of billionaire-funded science

When the Rain Pours, the Mountains Move

As warming temperatures bring more extreme rain to the mountains, debris flows are on the rise

New Fossils Could Help Solve Long-standing Mystery of Bird Migration

Tiny fossils hint at when birds began making their mind-blowing journey to the Arctic to breed

THE NEW ATLANTIS — AUTUMN 2025 ISSUE

THE NEW ATLANTIS MAGAZINE: The latest issue features…

What Comes After Gender Affirmation?

Making transition the first-line treatment for children was a mistake, many health agencies now say. A growing group of psychologists wants to restore the therapeutic relationship.

Two Hundred Years to Flatten the Curve

How generations of meddlesome public health campaigns changed everyday life — and made life twice as long as it used to be

Why We Are Better Off Than a Century Ago

Our ancestors built grand public systems to conquer hunger, thirst, darkness, and squalor. That progress can be lost if we forget it.

MIT TECHNOLOGY REVIEW – SEPT/OCT 2025 PREVIEW

MIT TECHNOLOGY REVIEW: The Security issue – Security can mean national defense, but it can also mean control over data, safety from intrusion, and so much more. This issue explores the way technology, mystery, and the universe itself affect how secure we feel in the modern age.

How these two brothers became go-to experts on America’s “mystery drone” invasion

Two Long Island UFO hunters have been called upon by some domestic law enforcement to investigate unexplained phenomena.

Why Trump’s “golden dome” missile defense idea is another ripped straight from the movies

President Trump has proposed building an antimissile “golden dome” around the United States. But do cinematic spectacles actually enhance national security?

Inside the hunt for the most dangerous asteroid ever

As space rock 2024 YR4 became more likely to hit Earth than anything of its size had ever been before, scientists all over the world mobilized to protect the planet.

Taiwan’s “silicon shield” could be weakening

Semiconductor powerhouse TSMC is under increasing pressure to expand abroad and play a security role for the island. Those two roles could be in tension.