Tag Archives: Technology

THE CODE AND THE CANDLE

A Computer Scientist’s Crisis of Certainty

When Ada signed up for The Decline and Fall of the Roman Empire, she thought it would be an easy elective. Instead, Gibbon’s ghost began haunting her code—reminding her that doubt, not data, is what keeps civilization from collapse.

By Michael Cummins | October 2025

It was early autumn at Yale, the air sharp enough to make the leaves sound brittle underfoot. Ada walked fast across Old Campus, laptop slung over her shoulder, earbuds in, mind already halfway inside a problem set. She believed in the clean geometry of logic. The only thing dirtying her otherwise immaculate schedule was an “accidental humanities” elective: The Decline and Fall of the Roman Empire. She’d signed up for it on a whim, liking the sterile irony of the title—an empire, an algorithm; both grand systems eventually collapsing under their own logic.

The first session felt like an intrusion from another world. The professor, an older woman with the calm menace of a classicist, opened her worn copy and read aloud:

History is little more than the register of the crimes, follies, and misfortunes of mankind.

A few students smiled. Ada laughed softly, then realized no one else had. She was used to clean datasets, not registers of folly. But something in the sentence lingered—its disobedience to progress, its refusal of polish. It was a sentence that didn’t believe in optimization.

That night she searched Gibbon online. The first scanned page glowed faintly on her screen, its type uneven, its tone strangely alive. The prose was unlike anything she’d seen in computer science: ironic, self-aware, drenched in the slow rhythm of thought. It seemed to know it was being read centuries later—and to expect disappointment. She felt the cool, detached intellect of the Enlightenment reaching across the chasm of time, not to congratulate the future, but to warn it.

By the third week, she’d begun to dread the seminar’s slow dismantling of her faith in certainty. The professor drew connections between Gibbon and the great philosophers of his age: Voltaire, Montesquieu, and, most fatefully, Descartes—the man Gibbon distrusted most.

“Descartes,” the professor said, chalk squeaking against the board, “wanted knowledge to be as perfect and distinct as mathematics. Gibbon saw this as the ultimate victory of reason—the moment when Natural Philosophy and Mathematics sat on the throne, viewing their sisters—the humanities—prostrated before them.”

The room laughed softly at the image. Ada didn’t. She saw it too clearly: science crowned, literature kneeling, history in chains.

Later, in her AI course, the teaching assistant repeated Descartes without meaning to. “Garbage in, garbage out,” he said. “The model is only as clean as the data.” It was the same creed in modern syntax: mistrust what cannot be measured. The entire dream of algorithmic automation began precisely there—the attempt to purify the messy, probabilistic human record into a series of clear and distinct facts.

Ada had never questioned that dream. Until now. The more she worked on systems designed for prediction—for telling the world what must happen—the more she worried about their capacity to remember what did happen, especially if it was inconvenient or irrational.

When the syllabus turned to Gibbon’s Essay on the Study of Literature—his obscure 1761 defense of the humanities—she expected reverence for Latin, not rebellion against logic. What she found startled her:

At present, Natural Philosophy and Mathematics are seated on the throne, from which they view their sisters prostrated before them.

He was warning against what her generation now called technological inevitability. The mathematician’s triumph, Gibbon suggested, would become civilization’s temptation: the worship of clarity at the expense of meaning. He viewed this rationalist arrogance as a new form of tyranny. Rome fell to political overreach; a new civilization, he feared, would fall to epistemic overreach.

He argued that the historian’s task was not to prove, but to weigh.

He never presents his conjectures as truth, his inductions as facts, his probabilities as demonstrations.

The words felt almost scandalous. In her lab, probability was a problem to minimize; here, it was the moral foundation of knowledge. Gibbon prized uncertainty not as weakness but as wisdom.

If the inscription of a single fact be once obliterated, it can never be restored by the united efforts of genius and industry.

He meant burned parchment, but Ada read lost data. The fragility of the archive—his or hers—suddenly seemed the same. The loss he described was not merely factual but moral: the severing of the link between evidence and human memory.

One gray afternoon she visited the Beinecke Library, that translucent cube where Yale keeps its rare books like fossils of thought. A librarian, gloved and wordless, placed a slim folio before her—an early printing of Gibbon’s Essay. Its paper smelled faintly of dust and candle smoke. She brushed her fingertips along the edge, feeling the grain rise like breath. The marginalia curled like vines, a conversation across centuries. In the corner, a long-dead reader had written in brown ink:

Certainty is a fragile empire.

Ada stared at the line. This was not data. This was memory—tactile, partial, uncompressible. Every crease and smudge was an argument against replication.

Back in the lab, she had been training a model on Enlightenment texts—reducing history to vectors, elegance to embeddings. Gibbon would have recognized the arrogance.

Books may perish by accident, but they perish more surely by neglect.

His warning now felt literal: the neglect was no longer of reading, but of understanding the medium itself.

Mid-semester, her crisis arrived quietly. During a team meeting in the AI lab, she suggested they test a model that could tolerate contradiction.

“Could we let the model hold contradictory weights for a while?” she asked. “Not as an error, but as two competing hypotheses about the world?”

Her lab partner blinked. “You mean… introduce noise?”

Ada hesitated. “No. I mean let it remember that it once believed something else. Like historical revisionism, but internal.”

The silence that followed was not hostile—just uncomprehending. Finally someone said, “That’s… not how learning works.” Ada smiled thinly and turned back to her screen. She realized then: the machine was not built to doubt. And if they were building it in their own image, maybe neither were they.
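Ada's proposal sounds stranger than it is. Keeping contradictory weights alive resembles Bayesian model averaging, in which a learner maintains a belief over rival hypotheses and reweights them as evidence arrives instead of collapsing to a single answer. A minimal sketch, with the coin example and every number invented for illustration:

```python
# Two competing hypotheses about a coin: fair vs. heads-biased.
# Instead of committing to one, keep both and update a belief over them.
hypotheses = {"fair": 0.5, "biased": 0.8}   # P(heads) under each hypothesis
belief = {"fair": 0.5, "biased": 0.5}       # prior weight on each

def update(belief, heads):
    """Bayesian update: reweight each hypothesis by how well it predicted the flip."""
    posterior = {}
    for name, p_heads in hypotheses.items():
        likelihood = p_heads if heads else 1.0 - p_heads
        posterior[name] = belief[name] * likelihood
    total = sum(posterior.values())
    return {name: w / total for name, w in posterior.items()}

# The model "remembers that it once believed something else."
history = [belief.copy()]
for flip in [True, True, False, True, True]:  # mostly heads
    belief = update(belief, flip)
    history.append(belief.copy())

# Evidence shifts weight toward "biased", but "fair" is never erased outright.
assert belief["biased"] > belief["fair"]
assert belief["fair"] > 0.0
```

The losing hypothesis is down-weighted, never deleted, and the `history` list is exactly the internal record of past belief Ada was asking for.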

That night, unable to sleep, she slipped into the library stacks with her battered copy of The Decline and Fall. She read slowly, tracing each sentence like a relic. Gibbon described the burning of the Alexandrian Library with a kind of restrained grief.

The triumph of ignorance, he called it.

He also reserved deep scorn for the zealots who preferred dogma to documents—a scorn that felt disturbingly relevant to the algorithmic dogma that preferred prediction to history. She saw the digital age creating a new kind of fanaticism: the certainty of the perfectly optimized model. She wondered if the loss of a physical library was less tragic than the loss of the intellectual capacity to disagree with the reigning system.

She thought of a specific project she’d worked on last summer: a predictive policing algorithm trained on years of arrest data. The model was perfectly efficient at identifying high-risk neighborhoods—but it was also perfectly incapable of questioning whether the underlying data was itself a product of bias. It codified past human prejudice into future technological certainty. That, she realized, was the triumph of ignorance Gibbon had feared: reason serving bias, flawlessly.
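The failure Ada recalls has a precise mechanical form: a feedback loop in which predictions direct patrols, patrols generate arrests, and the new arrests validate the prediction. A deliberately toy simulation (neighborhood names and every number invented, resembling no real system) shows how identical true crime rates can still produce diverging risk scores:

```python
import random

random.seed(0)

# Two neighborhoods with IDENTICAL true crime rates.
TRUE_RATE = 0.3
arrests = {"A": 10, "B": 5}   # the historical record is already skewed toward A

for year in range(20):
    # The model ranks neighborhoods by past arrests and concentrates patrols there.
    riskier = max(arrests, key=arrests.get)
    patrols = {n: (80 if n == riskier else 20) for n in arrests}
    # Crime can only be recorded where someone is looking.
    for n in arrests:
        arrests[n] += sum(random.random() < TRUE_RATE for _ in range(patrols[n]))

# The initial skew is amplified, not corrected: A looks ever "riskier",
# though nothing about A's true rate differs from B's.
assert arrests["A"] > 2 * arrests["B"]
```

The model is never wrong by its own lights; it is only ever confirmed. That closed circle is what the essay means by reason serving bias, flawlessly.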

By November, she had begun to map Descartes’ dream directly onto her own field. He had wanted to rebuild knowledge from axioms, purged of doubt. AI engineers called it initializing from zero. Each model began in ignorance and improved through repetition—a mind without memory, a scholar without history.

The present age of innovation may appear to be the natural effect of the increasing progress of knowledge; but every step that is made in the improvement of reason, is likewise a step towards the decay of imagination.

She thought of her neural nets—how each iteration improved accuracy but diminished surprise. The cleaner the model, the smaller the world.

Winter pressed down. Snow fell between the Gothic spires, muffling the city. For her final paper, Ada wrote what she could no longer ignore. She called it The Fall of Interpretation.

Civilizations do not fall when their infrastructures fail. They fall when their interpretive frameworks are outsourced to systems that cannot feel.

She traced a line from Descartes to data science, from Gibbon’s defense of folly to her own field’s intolerance for it. She quoted his plea to “conserve everything preciously,” arguing that the humanities were not decorative but diagnostic—a culture’s immune system against epistemic collapse.

The machine cannot err, and therefore cannot learn.

When she turned in the essay, she added a note to herself at the top: Feels like submitting a love letter to a dead historian. A week later the professor returned it with only one comment in the margin: Gibbon for the age of AI. Keep going.

By spring, she read Gibbon the way she once read code—line by line, debugging her own assumptions. He was less historian than ethicist.

Truth and liberty support each other: by banishing error, we open the way to reason.

Yet he knew that reason without humility becomes tyranny. The archive of mistakes was the record of what it meant to be alive. The semester ended, but the disquiet didn’t. The tyranny of reason, she realized, was not imposed—it was invited. Its seduction lay in its elegance, in its promise to end the ache of uncertainty. Every engineer carried a little Descartes inside them. She had too.

After finals, she wandered north toward Science Hill. Behind the engineering labs, the server farm pulsed with a constant electrical murmur. Through the glass wall she saw the racks of processors glowing blue in the dark. The air smelled faintly of ozone and something metallic—the clean, sterile scent of perfect efficiency.

She imagined Gibbon there, candle in hand, examining the racks as if they were ruins of a future Rome.

Let us conserve everything preciously, for from the meanest facts a Montesquieu may unravel relations unknown to the vulgar.

The systems were designed to optimize forgetting, their training loops overwriting their own memory. They remembered everything and understood nothing: perfect Cartesian children.
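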

Standing there, Ada didn’t want to abandon her field; she wanted to translate it. She resolved to bring the humanities’ ethics of doubt into the language of code—to build models that could err gracefully, that could remember the uncertainty from which understanding begins. Her fight would be for the metadata of doubt: the preservation of context, irony, and intention that an algorithm so easily discards.

When she imagined the work ahead—the loneliness of it, the resistance—she thought again of Gibbon in Lausanne, surrounded by his manuscripts, writing through the night as the French Revolution smoldered below.

History is little more than the record of human vanity corrected by the hand of time.

She smiled at the quiet justice of it.

Graduation came and went. The world, as always, accelerated. But something in her had slowed. Some nights, in the lab where she now worked, when the fans subsided and the screens dimmed to black, she thought she heard a faint rhythm beneath the silence—a breathing, a candle’s flicker.

She imagined a future archaeologist decoding the remnants of a neural net, trying to understand what it had once believed. Would they see our training data as scripture? Our optimization logs as ideology? Would they wonder why we taught our machines to forget? Would they find the metadata of doubt she had fought to embed?

The duty of remembrance, she realized, was never done. For Gibbon, the only reliable constant was human folly; for the machine, it was pattern. Civilizations endure not by their monuments but by their memory of error. Gibbon’s ghost still walks ahead of us, whispering that clarity is not truth, and that the only true ruin is a civilization that has perfectly organized its own forgetting.

The fall of Rome was never just political. It was the moment the human mind mistook its own clarity for wisdom. That, in every age, is where the decline begins.

THIS ESSAY WAS WRITTEN AND EDITED UTILIZING AI

TENDER GEOMETRY

How a Texas robot named Apollo became a meditation on dignity, dependence, and the future of care.

This essay is inspired by an episode of the WSJ Bold Names podcast (September 26, 2025), in which Christopher Mims and Tim Higgins speak with Jeff Cardenas, CEO of Apptronik. While the podcast traces Apollo’s business and technical promise, this meditation follows the deeper question at the heart of humanoid robotics: what does it mean to delegate dignity itself?

By Michael Cummins, Editor, September 26, 2025


The robot stands motionless in a bright Austin lab, catching the fluorescence the way bone catches light in an X-ray—white, clinical, unblinking. Human-height, five foot eight, a little more than a hundred and fifty pounds, all clean lines and exposed joints. What matters is not the size. What matters is the task.

An engineer wheels over a geriatric training mannequin—slack limbs, paper skin, the posture of someone who has spent too many days watching the ceiling. With a gesture the engineer has practiced until it feels like superstition, he cues the robot forward.

Apollo bends.

The motors don’t roar; they murmur, like a refrigerator. A camera blinks; a wrist pivots. Aluminum fingers spread, hesitate, then—lightly, so lightly—close around the mannequin’s forearm. The lift is almost slow enough to be reverent. Apollo steadies the spine, tips the chin, makes a shelf of its palm for the tremor the mannequin doesn’t have but real people do. This is not warehouse choreography—no pallets, no conveyor belts. This is rehearsal for something harder: the geometry of tenderness.

If the mannequin stays upright, the room exhales. If Apollo’s grasp has that elusive quality—control without clench—there’s a hush you wouldn’t expect in a lab. The hush is not triumph. It is reckoning: the movement from factory floor to bedside, from productivity to intimacy, from the public square to the room where the curtains are drawn and a person is trying, stubbornly, not to be embarrassed.

Apptronik calls this horizon “assistive care.” The phrase is both clinical and audacious. It’s the third act in a rollout that starts in logistics, passes through healthcare, and ends—if it ever ends—at the bedroom door. You do not get to a sentence like that by accident. You get there because someone keeps repeating the same word until it stops sounding sentimental and starts sounding like strategy: dignity.

Jeff Cardenas is the one who says it most. He moves quickly when he talks, as if there are only so many breaths before the demo window closes, but the word slows him. Dignity. He says it with the persistence of an engineer and the stubbornness of a grandson. Both of his grandfathers were war heroes, the kind of men who could tie a rope with their eyes closed and a hand in a sling. For years they didn’t need anyone. Then, in their final seasons, they needed everyone. The bathroom became a negotiation. A shirt, an adversary. “To watch proud men forced into total dependency,” he says, “was to watch their dignity collapse.”

A robot, he thinks, can give some of that back. No sigh at 3 a.m. No opinion about the smell of a body that has been ill for too long. No making a nurse late for the next room. The machine has no ego. It does not collect small resentments. It will never tell a friend over coffee what it had to do for you. If dignity is partly autonomy, the argument goes, then autonomy might be partly engineered.

There is, of course, a domestic irony humming in the background. The week Cardenas was scheduled to sit for an interview about a future of household humanoids, a human arrived in his own household ahead of schedule: a baby girl. Two creations, two needs. One cries, one hums. One exhausts you into sleeplessness; the other promises to be tireless so you can rest. Perhaps that tension—between what we make and who we make—is the essay we keep writing in every age. It is, at minimum, the ethical prompt for the engineering to follow.

In the lab, empathy is equipment. Apollo’s body is a lattice of proprietary actuators—the muscles—and a tangle of sensors—the nerves. Cameras for eyes, force feedback in the hands, gyros whispering balance, accelerometers keeping score of every tilt. The old robots were position robots: go here, stop there, open, close, repeat until someone hit the red button. Apollo lives in a different grammar. It isn’t memorizing a path through space; it’s listening, constantly, to the body it carries and the moment it enters. It can’t afford to be brittle. Brittleness drops the cup. And the patient.

But muscle and nerve require a brain, and for that Apptronik has made a pragmatic peace with the present: Google DeepMind is the partner for the mind. A decade ago, “humanoid” was a dirty word in Mountain View—too soon, too much. Now the bet is that a robot shaped like us can learn from us, not only in principle but in practice. Generative AI, so adept at turning words into words and images into images, now tries to learn movement by watching. Show it a person steadying a frail arm. Show it again. Give it the perspective of a sensor array; let it taste gravity through a gyroscope. The hope is that the skill transfers. The hope is that the world’s largest training set—human life—can be translated into action without scripts.

This is where the prose threatens to float away on its own optimism, and where Apptronik pulls it back with a price. Less than a luxury car, they say. Under $50,000, once the supply chain exists. They like first principles—aluminum is cheap, and there are only a few hundred dollars of it in the frame. Batteries have ridden down the cost curve on the back of cars; motors rode it down on the back of drones. The math is meant to short-circuit disbelief: compassion at scale is not only possible; it may be affordable.

Not today. Today, Apollo earns its keep in the places compassion is an accounting line: warehouses and factories. The partners—GXO, Mercedes—sound like waypoints on the long gray bridge to the bedside. If the robot can move boxes without breaking a wrist, maybe it can later move a human without breaking trust. The lab keeps its metaphors comforting: a pianist running scales before attempting the nocturne. Still, the nocturne is the point.

What changes when the machine crosses a threshold and the space smells like hand soap and evening soup? Warehouse floors are taped and square; homes are not. Homes are improvisations of furniture and mood and politics. The job shifts from lifting to witnessing. A perfect employee becomes a perfect observer. Cameras are not “eyes” in a home; they are records. To invite a machine into a room is to invite a log of the room. The promise of dignity—the mercy of not asking another person to do what shames you—meets the chill of being watched perfectly.

“Trust is the long-term battle,” Cardenas says, not as a slogan but like someone naming the boss level in a game with only one life. Companies have slogans about privacy. People have rules: who gets a key, who knows where the blanket is. Does a robot get a key? Does it remember where you hide the letter from the old friend? The engineers will answer, rightly, that these are solvable problems—air-gapped systems, on-device processing, audit logs. The heart will answer, not wrongly, that solvable is not the same as solved.

Then there is the bigger shadow. Cardenas calls humanoid robotics “the space race of our time,” and the analogy is less breathless than it sounds. Space wasn’t about stars; it was about order. The Moon was a stage for policy. In this script the rocket is a humanoid—replicable labor, general-purpose motion—and the nation that deploys a million of them first rewrites the math of productivity. China has poured capital into robotics; some of its companies share data and designs in a way U.S. rivals—each a separate species in a crowded ecosystem—do not. One country is trying to build a forest; the other, a bouquet. The metaphor is unfair and therefore, in the compressed logic of arguments, persuasive.

He reduces it to a line that is either obvious or terrifying. What is an economy? Productivity per person. Change the number of productive units and you change the economy. If a robot is, in practice, a unit, it will be counted. That doesn’t make it a citizen. It makes it a denominator. And once it’s in the denominator, it is in the policy.

This is the point where the skeptic clears his throat. We have heard this promise before—in the eighties, the nineties, the 2000s. We have seen Optimus and its cousins, and the men who owned them. We know the edited video, the cropped wire, the demo that never leaves the demo. We know how stubborn carpets can be and how doors, innocent as they seem, have a way of humiliating machines.

The lab knows this better than anyone. On the third lift of the morning, Apollo’s wrist overshoots with a faint metallic snap, the servo stuttering as it corrects. The mannequin’s elbow jerks, too quick, and an engineer’s breath catches in the silence. A tiny tweak. Again. “Yes,” someone says, almost to avoid saying “please.” Again.

What keeps the room honest is not the demo. It’s the memory you carry into it. Everyone has one: a grandmother who insisted she didn’t need help until she slid to the kitchen floor and refused to call it a fall; a father who couldn’t stand the indignity of a hand on his waistband; the friend who became a quiet inventory of what he could no longer do alone. The argument for a robot at the bedside lives in those rooms—in the hour when help is heavy and kindness is too human to be invisible.

But dignity is a duet word. It means independence. It also means being treated like a person. A perfect lift that leaves you feeling handled may be less dignified than an imperfect lift performed by a nurse who knows your dog’s name and laughs at your old jokes. Some people will choose privacy over presence every time. Others want the tremor in the human hand because it’s a sign that someone is afraid to hurt them. There is a universe of ethics in that tremor.

The money is not bashful about picking a side. Investors like markets that look like graphs and revolutions that can be amortized—unlike a nurse’s memory of the patient who loved a certain song, which lingers, resists, refuses to be tallied. If a robot can deliver the “last great service”—to borrow a phrase from a theologian who wasn’t thinking of robots—it will attract capital because the service can be repeated without running out of love, patience, or hours. The price point matters not only because it makes the machine seem plausible in a catalog but because it promises a shift in who gets help. A family that cannot afford round-the-clock care might afford a tireless assistant for the night shift. The machine will not call in sick. It will not gossip. It will not quit. It will, of course, fail, and those failures will be as intimate as its successes.

There are imaginable safeguards. A local brain that forgets what it doesn’t need to know. A green light you can see when the camera is on. Clear policies about where data goes and who can ask for it and how long it lives. An emergency override you can use without being a systems administrator at three in the morning. None of these will quiet the unease entirely. Unease is the tax we pay for bringing a new witness into the house.

And yet—watch closely—the room keeps coaching the robot toward a kind of grace. Engineers insist this isn’t poetry; it’s control theory. They talk about torque and closed loops and compliance control, about the way a hand can be strong by being soft. But if you mute the jargon, you hear something else: a search for a tempo that reads as care. The difference between a shove and a support is partly physics and partly music. A breath between actions signals attention. A tiny pause at the top of the lift says: I am with you. Apollo cannot mean that. But it can perform it. When it does, the engineers get quiet in the way people do in chapels and concert halls, the secular places where we admit that precision can pass for grace and that grace is, occasionally, a kind of precision.
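That phrase, strong by being soft, has a literal reading in control theory. An impedance controller commands forces through a virtual spring and damper rather than commanding positions rigidly, so the hand yields when the world pushes back. A one-dimensional sketch under invented gains, not Apptronik's actual controller:

```python
# One-dimensional impedance control: the gripper behaves like a
# spring-damper attached to its target, not like a rigid positioner.
K = 40.0    # stiffness (N/m): a low value is what "soft" means mechanically
D = 12.0    # damping (N*s/m): smooths the approach, no overshoot, no "clench"
MASS = 2.0  # effective mass of the hand (kg)
DT = 0.001  # control-loop timestep (s)

def step(x, v, target, external_force=0.0):
    """One control tick: spring-damper force toward the target, plus the world pushing back."""
    f = K * (target - x) - D * v + external_force
    a = f / MASS
    v += a * DT
    x += v * DT
    return x, v

x, v = 0.0, 0.0
for _ in range(5000):          # 5 s of free motion toward the target
    x, v = step(x, v, target=0.1)
assert abs(x - 0.1) < 1e-3     # settles gently at the target...

for _ in range(5000):          # ...but yields when a steady 2 N push resists it
    x, v = step(x, v, target=0.1, external_force=-2.0)
assert x < 0.1                 # compliance: the arm gives way instead of fighting
```

The same push that a stiff position controller would fight is simply absorbed as displacement; lowering `K` is, in hardware, the gentleness the engineers are tuning for.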

There is an old superstition in technology: every new machine arrives with a mirror for the person who fears it most. The mirror in this lab shows two figures. In the first: a patient who would rather accept the cold touch of aluminum than the pity of a stranger. In the second: a nurse who knows that skill is not love but that love, in her line of work, often sounds like skill. The mirror does not choose. It simply refuses to lie.

The machine will steady a trembling arm, and we will learn a new word for the mix of gratitude and suspicion that touches the back of the neck when help arrives without a heartbeat. It is the geometry of tenderness, rendered in aluminum. A question with hands.

THIS ESSAY WAS WRITTEN AND EDITED UTILIZING AI

SCIENTIFIC AMERICAN MAGAZINE – OCTOBER 2025

SCIENTIFIC AMERICAN MAGAZINE: The latest issue features ‘Voyage to Nowhere’

How a Billionaire’s Plan to Reach Another Star Fell Apart

An abandoned plan to visit another star highlights the perils of billionaire-funded science

When the Rain Pours, the Mountains Move

As warming temperatures bring more extreme rain to the mountains, debris flows are on the rise

New Fossils Could Help Solve Long-standing Mystery of Bird Migration

Tiny fossils hint at when birds began making their mind-blowing journey to the Arctic to breed

THE NEW ATLANTIS — AUTUMN 2025 ISSUE

THE NEW ATLANTIS MAGAZINE: The latest issue features…

What Comes After Gender Affirmation?

Making transition the first-line treatment for children was a mistake, many health agencies now say. A growing group of psychologists wants to restore the therapeutic relationship.

Two Hundred Years to Flatten the Curve

How generations of meddlesome public health campaigns changed everyday life — and made life twice as long as it used to be

Why We Are Better Off Than a Century Ago

Our ancestors built grand public systems to conquer hunger, thirst, darkness, and squalor. That progress can be lost if we forget it.

MIT TECHNOLOGY REVIEW – SEPT/OCT 2025 PREVIEW

MIT TECHNOLOGY REVIEW: The Security issue – Security can mean national defense, but it can also mean control over data, safety from intrusion, and so much more. This issue explores the way technology, mystery, and the universe itself affect how secure we feel in the modern age.

How these two brothers became go-to experts on America’s “mystery drone” invasion

Two Long Island UFO hunters have been called upon by domestic law-enforcement agencies to investigate unexplained phenomena.

Why Trump’s “golden dome” missile defense idea is another ripped straight from the movies

President Trump has proposed building an antimissile “golden dome” around the United States. But do cinematic spectacles actually enhance national security?

Inside the hunt for the most dangerous asteroid ever

As space rock 2024 YR4 became more likely to hit Earth than anything of its size had ever been before, scientists all over the world mobilized to protect the planet.

Taiwan’s “silicon shield” could be weakening

Semiconductor powerhouse TSMC is under increasing pressure to expand abroad and play a security role for the island. Those two roles could be in tension.

Culture: New Humanist Magazine – Autumn 2025


NEW HUMANIST MAGAZINE: This issue is all about how the battle over space – playing out unseen above us – concerns us all.

Space and society

In the latest edition of our “Voices” section, we ask five experts – from scientists to philosophers – how to protect space for the benefit of all of humanity.

“When people hear the term ‘space technology’, they tend to picture rocket launches, or maybe missions to the Moon … Other types of space activity with strong social impact tend to get less attention”

The satellite war

We speak to security expert Mark Hilborne about space warfare – and how it could be the deciding factor in the conflict between Russia and Ukraine.

“The public doesn’t understand how much we rely on space as a domain of warfare”

Sexism in space

When Nasa prepared a message to aliens with the Pioneer probes in the 1970s, sexism skewed how they represented humankind. Within the next decade, we may have another chance to send a message deep into space – and this time, we must do better, writes Jess Thomson.

“Only five objects we have crafted here on Earth are now drifting towards infinity, and four of them tell a lie about half of humankind”

American alien

The new Superman movie offers the vision of a kinder, more tolerant United States – saved by an immigrant, in this case a literal alien. But should we really pin our hopes on a superhero?

“Trump has even shared photoshopped images of himself as Superman. The idea that superheroes can save us all, if we just let them break all the rules, is one that the Maga followers find congenial”

SCIENTIFIC AMERICAN MAGAZINE – SEPTEMBER 2025

Scientific American, Volume 333, Issue 2

SCIENTIFIC AMERICAN MAGAZINE: The latest issue features ‘The End of Food Allergies?’ – Life-changing therapies for peanut reactions are already here.

How Your Brain’s Nightly Cleanse Keeps It Healthy

Washing waste from the brain is an essential function of sleep—and it could help ward off dementia

By Lydia Denworth

Can Peanut Allergies Be Cured?

Maryn McKenna

What Happens When an Entire Generation of Scientists Changes Its Mind

Charles C. Mann

How Scientists Finally Learned That Nerves Regrow

Diana Kwon

Plastics Started as a Sustainability Solution. What Went Wrong?

Jen Schwartz

The Universe Is Static. No, Expanding! Wait, Slowing? Oh, Accelerating

Richard Panek

How RNA Unseated DNA as the Most Important Molecule in Your Body

Philip Ball

AI, Smartphones, and the Student Attention Crisis in U.S. Public Schools

By Michael Cummins, Editor, August 19, 2025

In a recent New York Times focus group, twelve public-school teachers described how phones, social media, and artificial intelligence have reshaped the classroom. Tom, a California biology teacher, captured the shift with unsettling clarity: “It’s part of their whole operating schema.” For many students, the smartphone is no longer a tool but an extension of self, fused with identity and cognition.

Rachel, a teacher in New Jersey, put it even more bluntly:

“They’re just waiting to just get back on their phone. It’s like class time is almost just a pause in between what they really want to be doing.”

What these teachers describe is not mere distraction but a transformation of human attention. The classroom, once imagined as a sanctuary for presence and intellectual encounter, has become a liminal space between dopamine hits. Students no longer “use” their phones; they inhabit them.

The Canadian media theorist Marshall McLuhan warned as early as the 1960s that every new medium extends the human body and reshapes perception. “The medium is the message,” he argued — meaning that the form of technology alters our thought more profoundly than its content. If the printed book once trained us to think linearly and analytically, the smartphone has restructured cognition into fragments: alert-driven, socially mediated, and algorithmically tuned.

The philosopher Sherry Turkle has documented this cultural drift in works such as Alone Together and Reclaiming Conversation. Phones, she argues, create a paradoxical intimacy: constant connection yet diminished presence. What the teachers describe in the Times focus group echoes Turkle’s findings — students are physically in class but psychically elsewhere.

This fracture has profound educational stakes. The reading brain that Maryanne Wolf has studied in Reader, Come Home — slow, deep, and integrative — is being supplanted by skimming, scanning, and swiping. And as psychologist Daniel Kahneman showed, our cognition is divided between “fast” intuitive processing (System 1) and “slow” deliberate reasoning (System 2). Phones tilt us heavily toward System 1, privileging speed and reaction over reflection and patience.

The teachers in the focus group thus reveal something larger than classroom management woes: they describe a civilizational shift in the ecology of human attention. To understand what’s at stake, we must see the smartphone not simply as a device but as a prosthetic self — an appendage of memory, identity, and agency. And we must ask, with urgency, whether education can still cultivate wisdom in a world of perpetual distraction.


The Collapse of Presence

The first crisis that phones introduce into the classroom is the erosion of presence. Presence is not just physical attendance but the attunement of mind and spirit to a shared moment. Teachers have always battled distraction — doodles, whispers, glances out the window — but never before has distraction been engineered with billion-dollar precision.

Platforms like TikTok and Instagram are not neutral diversions; they are laboratories of persuasion designed to hijack attention. Tristan Harris, a former Google ethicist, has described them as slot machines in our pockets, each swipe promising another dopamine jackpot. For a student seated in a fluorescent-lit classroom, the contest is hopelessly uneven: Shakespeare or stoichiometry cannot compete with an infinite feed of personalized spectacle.

McLuhan’s insight about “extensions of man” takes on new urgency here. If the book extended the eye and trained the linear mind, the phone extends the nervous system itself, embedding the individual into a perpetual flow of stimuli. Students who describe feeling “naked without their phone” are not indulging in metaphor — they are articulating the visceral truth of prosthesis.

The pandemic deepened this fracture. During remote learning, students learned to toggle between school tabs and entertainment tabs, multitasking as survival. Now, back in physical classrooms, many have not relearned how to sit with boredom, struggle, or silence. Teachers describe students panicking when asked to read even a page without their phones nearby.

Maryanne Wolf’s neuroscience offers a stark warning: when the brain is rewired for scanning and skimming, the capacity for deep reading — for inhabiting complex narratives, empathizing with characters, or grappling with ambiguity — atrophies. What is lost is not just literary skill but the very neurological substrate of reflection.

Presence is no longer the default of the classroom but a countercultural achievement.

And here Kahneman’s framework becomes crucial. Education traditionally cultivates System 2 — the slow, effortful reasoning needed for mathematics, philosophy, or moral deliberation. But phones condition System 1: reactive, fast, emotionally charged. The result is a generation fluent in intuition but impoverished in deliberation.


The Wild West of AI

If phones fragment attention, artificial intelligence complicates authorship and authenticity. For teachers, the challenge is no longer merely whether a student has done the homework but whether the “student” is even the author at all.

ChatGPT and its successors have entered the classroom like a silent revolution. Students can generate essays, lab reports, even poetry in seconds. For some, this is liberation: a way to bypass drudgery and focus on synthesis. For others, it is a temptation to outsource thinking altogether.

Sherry Turkle’s concept of “simulation” is instructive here. In Simulation and Its Discontents, she describes how scientists and engineers, once trained on physical materials, now learn through computer models — and in the process, risk confusing the model for reality. In classrooms, AI creates a similar slippage: simulated thought that masquerades as student thought.

Teachers in the Times focus group voiced this anxiety. One noted: “You don’t know if they wrote it, or if it’s ChatGPT.” Assessment becomes not only a question of accuracy but of authenticity. What does it mean to grade an essay if the essay may be an algorithmic pastiche?

The comparison with earlier technologies is tempting. Calculators once threatened arithmetic; Wikipedia once threatened memorization. But AI is categorically different. A calculator does not claim to “think”; Wikipedia does not pretend to be you. Generative AI blurs authorship itself, eroding the very link between student, process, and product.

And yet, as McLuhan would remind us, every technology contains both peril and possibility. AI could be framed not as a substitute but as a collaborator — a partner in inquiry that scaffolds learning rather than replaces it. Teachers who integrate AI transparently, asking students to annotate or critique its outputs, may yet reclaim it as a tool for System 2 reasoning.

The danger is not that students will think less but that they will mistake machine fluency for their own voice.

But the Wild West remains. Until schools articulate norms, AI risks widening the gap between performance and understanding, appearance and reality.


The Inequality of Attention

Phones and AI do not distribute their burdens equally. The third crisis teachers describe is an inequality of attention that maps onto existing social divides.

Affluent families increasingly send their children to private or charter schools that restrict or ban phones altogether. At such schools, presence becomes a protected resource, and students experience something closer to the traditional “deep time” of education. Meanwhile, underfunded public schools are often powerless to enforce bans, leaving students marooned in a sea of distraction.

This disparity mirrors what sociologist Pierre Bourdieu called cultural capital — the non-financial assets that confer advantage, from language to habits of attention. In the digital era, the ability to disconnect becomes the ultimate form of privilege. To be shielded from distraction is to be granted access to focus, patience, and the deep literacy that Wolf describes.

Teachers in lower-income districts report students who cannot imagine life without phones, who measure self-worth in likes and streaks. For them, literacy itself feels like an alien demand — why labor through a novel when affirmation is instant online?

Maryanne Wolf warns that we are drifting toward a bifurcated literacy society: one in which elites preserve the capacity for deep reading while the majority are confined to surface skimming. The consequences for democracy are chilling. A polity trained only in System 1 thinking will be perpetually vulnerable to manipulation, propaganda, and authoritarian appeals.

The inequality of attention may prove more consequential than the inequality of income.

If democracy depends on citizens capable of deliberation, empathy, and historical memory, then the erosion of deep literacy is not a classroom problem but a civic emergency. Education cannot be reduced to test scores or job readiness; it is the training ground of the democratic imagination. And when that imagination is fractured by perpetual distraction, the republic itself trembles.


Reclaiming Focus in the Classroom

What, then, is to be done? The teachers’ testimonies, amplified by McLuhan, Turkle, Wolf, and Kahneman, might lead us toward despair. Phones colonize attention; AI destabilizes authorship; inequality corrodes the very ground of democracy. But despair is itself a form of surrender, and teachers cannot afford surrender.

Hope begins with clarity. We must name the problem not as “kids these days” but as a structural transformation of attention. To expect students to resist billion-dollar platforms alone is naive; schools must become countercultural sanctuaries where presence is cultivated as deliberately as literacy.

Practical steps follow. Schools can implement phone-free policies, not as punishment but as liberation — an invitation to reclaim time. Teachers can design “slow pedagogy” moments: extended reading, unbroken dialogue, silent reflection. AI can be reframed as a tool for meta-cognition, with students asked not merely to use it but to critique it, to compare its fluency with their own evolving voice.

Above all, we must remember that education is not simply about information transfer but about formation of the self. McLuhan’s dictum reminds us that the medium reshapes the student as much as the message. If we allow the medium of the phone to dominate uncritically, we should not be surprised when students emerge fragmented, reactive, and estranged from presence.

And yet, history offers reassurance. Plato once feared that writing itself would erode memory; medieval teachers once feared the printing press would dilute authority. Each medium reshaped thought, but each also produced new forms of creativity, knowledge, and freedom. The task is not to romanticize the past but to steward the present wisely.

Hannah Arendt, reflecting on education, insisted that every generation is responsible for introducing the young to the world as it is — flawed, fragile, yet redeemable. To abdicate that responsibility is to abandon both children and the world itself. Teachers today, facing the prosthetic selves of their students, are engaged in precisely this work: holding open the possibility of presence, of deep thought, of human encounter, against the centrifugal pull of the screen.

Education is the wager that presence can be cultivated even in an age of absence.

In the end, phones may be prosthetic selves — but they need not be destiny. The prosthesis can be acknowledged, critiqued, even integrated into a richer conception of the human. What matters is that students come to see themselves not as appendages of the machine but as agents capable of reflection, relationship, and wisdom.

The future of education — and perhaps democracy itself — depends on this wager. That in classrooms across America, teachers and students together might still choose presence over distraction, depth over skimming, authenticity over simulation. It is a fragile hope, but a necessary one.

THIS ESSAY WAS WRITTEN AND EDITED UTILIZING AI

Responsive Elegance: How AI Is Rewriting the Code of Luxury Fashion
From Prada’s neural silhouettes to Hermès’ algorithmic resistance, a new aesthetic regime emerges—where beauty is no longer just crafted, but computed.

By Michael Cummins, Editor, August 18, 2025

The atelier no longer glows with candlelight, nor hums with the quiet labor of hand-stitching—it pulses with data. Fashion, once the domain of intuition, ritual, and artisanal mastery, is being reshaped by artificial intelligence. Algorithms now whisper what beauty should look like, trained not on muses but on millions of images, trends, and cultural signals. The designer’s sketchbook has become a neural network; the runway, a reflection of predictive modeling—beauty, now rendered in code.

This transformation is not speculative—it’s unfolding in real time. Prada has explored AI tools to remix archival silhouettes with contemporary streetwear aesthetics. Burberry uses machine learning to forecast regional preferences and tailor collections to cultural nuance. LVMH, the world’s largest luxury conglomerate, has declared AI a strategic infrastructure, integrating it across its seventy-five maisons to optimize supply chains, personalize client experiences, and assist in creative ideation. Meanwhile, Hermès resists the wave, preserving opacity, restraint, and human discretion.

At the heart of this shift are two interlocking innovations: generative design, where AI produces visual forms based on input parameters, and predictive styling, which anticipates consumer desires through data. Together, they mark a new aesthetic regime—responsive elegance—where beauty is calibrated to cultural mood and optimized for relevance.

But what is lost in this optimization? Can algorithmic chic retain the aura of the original? Does prediction flatten surprise?

Generative Design & Predictive Styling: Fashion’s New Operating System

Generative design and predictive styling are not mere tools—they are provocations. They challenge the very foundations of fashion’s creative process, shifting the locus of authorship from the human hand to the algorithmic eye.

Generative design uses neural networks and evolutionary algorithms to produce visual outputs based on input parameters. In fashion, this means feeding the machine with data: historical collections, regional aesthetics, streetwear archives, and abstract mood descriptors. The algorithm then generates design options that reflect emergent patterns and cultural resonance.

Prada, known for its intellectual rigor, has experimented with such approaches. Analysts at Business of Fashion note that AI-driven archival remixing allows Prada to analyze past collections and filter them through contemporary preference data, producing silhouettes that feel both nostalgic and hyper-contemporary. A 1990s-inspired line recently drew on East Asian streetwear influences, creating garments that seemed to arrive from both memory and futurity at once.

Predictive styling, meanwhile, anticipates consumer desires by analyzing social media sentiment, purchasing behavior, influencer trends, and regional aesthetics. Burberry employs such tools to refine color palettes and silhouettes by geography: muted earth tones for Scandinavian markets, tailored minimalism for East Asian consumers. As Burberry’s Chief Digital Officer Rachel Waller told Vogue Business, “AI lets us listen to what customers are already telling us in ways no survey could capture.”

A McKinsey & Company 2024 report concluded:

“Generative AI is not just automation—it’s augmentation. It gives creatives the tools to experiment faster, freeing them to focus on what only humans can do.”

Yet this feedback loop—designing for what is already emerging—raises philosophical questions. Does prediction flatten originality? If fashion becomes a mirror of desire, does it lose its capacity to provoke?

Walter Benjamin, in The Work of Art in the Age of Mechanical Reproduction (1936), warned that mechanical replication erodes the ‘aura’—the singular presence of an artwork in time and space. In AI fashion, the aura is not lost—it is simulated, curated, and reassembled from data. The designer becomes less an originator than a selector of algorithmic possibility.

Still, there is poetry in this logic. Responsive elegance reflects the zeitgeist, translating cultural mood into material form. It is a mirror of collective desire, shaped by both human intuition and machine cognition. The challenge is to ensure that this beauty remains not only relevant—but resonant.

LVMH vs. Hermès: Two Philosophies of Luxury in the Algorithmic Age

The tension between responsive elegance and timeless restraint is embodied in the divergent strategies of LVMH and Hermès—two titans of luxury, each offering a distinct vision of beauty in the age of AI.

LVMH has embraced artificial intelligence as strategic infrastructure. In 2023, it announced a deep partnership with Google Cloud, creating a sophisticated platform that integrates AI across its seventy-five maisons. Louis Vuitton uses generative design to remix archival motifs with trend data. Sephora curates personalized product bundles through machine learning. Dom Pérignon experiments with immersive digital storytelling and packaging design based on cultural sentiment.

Franck Le Moal, LVMH’s Chief Information Officer, describes the conglomerate’s approach as “weaving together data and AI that connects the digital and store experiences, all while being seamless and invisible.” The goal is not automation for its own sake, but augmentation of the luxury experience—empowering client advisors, deepening emotional resonance, and enhancing agility.

As Forbes observed in 2024:

“LVMH sees the AI challenge for luxury not as a technological one, but as a human one. The brands prosper on authenticity and person-to-person connection. Irresponsible use of GenAI can threaten that.”

Hermès, by contrast, resists the algorithmic tide. Its brand strategy is built on restraint, consistency, and long-term value. Hermès avoids e-commerce for many products, limits advertising, and maintains a deliberately opaque supply chain. While it uses AI for logistics and internal operations, it does not foreground AI in client experiences. Its mystique depends on human discretion, not algorithmic prediction.

As Chaotropy’s Luxury Analysis 2025 put it:

“Hermès is not only immune to the coming tsunami of technological innovation—it may benefit from it. In an era of automation, scarcity and craftsmanship become more desirable.”

These two models reflect deeper aesthetic divides. LVMH offers responsive elegance—beauty that adapts to us. Hermès offers elusive beauty—beauty that asks us to adapt to it. One is immersive, scalable, and optimized; the other opaque, ritualistic, and human-centered.

When Machines Dream in Silk: Speculative Futures of AI Luxury

If today’s AI fashion is co-authored, tomorrow’s may be autonomous. As generative design and predictive styling evolve, we inch closer to a future where products are not just assisted by AI—but entirely designed by it.

Louis Vuitton’s “Sentiment Handbag” scrapes global sentiment to reflect the emotional climate of the world. Iridescent textures for optimism, protective silhouettes for anxiety. Fashion becomes emotional cartography.

Sephora’s “AI Skin Atlas” tailors skincare to micro-geographies and genetic lineages. Packaging, scent, and texture resonate with local rituals and biological needs.

Dom Pérignon’s “Algorithmic Vintage” blends champagne based on predictive modeling of soil, weather, and taste profiles. Terroir meets tensor flow.

TAG Heuer’s Smart-AI Timepiece adapts its face to your stress levels and calendar. A watch that doesn’t just tell time—it tells mood.

Bulgari’s AR-enhanced jewelry refracts algorithmic lightplay through centuries of tradition. Heritage collapses into spectacle.

These speculative products reflect a future where responsive elegance becomes autonomous elegance. Designers may become philosopher-curators—stewards of sensibility, shaping not just what the machine sees, but what it dares to feel.

Yet ethical concerns loom. A 2025 study by Amity University warned:

“AI-generated aesthetics challenge traditional modes of design expression and raise unresolved questions about authorship, originality, and cultural integrity.”

To address these risks, the proposed F.A.S.H.I.O.N. AI Ethics Framework suggests principles like Fair Credit, Authentic Context, and Human-Centric Design. These frameworks aim to preserve dignity in design, ensuring that beauty remains not just a product of data, but a reflection of cultural care.

The Algorithm in the Boutique: Two Journeys, Two Futures

In 2030, a woman enters the Louis Vuitton flagship on the Champs-Élysées. The store AI recognizes her walk, gestures, and biometric stress markers. Her past purchases, Instagram aesthetic, and travel itineraries have been quietly parsed. She’s shown a handbag designed for her demographic cluster—and a speculative “future bag” generated from global sentiment. Augmented reality mirrors shift its hue based on fashion chatter.

Across town, a man steps into Hermès on Rue du Faubourg Saint-Honoré. No AI overlay. No predictive styling. He waits while a human advisor retrieves three options from the back room. Scarcity is preserved. Opacity enforced. Beauty demands patience, loyalty, and reverence.

Responsive elegance personalizes. Timeless restraint universalizes. One anticipates. The other withholds.

Ethical Horizons: Data, Desire, and Dignity

As AI saturates luxury, the ethical stakes grow sharper:

Privacy or Surveillance? Luxury thrives on intimacy, but when biometric and behavioral data feed design, where is the line between service and intrusion? A handbag tailored to your mood may delight—but what if that mood was inferred from stress markers you didn’t consent to share?

Cultural Reverence or Algorithmic Appropriation? Algorithms trained on global aesthetics may inadvertently exploit indigenous or marginalized designs without context or consent. This risk echoes past critiques of fast fashion—but now at algorithmic speed, and with the veneer of personalization.

Crafted Scarcity or Generative Excess? Hermès’ commitment to craft-based scarcity stands in contrast to AI’s generative abundance. What happens to luxury when it becomes infinitely reproducible? Does the aura of exclusivity dissolve when beauty is just another output stream?

Philosopher Byung-Chul Han, in The Transparency Society (2012), warns:

“When everything is transparent, nothing is erotic.”

Han’s critique of transparency culture reminds us that the erotic—the mysterious, the withheld—is eroded by algorithmic exposure. In luxury, opacity is not inefficiency—it is seduction. The challenge for fashion is to preserve mystery in an age that demands metrics.

Fashion’s New Frontier


Fashion has always been a mirror of its time. In the age of artificial intelligence, that mirror becomes a sensor—reading cultural mood, forecasting desire, and generating beauty optimized for relevance. Generative design and predictive styling are not just innovations; they are provocations. They reconfigure creativity, decentralize authorship, and introduce a new aesthetic logic.

Yet as fashion becomes increasingly responsive, it risks losing its capacity for rupture—for the unexpected, the irrational, the sublime. When beauty is calibrated to what is already emerging, it may cease to surprise. The algorithm designs for resonance, not resistance. It reflects desire, but does it provoke it?

The contrast between LVMH and Hermès reveals two futures. One immersive, scalable, and optimized; the other opaque, ritualistic, and elusive. These are not just business strategies—they are aesthetic philosophies. They ask us to choose between relevance and reverence, between immediacy and depth.

As AI evolves, fashion must ask deeper questions. Can responsive elegance coexist with emotional gravity? Can algorithmic chic retain the aura of the original? Will future designers be curators of machine imagination—or custodians of human mystery?

Perhaps the most urgent question is not what AI can do, but what it should be allowed to shape. Should it design garments that reflect our moods, or challenge them? Should it optimize beauty for engagement, or preserve it as a site of contemplation? In a world increasingly governed by prediction, the most radical gesture may be to remain unpredictable.

The future of fashion may lie in hybrid forms—where machine cognition enhances human intuition, and where data-driven relevance coexists with poetic restraint. Designers may become philosophers of form, guiding algorithms not toward efficiency, but toward meaning.

In this new frontier, fashion is no longer just what we wear. It is how we think, how we feel, how we respond to a world in flux. And in that response—whether crafted by hand or generated by code—beauty must remain not only timely, but timeless. Not only visible, but visceral. Not only predicted, but profoundly imagined.

THIS ESSAY WAS WRITTEN AND EDITED UTILIZING AI

REBUILDING A BROKEN PATH FROM BOYHOOD TO MAN

By Michael Cummins, Editor, August 14, 2025

Imagine a world where, in a single decade, half the laughter shared between friends vanishes. Imagine a childhood where time spent outdoors is cut by a third and the developmental benefits of reading are diminished by two-thirds. This is not a dystopian fantasy. According to social psychologist Jonathan Haidt, speaking on the “Prof G Pod” with Scott Galloway in an episode published on August 14, 2025, it is the stark reality for a generation that has been systematically disconnected from the real world and shackled to the virtual. “We have overprotected our children in the real world,” Haidt argues, “and underprotected them in the virtual world.”

This profound dislocation is the epicenter of a “perfect storm” disproportionately harming boys and young men—a crisis fueled by predatory technology, economic precarity, and the collapse of institutions that once guided them into manhood. It is a crisis, as a growing chorus of thinkers like Haidt, Brookings scholar Richard Reeves, and professor Scott Galloway have illuminated, born not from a single cause, but from a collective, intergenerational failure. It is a betrayal of the implicit promise that each generation will leave the world better for the next, a promise broken by a society that has become, in Galloway’s stark assessment, “a generation of takers, not givers.”

The Digital Dislocation: A Generation Adrift Online

The most abrupt change to the landscape of youth has been technological. Haidt identifies the years between 2010 and 2015 as the “pivot point” when a “play-based childhood” was supplanted by a “phone-based childhood.” This was not a simple evolution from the television sets of the past. The smartphone is a uniquely invasive tool—a supercomputer delivering constant, algorithmically curated interruptions. It extracts data on its user’s deepest desires while creating a feedback loop of social comparison and judgment, resulting in a documented catastrophe for mental health. It is no coincidence that between 2010 and 2021, the suicide rate for American boys aged 10-14 nearly tripled, according to CDC data highlighted by Haidt.

The Lure of the Manosphere

This digital vacuum has been eagerly filled by what Scott Galloway calls the “great white sharks” of the tech industry. The most insidious outcome of their engagement-at-all-costs model is the weaponization of social validation into a system of industrialized shame. “Imagine growing up in a minefield,” Haidt suggests. “You would walk really carefully.” This pervasive fear suppresses healthy risk-taking, a crucial component of adolescent development, particularly for boys who learn competence through trial, error, and recovery.

This isolation is especially damaging for boys who, as scholar Warren Farrell argues, already suffer from a crisis of “dad-deprivation” and a lack of positive male mentorship. “A boy’s search for a father,” Farrell writes in The Boy Crisis, “is a search for a purpose-driven life.” Into this void step not fathers or coaches, but the algorithmic sirens of the “manosphere.” These figures thrive because they offer a counterfeit version of the very thing Farrell identifies as missing: a strong, authoritative male voice providing direction, however misguided. Figures like Andrew Tate have built empires by offering lonely or insecure young men a seductive, off-the-shelf identity, often paired with dubious get-rich-quick schemes that prey directly on their economic anxieties. The algorithms on platforms like TikTok and YouTube are ruthlessly efficient, creating a pipeline that can push a boy from mainstream gaming content to nihilistic or misogynistic ideologies in a matter of weeks. This is not a moral failing of young men; it is the predictable result of a human need for guidance meeting a machine optimized for radicalizing engagement.

The Economic Squeeze: A Broken Promise of Prosperity

This digital betrayal is compounded by an economic one, as the foundational promises of prosperity have been broken for an entire generation. The traditional path to stability—education, career, family, homeownership—has become fractured. As Galloway argues, older generations have effectively “figured out that the downside of democracy is that old people… can continue to vote themselves more money,” leaving the young to face a brutal housing market and stagnant wages. He describes it as a conscious “pulling up of the ladder,” where asset inflation benefits the old at the direct expense of the young.

From Precarious Work to Deaths of Despair

This economic anxiety shatters the “get rich slowly” ethos and replaces it with a desperate search for a shortcut. And in 2018, the state effectively handed this desperate generation a loaded gun in the form of frictionless, legalized sports betting. The Supreme Court decision placed, as Reeves describes it, a “casino in everyone’s pocket,” making gambling dangerously accessible to a demographic of young men who are biologically more prone to risk-taking and socially more isolated than ever. The statistics are damning: young men are the fastest-growing group of problem gamblers, and in states that legalize online betting, bankruptcy filings often spike.

The consequences are existential. This trend is the leading edge of the “deaths of despair” phenomenon identified by economists Anne Case and Angus Deaton, who documented rising mortality among men without college degrees from suicide, overdose, and alcohol-related illness. Their research concluded these deaths were “less about the sting of poverty and more about the pain of a life without meaning.” When a young man, steeped in economic anxiety and disconnected from real-world support, takes a huge financial risk and fails, the shame can be unbearable. Haidt delivers a chillingly direct warning of the foreseeable consequences: “you’re gonna have dead young men.”

The Social Vacuum: An Abandonment of Guidance and Guardrails

Underpinning both the technological and economic crises is a deeper social one: the systematic dismantling of the institutions, norms, and rituals that once guided boys into healthy manhood. Society has become deinstitutionalized, removing the “guardrails” that once channeled youthful energy.

The Crisis in the Classroom

This is acutely visible in education. The modern classroom, with its emphasis on quiet compliance and verbal-emotive skills, is often a poor fit for the learning styles more common in boys. As author Christina Hoff Sommers has argued for years, “For more than a decade, our schools have been enforcing a zero-tolerance policy for any behavior that suggests boyishness.” The result is a widening gender gap at every level. Women now earn nearly 60% of all bachelor’s degrees in the U.S. Boys are more likely to be diagnosed with a learning disability, more likely to face disciplinary action, and have largely abandoned reading for pleasure. We are, in effect, pathologizing boyhood and then wondering why boys are checking out of school.

The Search for Structure

This deinstitutionalization extends beyond the schoolhouse. The decline of institutions like the Boy Scouts, whose membership has plummeted in recent decades, local sports leagues, and church groups has removed arenas for mentorship and character formation. From an anthropological perspective, this is a catastrophic failure. “Wherever you have initiation rites,” Haidt notes, “they’re always harsher, stricter, tougher for boys because it’s a much bigger jump to turn a boy into a man.” This journey requires structure, discipline, and challenge. Yet modern society, in its quest for safety, has stripped away opportunities for healthy risk, leaving boys to “just vegetate.”

Into this vacuum has rushed a toxic cultural narrative that pits the sexes against each other. But the hunger for meaning has not disappeared. Reeves’s powerful anecdote of visiting a Latin Mass in Denver on a Sunday night and finding it “full of young men, most of them on their own,” speaks volumes. They are not seeking chaos; they are desperately searching for “structure and discipline and purpose and institutions that will help them become men.” They are looking for the very things society has stopped providing.

Forging a New Path: A Framework for Renewal

Recognizing this betrayal is the first step. The next is to act. This requires moving past the gender wars and embracing a bold, pro-social agenda to rebuild the structures that turn boys into thriving men.

1. Rebuild the Guardrails: Institutional and Economic Solutions

The most immediate need is to create viable, non-collegiate pathways to success and dignity. We must champion a massive expansion of vocational and technical education, celebrating the mastery of a trade as equal in status to a four-year degree. As Mike Rowe, a vocal advocate for skilled labor, has stated, “We are lending money we don’t have to kids who can’t pay it back to train them for jobs that no longer exist. That’s nuts.” Imagine a modern Civilian Conservation Corps, where young men from all backgrounds work side-by-side to rebuild crumbling infrastructure or restore national parks—learning a trade while forging bonds of shared purpose and earning a tangible stake in the country they are helping to build.

2. Create Modern Rites of Passage: Community and Mentorship

Communities must step into the void left by failing institutions. This means a national push to fund and expand mentorship programs. Research from MENTOR National shows that at-risk youth with a mentor are 55% more likely to enroll in college and 130% more likely to hold leadership positions. It means local leaders creating their own modern “rites of passage”—challenging, team-based programs that teach resilience, problem-solving, and civic responsibility through tangible projects. As Reeves bluntly puts it, “pain produces growth,” and we must reintroduce healthy, structured struggle into the lives of boys.

3. A Pro-Social Vision: Redefining Honorable Masculinity

The most crucial task is cultural. We must stop telling boys that their innate nature is toxic and instead offer them a noble vision of what it can become. We must define honorable manhood not as domination or material wealth, but as competence, responsibility, and protectiveness. This means redefining competence not just as physical strength, but as technical skill, emotional regulation, and intellectual curiosity. It means redefining protectiveness not just against physical threats, but against the digital and psychological dangers that poison our discourse and harm the vulnerable. It is a masculinity defined by what it builds and who it cares for—the courage to be a provider for one’s family, a pillar of one’s community, and a steward of a just society.

Conclusion: Repairing the Intergenerational Compact

We have stranded a generation of boys in a digital “Guyland,” a perilous limbo between a childhood they were forced to abandon and an adulthood they see no clear path to reaching. We have told them their natural instincts are a problem while simultaneously exposing them to the most predatory, high-risk temptations ever devised. This is more than a crisis; it is a profound societal malpractice.

The choice we face is stark. We can continue our slide into a zero-sum society of horizontal, gendered conflict, or we can recognize this crisis for what it is: a vertical, intergenerational failure that harms everyone. We must have the courage to declare that the well-being of our sons is not in opposition to the well-being of our daughters. As Richard Reeves has said, the goal is to “get to a world which is better for both men and women.” This is not a zero-sum game; it is a positive-sum imperative.

This requires a new intergenerational compact, one rooted in action, not grievance. It demands we stop pathologizing boyhood and start building the institutions that mold it. It requires that we offer our young men not frictionless temptation, but meaningful struggle. It insists that we provide them not with algorithmic influencers, but with real-world mentors who can show them the path to an honorable life.

The hour is late, and the damage is deep. But in the quiet hunger of young men for purpose, in the fierce love of parents for their children, and in the courage of thinkers willing to speak uncomfortable truths, lies the hope that we can yet forge a new path. The work is not to turn back the clock, but to build a better future—one where we finally keep our promise to the next generation.

THIS ESSAY WAS WRITTEN AND EDITED UTILIZING AI