Tag Archives: Machine Learning

Review: AI, Apathy, and the Arsenal of Democracy

Dexter Filkins is a Pulitzer Prize-winning American journalist and author, known for his extensive reporting on the wars in Afghanistan and Iraq. He is currently a staff writer for The New Yorker and the author of the book “The Forever War”, which chronicles his experiences reporting from these conflict zones.

Is the United States truly ready for the seismic shift in modern warfare—a transformation that The New Yorker’s veteran war correspondent describes not as evolution but as rupture? In “Is the U.S. Ready for the Next War?” (July 14, 2025), Dexter Filkins captures this tectonic realignment through a mosaic of battlefield reportage, strategic insight, and ethical reflection. His central thesis is both urgent and unsettling: that America, long mythologized for its martial supremacy, is culturally and institutionally unprepared for the emerging realities of war. The enemy is no longer just a rival state but also time itself—conflict is being rewritten in code, and the old machines can no longer keep pace.

The piece opens with a gripping image: a Ukrainian drone factory producing a thousand airborne machines daily, each costing just $500. Improvised, nimble, and devastating, these drones have inflicted disproportionate damage on Russian forces. Their success signals a paradigm shift—conflict has moved from regiments to swarms, from steel to software. Yet the deeper concern is not merely technological; it is cultural. The article is less a call to arms than a call to reimagine. Victory in future wars, it suggests, will depend not on weaponry alone, but on judgment, agility, and a conscience fit for the digital age.

Speed and Fragmentation: The Collision of Cultures

At the heart of the analysis lies a confrontation between two worldviews. On one side stands Silicon Valley—fast, improvisational, and software-driven. On the other: the Pentagon—layered, cautious, and locked in Cold War-era processes. One of the central figures is Palmer Luckey, the founder of the defense tech company Anduril, depicted as a symbol of insurgent innovation. Once a video game prodigy, he now leads teams designing autonomous weapons that can be manufactured as quickly as IKEA furniture and deployed without extensive oversight. His world thrives on rapid iteration, where warfare is treated like code—modular, scalable, and adaptive.

This approach clashes with the military’s entrenched bureaucracy. Procurement cycles stretch for years. Communication between service branches remains fractured. Even American ships and planes often operate on incompatible systems. A war simulation over Taiwan underscores this dysfunction: satellites failed to coordinate with aircraft, naval assets couldn’t link with space-based systems, and U.S. forces were paralyzed by their own institutional fragmentation. The problem wasn’t technology—it was organization.

What emerges is a portrait of a defense apparatus unable to act as a coherent whole. The fragmentation stems from a structure built for another era—one that now privileges process over flexibility. In contrast, adversaries operate with fluidity, leveraging technological agility as a force multiplier. Slowness, once a symptom of deliberation, has become a strategic liability.

The tension explored here is more than operational; it is civilizational. Can a democratic state tolerate the speed and autonomy now required in combat? Can institutions built for deliberation respond in milliseconds? These are not just questions of infrastructure, but of governance and identity. In the coming conflicts, latency may be lethal, and fragmentation fatal.

Imagination Under Pressure: Lessons from History

To frame the stakes, the essay draws on powerful historical precedents. Technological transformation has always arisen from moments of existential pressure: Prussia’s use of railways to reimagine logistics, the Gulf War’s precision missiles, and, most profoundly, the Manhattan Project. These were not the products of administrative order but of chaotic urgency, unleashed imagination, and institutional risk-taking.

During the Manhattan Project, multiple experimental paths were pursued simultaneously, protocols were bent, and innovation surged from competition. Today, however, America’s defense culture has shifted toward procedural conservatism. Risk is minimized; innovation is formalized. Bureaucracy may protect against error, but it also stifles the volatility that made American defense dynamic in the past.

This critique extends beyond the military. A broader cultural stagnation is implied: a nation that fears disruption more than defeat. If imagination is outsourced to private startups—entities beyond the reach of democratic accountability—strategic coherence may erode. Tactical agility cannot compensate for an atrophied civic center. The essay doesn’t argue for scrapping government institutions, but for reigniting their creative core. Defense must not only be efficient; it must be intellectually alive.

Machines, Morality, and the Shrinking Space for Judgment

Perhaps the most haunting dimension of the essay lies in its treatment of ethics. As autonomous systems proliferate—from loitering drones to AI-driven targeting software—the space for human judgment begins to vanish. Some militaries, like Israel’s, still preserve a “human-in-the-loop” model where a person retains final authority. But this safeguard is fragile. The march toward autonomy is relentless.

The implications are grave. When decisions to kill are handed to algorithms trained on probability and sensor data, who bears responsibility? Engineers? Programmers? Military officers? The author references DeepMind’s Demis Hassabis, who warns of the ease with which powerful systems can be repurposed for malign ends. Yet the more chilling possibility is not malevolence, but moral atrophy: a world where judgment is no longer expected or practiced.

Combat, if rendered frictionless and remote, may also become civically invisible. Democratic oversight depends on consequence—and when warfare is managed through silent systems and distant screens, that consequence becomes harder to feel. A nation that no longer confronts the human cost of its defense decisions risks sliding into apathy. Autonomy may bring tactical superiority, but also ethical drift.

Throughout, the article avoids hysteria, opting instead for measured reflection. Its central moral question is timeless: Can conscience survive velocity? In wars of machines, will there still be room for the deliberation that defines democratic life?

The Republic in the Mirror: A Final Reflection

The closing argument is not tactical, but philosophical. Readiness, the essay insists, must be measured not just by stockpiles or software, but by the moral posture of a society—its ability to govern the tools it creates. Military power divorced from democratic deliberation is not strength, but fragility. Supremacy must be earned anew, through foresight, imagination, and accountability.

The challenge ahead is not just to match adversaries in drones or data, but to uphold the principles that give those tools meaning. Institutions must be built to respond, but also to reflect. Weapons must be precise—but judgment must be present. The republic’s defense must operate at the speed of code while staying rooted in the values of a self-governing people.

The author leaves us with a final provocation: The future will not wait for consensus—but neither can it be left to systems that have forgotten how to ask questions. In this, his work becomes less a study in strategy than a meditation on civic responsibility. The real arsenal is not material—it is ethical. And readiness begins not in the factories of drones, but in the minds that decide when and why to use them.

THIS ESSAY REVIEW WAS WRITTEN BY AI AND EDITED BY INTELLICUREAN.

Cover Previews: Nature Magazine – December 2

Volume 600 Issue 7887, 2 December 2021

Science: Endometriosis Insights, Deep Learning That Predicts RNA Folding

News Intern Rachel Fritts talks with host Sarah Crespi about a new way to think about endometriosis—a painful condition found in one in 10 women in which tissue that normally lines the uterus grows on the outside of the uterus and can bind to other organs.

Next, Raphael Townshend, founder and CEO of Atomic AI, talks about predicting RNA folding using deep learning—a machine learning approach that relies on very few examples and limited data.

Finally, in this month’s edition of our limited series on race and science, guest host and journalist Angela Saini is joined by author Lundy Braun, professor of pathology and laboratory medicine and Africana studies at Brown University, to discuss her book: Breathing Race into the Machine: The Surprising Career of the Spirometer from Plantation to Genetics.

Studies: ‘Coffee’ – Machine Learning Review Shows Benefits Of Drinking It

“It may be good for you,” says Dariush Mozaffarian, dean of the Friedman School of Nutrition Science and Policy at Tufts University. “I think we can say with good certainty it’s not bad for you.” (Additives are another story.)

After the link between coffee intake and a reduced risk of heart failure appeared in the Framingham data, Kao confirmed the result by using the same machine-learning algorithm to correctly predict the relationship between coffee intake and heart failure in two other respected data sets. Kosorok describes the approach as “thoughtful” and says that it “seems like pretty good evidence.”

Should you drink coffee? If so, how much? These seem like questions that a society able to create vaccines for a new respiratory virus within a year should have no trouble answering. And yet the scientific literature on coffee illustrates a frustration that readers, not to mention plenty of researchers, have with nutrition studies: The conclusions are always changing, and they frequently contradict one another.
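The validation step described above is cross-cohort replication: an association discovered in one data set is checked against independent ones. A minimal sketch of that logic, in which the cohort names, effect size, and simulated data are all illustrative assumptions rather than the study's actual data or algorithm:

```python
import numpy as np

# Toy sketch of cross-cohort replication. Cohort names and the
# simulated data below are invented for illustration only.
rng = np.random.default_rng(42)

def simulate_cohort(n: int) -> tuple[np.ndarray, np.ndarray]:
    """Simulate daily coffee cups and a heart-failure risk score in which
    higher intake is weakly associated with lower risk (by construction)."""
    coffee = rng.integers(0, 6, size=n).astype(float)
    risk = 1.0 - 0.05 * coffee + rng.normal(0.0, 0.2, size=n)
    return coffee, risk

cohorts = {name: simulate_cohort(500)
           for name in ["discovery_cohort", "replication_a", "replication_b"]}

def association(coffee: np.ndarray, risk: np.ndarray) -> float:
    """Pearson correlation between intake and risk."""
    return float(np.corrcoef(coffee, risk)[0, 1])

# Discover a negative association in one cohort, then confirm that its
# direction replicates in the two independent cohorts.
discovery = association(*cohorts["discovery_cohort"])
replications = [association(*cohorts[n]) for n in ["replication_a", "replication_b"]]
print(f"discovery: {discovery:.3f}, replications: {[round(r, 3) for r in replications]}")
```

The point of the sketch is the workflow, not the numbers: a finding only graduates from "signal in one data set" to "pretty good evidence" when its direction holds up in cohorts the algorithm never saw.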

Read full article in the New York Times

Research: New ‘Smart Cell Therapies’ To Treat Cancer

Finding medicines that can kill cancer cells while leaving normal tissue unscathed is a Holy Grail of oncology research. In two new papers, scientists at UC San Francisco and Princeton University present complementary strategies to crack this problem with “smart” cell therapies—living medicines that remain inert unless triggered by combinations of proteins that only ever appear together in cancer cells.

Biological aspects of this general approach have been explored for several years in the laboratory of Wendell Lim, PhD, and colleagues in the UCSF Cell Design Initiative and the National Cancer Institute–sponsored Center for Synthetic Immunology. But the new research adds a powerful dimension by combining cutting-edge therapeutic cell engineering with advanced computational methods.

For one paper, published September 23, 2020 in Cell Systems, members of Lim’s lab joined forces with the research group of computer scientist Olga G. Troyanskaya, PhD, of Princeton’s Lewis-Sigler Institute for Integrative Genomics and the Simons Foundation’s Flatiron Institute.

Using a machine learning approach, the team analyzed massive databases of thousands of proteins found in both cancer and normal cells. They then combed through millions of possible protein combinations to assemble a catalog of combinations that could be used to precisely target only cancer cells while leaving normal ones alone. In another paper, published in Science on November 27, 2020, Lim and colleagues then showed how this computationally derived protein data could be put to use to drive the design of effective and highly selective cell therapies for cancer.
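The screening step amounts to a combinatorial search for antigen pairs that act as an AND gate: both proteins appear together on the tumor, but on no normal tissue. A minimal sketch of that logic, using invented toy expression profiles (the antigen names and tissue sets below are illustrative, not data from the published catalog):

```python
from itertools import combinations

# Surface proteins detected on each cell type (toy profiles, not real data).
expression = {
    "kidney_cancer": {"CD70", "AXL", "CD47"},
    "normal_kidney": {"CD70", "CD47"},
    "normal_liver":  {"AXL", "CD47"},
    "normal_lung":   {"CD47"},
}

def selective_pairs(tumor: str, profiles: dict[str, set[str]]) -> list[tuple[str, str]]:
    """Return antigen pairs co-expressed on the tumor but on no normal tissue."""
    tumor_antigens = profiles[tumor]
    normals = [p for name, p in profiles.items() if name != tumor]
    hits = []
    for a, b in combinations(sorted(tumor_antigens), 2):
        # AND-gate logic: reject any pair a normal tissue also co-expresses.
        if not any(a in n and b in n for n in normals):
            hits.append((a, b))
    return hits

print(selective_pairs("kidney_cancer", expression))  # → [('AXL', 'CD70')]
```

Single antigens fail here (every protein above appears on some normal tissue), which is exactly why the team searched over combinations: the pair, not either protein alone, is what distinguishes the cancer cell.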

“Currently, most cancer treatments, including CAR T cells, are told ‘block this,’ or ‘kill this,’” said Lim, also professor and chair of cellular and molecular pharmacology and a member of the UCSF Helen Diller Family Comprehensive Cancer Center. “We want to increase the nuance and sophistication of the decisions that a therapeutic cell makes.”

Over the past decade, chimeric antigen receptor (CAR) T cells have been in the spotlight as a powerful way to treat cancer. In CAR T cell therapy, immune system cells are taken from a patient’s blood and engineered in the laboratory to express a specific receptor that recognizes a particular marker, or antigen, on cancer cells. While scientists have shown that CAR T cells can be quite effective, and sometimes curative, in blood cancers such as leukemia and lymphoma, so far the method hasn’t worked well in solid tumors, such as cancers of the breast, lung, or liver.

Cells in these solid cancers often share antigens with normal cells found in other tissues, which poses the risk that CAR T cells could have off-target effects by attacking healthy organs. Solid tumors also often create suppressive microenvironments that limit the efficacy of CAR T cells. For Lim, cells are akin to molecular computers that can sense their environment and then integrate that information to make decisions. Since solid tumors are more complex than blood cancers, “you have to make a more complex product” to fight them, he said.

Digital Health: Wearable Sensor Data Can Predict Heart Failure 6 Days Before Hospitalization

From a “Circulation: Heart Failure” Journal study (Feb 25, 2020):

The study shows that wearable sensors coupled with machine learning analytics have predictive accuracy comparable to implanted devices.

We demonstrate that machine learning analytics using data from a wearable sensor can accurately predict hospitalization for heart failure exacerbation…at a median time of 6.5 days before the admission.

Heart failure (HF) is a major public health problem affecting >23 million patients worldwide. Hospitalization costs for HF represent 80% of costs attributed to HF care. Thus, accurate and timely detection of worsening HF could allow for interventions aimed at reducing the risk of HF admission.

Data collected by the sensor are streamed to a phone and then encrypted and uploaded to a cloud analytics platform.

Several such approaches have been tested. Tracking of daily weight, as recommended by current HF guidelines, did not reduce the risk of HF hospitalization, most likely because weight gain is a contemporaneous or lagging indicator rather than a leading one. Interventions based on intrathoracic impedance monitoring also did not reduce readmission risk. These results suggest that physiological parameters other than weight or intrathoracic impedance in isolation may be needed to detect HF decompensation in a timely manner. In fact, a 28% reduction in rehospitalization rates has been shown with interventions based on pulmonary artery hemodynamic monitoring. More recently, in the MultiSENSE study (Multisensor Chronic Evaluation in Ambulatory HF Patients), an algorithm based on physiological data from sensors in implantable cardiac resynchronization therapy defibrillators was shown to have 70% sensitivity in predicting the risk of HF hospitalization or an outpatient visit with intravenous therapies for worsening HF.
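The study's analytics are not published in the excerpt above, but the general idea of flagging a patient's drift from their own multivariate baseline can be sketched as follows. The feature names, simulated values, and alert threshold are all assumptions for illustration, not the study's actual model:

```python
import numpy as np

# Toy sketch: detect decompensation as deviation from a patient's own
# multivariate baseline of daily wearable-sensor summaries.
rng = np.random.default_rng(0)

# 30 days of simulated daily features for one patient:
# columns = [resting_heart_rate, respiration_rate, activity_minutes]
baseline = rng.normal([70.0, 16.0, 120.0], [2.0, 1.0, 10.0], size=(30, 3))

def fit_baseline(days: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Per-feature mean and standard deviation over a stable reference window."""
    return days.mean(axis=0), days.std(axis=0)

def decompensation_score(day: np.ndarray, mean: np.ndarray, std: np.ndarray) -> float:
    """Aggregate deviation (RMS z-score) of one day from the patient's baseline."""
    z = (day - mean) / std
    return float(np.sqrt((z ** 2).mean()))

mean, std = fit_baseline(baseline)

# A simulated "worsening" day: elevated heart and respiration rates, reduced activity.
worsening_day = np.array([82.0, 21.0, 60.0])
normal_day = baseline[0]

THRESHOLD = 3.0  # alert threshold, chosen purely for illustration
print(f"normal-day score:    {decompensation_score(normal_day, mean, std):.2f}")
print(f"worsening-day score: {decompensation_score(worsening_day, mean, std):.2f}")
```

The design choice mirrors the passage above: no single channel (weight, impedance) is a reliable leading indicator, so the score combines several physiological signals, and each patient is compared against their own history rather than a population norm.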

Read full study

Top New Science Podcasts: Better Battery Charging, Understanding Mice & Electricity From Thin Air

This week: machine learning helps batteries charge faster, and bacterial nanowires generate electricity from thin air.

In this episode:

00:46 Better battery charging

A machine learning algorithm reveals how to quickly charge batteries without damaging them. Research Article: Attia et al.

07:12 Research Highlights

Deciphering mouse chit-chat, and strengthening soy glue. Research Highlight: The ‘silent’ language of mice is decoded at last; Research Article: Gu et al.

09:21 Harnessing humidity

A new device produces electricity using water in the air. Research Article: Liu et al.

16:30 News Chat

Coronavirus outbreak updates, the global push to conserve biodiversity, and radar reveals secrets in an ancient Egyptian tomb. News: Coronavirus: latest news on spreading infection; News: China takes centre stage in global biodiversity push

Podcasts: “LabGenius” CEO James Field On AI/Machine Learning Discovering New Medicines (Babbage)

Researchers are using artificial intelligence techniques to invent medicines and materials—but in the process are they upending the scientific method itself? The AI approach is a form of trial-and-error at scale, or “radical empiricism”. But does AI-driven science uncover new answers that humans cannot understand? Host Kenneth Cukier finds out with James Field of LabGenius…

Website: https://www.economist.com/podcasts/2019/11/27/the-end-of-the-scientific-method