Over the last two decades, climate change has contributed to instability around the world—bolstering the ranks of terror groups, sparking territorial disputes, and triggering mass migration. It’s no wonder that militaries and international security organizations increasingly see climate change as a formidable national security threat, and that they have in turn taken on greater roles in fighting it.
In recent years, new technology has transformed the tools available to mitigate the effects of climate change. Specifically, artificial intelligence (AI) has quickly become a widespread instrument in the climate fight—even as it consumes exceptional amounts of energy and water itself. Today, militaries and nonstate actors, including the United Nations, are leveraging AI to forecast climate-related disasters, optimize energy use, and monitor ecological degradation.
The AI tools that militaries use for climate purposes are dual-use technologies, however, and inherently overlap with those used for morally dubious purposes, such as combat targeting and surveillance. Military applications of AI have generally enjoyed a steady flow of funding, and now that the technology has progressed, it is being adapted for purposes beyond war zones.
Experts say there isn’t a way to disentangle this knot. Instead, there must be political will to regulate the malevolent uses of AI and boost the benevolent ones. “I think that both aspects of applying AI technology to military problems and climate problems are not straightforwardly good or bad, and much depends upon the specific problem and the way the technology is deployed,” Peter Asaro, a professor at the New School, said.
“AI, like any new technology, is a tool, and as a tool it can be used for good or bad, to harm people or to empower them,” said Benjamin Sovacool, a professor at Boston University and the University of Sussex. “Whether their uses are virtuous or viceful, righteous or reckless, depends on context, and how they are used.”
The U.S. military, which itself has a significant environmental impact, has had to get smarter on climate issues in recent years as they begin to threaten some of its core interests. For example, as natural disasters become more frequent and intense, AI has been transformative for the U.S. military’s relief efforts. When Hurricane Helene hit the southeastern United States last year, the U.S. military deployed its Maven Smart System on the home front.
Designed primarily for the U.S. Department of Defense by several big technology and defense companies, most notably Palantir Technologies, the Maven Smart System uses AI algorithms to identify potential combat targets, analyzing data from enormous intelligence feeds, including satellite imagery. The U.S. military used Maven to manage personnel, logistics, and threats during the 2021 Kabul airlift; to identify the locations of Russian equipment following the full-scale invasion of Ukraine; and, more recently, to pinpoint targets for airstrikes on militias’ weapons depots in Iraq and Syria and to locate rocket launchers in Yemen and surface vessels in the Red Sea.
But after Hurricane Helene, Maven helped the U.S. Army map road closures and cellular outages and streamline relief efforts, such as by identifying areas lacking medical supplies or calculating how many truckloads of water an area needed. Responders no longer had to sift through spreadsheets to find critical data, as Maven automatically extracted and highlighted it for them. It was the first time the tool had been used in a hurricane response.
Beyond disaster relief, the U.S. military is reckoning with how climate change is making its bases and strategic sites vulnerable. At home, wildfires have ravaged facilities in California, and hurricanes have torn apart military bases in the southeast. In 2018, Hurricane Michael destroyed most buildings at Tyndall Air Force Base in Florida and damaged several aircraft. Today, the multibillion-dollar project to rebuild the base has yet to be completed.
Many domestic military facilities are “at risk of breakdown” as sea levels rise or as warming temperatures melt permafrost in the Arctic, defense analyst Albert Palazzo said. Overseas bases are threatened, too. Take Diego Garcia, an island in the Indian Ocean that serves as a key forward base for both the U.S. and British militaries. It has only 10 square miles of dry land, meaning that a sea-level rise of just a few feet could force soldiers to relocate. Similar problems exist for U.S. bases in Bahrain, Djibouti, and Guam.
“Eventually, the U.S. and other militaries will have to close some of these more exposed bases,” Palazzo said. “The loss of these facilities will make it harder for the United States to dispatch military power globally and to conduct and sustain operations.”
The Pentagon has long employed technology to assess what it calls climate hazards, such as the Climate Assessment Tool, a geospatial platform that gauges climate change exposure at domestic and overseas bases. Now, the U.S. military is bringing AI into the fold to augment its ability to forecast extreme and unpredictable events. It uses, for instance, the READI Toolkit from Charles River Analytics, which employs AI to model risks from floods, fires, and hurricanes. The Air Force’s Earth Intelligence Engine also flags climate vulnerabilities across its installations.
It’s unclear whether U.S. President Donald Trump will slash climate funding for military-related AI projects, as he has done with research grants for the National Science Foundation and funding for the National Climate Assessment. It’s possible that, without explicitly acknowledging climate change, the administration could spare the military’s climate-related projects, Sherri Goodman, a former senior Pentagon official, said. But even that remains to be seen. (The U.S. Department of Defense did not respond to requests for comment.)
Climate change also weighs on U.S. counterterrorism objectives, as its effects can trigger societal unrest and even fuel extremism. In defense circles, climate change is referred to as a “petri dish for terrorism,” Goodman said. “In fragile parts of the world, especially in Latin America, the Middle East, and North Africa, disrupted food and water systems make local populations more vulnerable to militant groups.” In other words, joining armed groups can be less an ideological choice than a last resort.
In Syria, a historic drought that began in 2007 crushed the agricultural sector and forced masses of farmers into cities. Unemployment, food insecurity, and social tension followed—conditions that the Islamic State and other groups later exploited for recruitment during Syria’s civil war. Similar dynamics are unfolding across the Sahel, where groups such as Boko Haram capitalize on environmental degradation, recruiting disillusioned people with promises of income.
This is where AI can be useful to the U.S. military, as well as to nonstate actors seeking to mitigate the global threat of failed states and the proliferation of militant groups.
Somalia—where climate change has warped rainfall, decimated the pastoral economy, and fueled piracy and recruitment for the militant organization al-Shabab—is a good case study. There, the United Nations has partnered with the company Omdena to bring AI into the fight against climate challenges. One model analyzed satellite imagery to predict environmental degradation, which is linked to forced displacement, giving aid groups the chance to intervene before people had to migrate to overcrowded urban centers.
Another initiative in this partnership focused on Somali agriculture, developing a pest management system that uses remote sensing and computer vision to help farmers spot and fight infestation faster. In one project, a team of 34 data scientists built machine-learning models to predict displacement driven by drought and food insecurity. The team’s AI system hit 99 percent accuracy in identifying areas at risk of ecological crises, and, potentially, of displacement and conflict.
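For readers curious what such a pipeline can look like in practice, the sketch below shows a minimal, hypothetical version: a classifier trained on satellite-derived indicators that flags districts at elevated displacement risk. The features, labels, and data here are illustrative stand-ins, not details of the UN-Omdena models.

```python
# Hypothetical sketch only: not the UN/Omdena code. It trains a simple
# classifier on synthetic satellite-style indicators to flag districts
# at risk of drought-driven displacement.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Each row is one district-month: vegetation anomaly, rainfall anomaly,
# soil-moisture anomaly. Synthetic values stand in for remote-sensing feeds.
X = rng.normal(size=(1000, 3))

# Illustrative label: risk rises when vegetation and rainfall are both low.
y = ((X[:, 0] + X[:, 1]) < -1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Report precision and recall rather than a single accuracy figure, since
# "at risk" districts are a minority class.
print(classification_report(y_test, model.predict(X_test)))
```

In a real deployment, predictions like these would be combined with humanitarian and displacement data so that aid groups can act before a crisis forces people to move.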
AI has already proved to be a useful tool for monitoring things like deforestation and drought. But when it is applied in other military contexts, ethical lines can blur. Many experts fear that without strict regulations, the technology’s upsides in a climate context will be overshadowed by its downsides elsewhere—especially with regard to mass surveillance, profiling, biometric identification, predictive policing, autonomous weaponry, and targeted killing.
The same technology that can help the military forecast and adapt to climate issues can also be an effective weapon, depending on who is using it and for what purpose. Some countries have started using AI language models for climate data analysis, such as forecasting extreme weather events or modeling glacial retreat. But the Israel Defense Forces, for instance, reportedly used architectures similar to those behind GPT models to build an AI system trained on Palestinians’ private communications.
Lucy Suchman, a professor at Lancaster University, worries about the starkly different outcomes of using the same or similar technology to track environmental phenomena versus deploying it against human beings. “In contrast to glaciers, categories like ‘terrorist,’ or ‘Hamas militant,’ or even worse, ‘Hamas affiliate,’ (for example) are based not on deeply informed understanding but rather on often very crude forms of profiling and guilt by association,” Suchman said.
Maven is another example. At first, the project was meant to boost image recognition for combat in the Middle East and Central Asia. Now, it is also being used for environmental monitoring and disaster response. This overlap underscores how porous the boundary between wartime and climate applications of AI has become: there is no clean way to separate the technology that powers one from the other.
“Again, it is not AI or the specific form of AI models that are good or bad,” Asaro said. “What matters is how it is applied and used, who is using it, and how we regulate its use, ensure transparency and accountability, and that the systems are actually advancing the values they were built for.”
