Forecasting Conflict

The Global Ethics of Artificial Intelligence in Predicting War

Artificial intelligence is no longer an abstract frontier in conflict studies; it is now deployed in real forecasting systems that promise to anticipate hotspots before violence breaks out. Tools like VIEWS (Violence & Impacts Early-Warning System) issue monthly forecasts across the globe, integrating data on protests, fatalities, populations, climate, and economics, with predictions up to 36 months ahead (VIEWS Forecasting). Meanwhile, the World Food Programme's (WFP's) "Conflict Forecast" platform uses machine learning and natural language processing to map protests, riots, and conflict events in subnational settings for planning and humanitarian allocation (WFP Innovation). The world is entering an era where predictions of instability compete with diplomatic and early-action mandates.

But whose forecasts matter, with what authority, and under what ethical guardrails? To take these systems seriously, not as hype but as policy tools, we must scrutinize not only their accuracy but also their governance, political use, and moral stakes.

Evidence & Accuracy: Promise, but with Limits

One of the clearest empirical landmarks is the VIEWS Prediction Challenge 2023/24, which invited research teams to forecast the number of fatalities in state-based conflict by producing probabilistic distributions (not just point estimates) to better capture uncertainty (VIEWS Forecasting; arXiv). The shift to probabilistic forecasts matters: when conflict is latent or nascent, low-probability but high-impact risks must not be ignored.
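To make the distinction concrete, distributional forecasts are typically evaluated with proper scoring rules such as the Continuous Ranked Probability Score (CRPS). The sketch below, with illustrative numbers only, uses the standard sample-based CRPS estimator to show why a well-calibrated spread can beat a confident point forecast when a rare escalation occurs.

    import numpy as np

    def crps_from_samples(samples, observed):
        # Sample-based CRPS estimate: E|X - y| - 0.5 * E|X - X'|.
        # Lower is better; it rewards putting probability mass near
        # the observed fatality count, not just a good point guess.
        samples = np.asarray(samples, dtype=float)
        term1 = np.mean(np.abs(samples - observed))
        term2 = 0.5 * np.mean(np.abs(samples[:, None] - samples[None, :]))
        return term1 - term2

    # A confident "nothing changes" forecast vs. a heavy-tailed one
    # that keeps some mass on escalation; the observed month spikes.
    rng = np.random.default_rng(0)
    point = np.full(1000, 10.0)
    spread = rng.lognormal(mean=2.3, sigma=1.0, size=1000)
    print(crps_from_samples(point, 120))   # confident and wrong
    print(crps_from_samples(spread, 120))  # uncertain but covered

The heavy-tailed forecast scores better precisely because it refused to rule out the escalation, which is the behavior the challenge's probabilistic framing is meant to reward.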

Technical advances also illustrate both power and limitation. A recent model combining convolutional neural networks with long short-term memory (ConvLSTM) outperformed baseline models in forecasting changes in battle-related deaths, but struggled with violence escalation and tended to overpredict spatial spread under certain conditions (vics.lab.ufl.edu). Another frontier work, Next-Generation Conflict Forecasting: Unleashing Predictive Patterns through Spatiotemporal Learning, presents an architecture delivering forecasts for multiple conflict types (state-based, non-state, one-sided) with uncertainty estimates, pushing resolution to subnational grids up to 36 months ahead (arXiv). Yet even in that work, the authors caution against overconfidence: these are forecasts, not certainties.
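For readers unfamiliar with the architecture, the core idea of a ConvLSTM is an LSTM whose gates are computed by convolutions, so memory is kept per grid cell and spatial patterns can propagate over time. The minimal PyTorch sketch below is not the cited paper's model; the channel counts, grid size, and regression head are placeholders.

    import torch
    import torch.nn as nn

    class ConvLSTMCell(nn.Module):
        # An LSTM cell whose gates are convolutions over a spatial grid,
        # so the hidden state retains the map layout of the input features.
        def __init__(self, in_ch, hid_ch, kernel=3):
            super().__init__()
            self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch,
                                   kernel, padding=kernel // 2)

        def forward(self, x, h, c):
            i, f, g, o = torch.chunk(self.gates(torch.cat([x, h], 1)), 4, 1)
            c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
            h = torch.sigmoid(o) * torch.tanh(c)
            return h, c

    # Roll over 12 monthly feature grids (e.g., protests, fatalities)
    # and map the final hidden state to a per-cell change prediction.
    B, C, HID, H, W = 1, 5, 16, 64, 64
    cell, head = ConvLSTMCell(C, HID), nn.Conv2d(HID, 1, 1)
    h = torch.zeros(B, HID, H, W)
    c = torch.zeros_like(h)
    for x in torch.randn(12, B, C, H, W):
        h, c = cell(x, h, c)
    predicted_change = head(h)  # one value per grid cell

The convolutional gates are also why such models can smear risk onto neighboring cells, a mechanism consistent with the overprediction of spatial spread noted above.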

Critics also show that, under particular scoring schemes, simpler "no-change" models can sometimes outperform complex ones. For instance, the TADDA metric (Targeted Absolute Deviation with Direction Augmentation) penalizes predictions that get the direction of change wrong (escalation vs. de-escalation), and under it, advanced models in some settings lose to a default "no-change" baseline (arXiv). In short: forecasting systems can be impressive, but they are fragile, sensitive to scoring, data quality, and conceptual framing.
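To see why scoring choices matter, consider a simplified TADDA-style score; the published definition has variants and tolerance details, so the version below is an illustration, not the official metric. It charges absolute error on the predicted log-change plus a penalty of |prediction| when the predicted direction contradicts the observed one.

    import numpy as np

    def tadda_like(pred, actual, eps=0.05):
        # Simplified, illustrative TADDA-style score on predicted
        # log-changes in fatalities: absolute error plus a directional
        # penalty of |pred| when the sign of the prediction disagrees
        # with the sign of the outcome by more than eps.
        err = np.abs(pred - actual)
        wrong_dir = (np.sign(pred) != np.sign(actual)) & (err > eps)
        return np.mean(err + wrong_dir * np.abs(pred))

    actual = np.array([0.5, -0.3, 0.0, 1.2])   # observed log-changes
    model  = np.array([0.8, 0.4, -0.2, 0.9])   # two directions wrong
    no_change = np.zeros_like(actual)          # always predict "same"
    print(tadda_like(model, actual))           # ~0.53
    print(tadda_like(no_change, actual))       # 0.50: baseline wins

Because a zero forecast never pays the direction penalty, the "no-change" baseline is structurally favored, which is exactly the critique leveled at the metric.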

Governance, Control & Political Risks

Forecasting is not just a scientific endeavor; it is political. The act of labeling a region "high-risk" confers power over resources, attention, and intervention. If such a tool is closed, opaque, or controlled by narrow interests, it becomes a mechanism of surveillance rather than prevention.

In many AI surveillance critiques, predictive policing features prominently. The paper "Predictive Policing or Predictive Prejudice?" argues that predictive policing systems magnify bias: if the data are already skewed, the algorithm perpetuates stereotypes and policing becomes preemptive, not reactive (OxJournal). The logic of labeling an area a hotspot and then policing it accordingly mirrors how conflict forecasting could be used to justify repression rather than mediation.

Autocratic regimes already weaponize AI. In "How Autocrats Weaponize AI and How to Fight Back," the authors document how states use surveillance, facial recognition, predictive tools, and online profiling to monitor and suppress dissent, especially in authoritarian contexts (Journal of Democracy). A forecast of unrest in a given region might be co-opted by security forces to preemptively clamp down on protests under the cover of "preventive action."

The International Committee of the Red Cross (ICRC) warns that AI in warfare and humanitarian action must preserve human judgment. It stresses that AI systems should support but never replace human agency, especially in decisions that affect life and death (International Review of the Red Cross). The tension is clear: forecasting tools offer predictive capacity, but liberal norms require human oversight and accountability.

Moral & Conceptual Tensions

Forecasting reframes conflict as data: as probability surfaces on maps. But violence is always rooted in agency, history, inequality, power struggles, identity, and moral failures. No algorithm can encode the legacy of colonialism, the betrayals of peace agreements, or the sudden rupture of trust. Reducing these to input features risks flattening complexity.

Moreover, the more we come to "expect" conflict through data, the more human surprise and shock are domesticated. War becomes forecastable, perhaps banal, rather than tragic. As scholars like Max Murphy, Ezra Sharpe, and Kayla Huang argue, we should treat predictive conflict models cautiously: their value lies in heuristic insight, not techno-fetishism (ResearchGate).

From Prediction to Prevention: The Hard Gap

Forecasting without response is forecasting in vain. Many early warning systems fail because alerts don't trigger credible action: countries receive warnings but lack the capacity, political will, or coordination to intervene.

Even within the UN system, integrating predictive tools is far from straightforward. Eduardo Albrecht's UNU discussion warns that predictive technologies often run up against uneven data quality, political resistance, and automation bias, where decision makers over-rely on algorithmic suggestions (United Nations University).

Practically, a system like WFP's Conflict Forecast may flag rising protest risk in a region, but whether humanitarian agencies, national governments, or regional bodies can act is a separate question. The forecast is only an input, not a guarantee of action.

Given the empirical and normative landscape, my core position is this: conflict forecasting by AI is no silver bullet, but it is a useful tool if (and only if) we embed it within democratic governance, local legitimacy, and accountability frameworks.

1. Forecasts must be open and auditable. Tools such as VIEWS publish code and datasets for peer review, which is a model to emulate (GitHub).

2. Human-in-the-loop decision making is non-negotiable. Predictions should inform deliberation, not replace it.

3. Ethical limits on use must bind forecasts. They should not be used to justify repression or surveillance of marginalized communities.

4. Localized co-design and contextual validation: forecasting models must not be monolithic global templates. A model may struggle when transplanted from Congo to Nepal without retraining and local adaptation.

5. Precautionary deployment: pilots should be carefully tested, error rates disclosed, and negative side effects (false positives, overreaction, stigmatization) monitored, as in the sketch after this list.
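Concretely, monitoring a pilot means tracking not just headline accuracy but the operational alert rates at the chosen threshold. The sketch below uses hypothetical inputs and a made-up threshold: a per-district forecast probability and a binary record of whether violence occurred.

    import numpy as np

    def alert_report(prob, onset, threshold=0.3):
        # prob: forecast probability of violence per district-month
        # onset: 1 if violence actually occurred, else 0
        alert = prob >= threshold
        tp = np.sum(alert & (onset == 1))   # correct alarms
        fp = np.sum(alert & (onset == 0))   # false alarms: districts
        #   needlessly flagged, with real stigmatization costs
        fn = np.sum(~alert & (onset == 1))  # missed onsets
        tn = np.sum(~alert & (onset == 0))
        return {
            "precision": tp / max(tp + fp, 1),
            "recall": tp / max(tp + fn, 1),
            "false_positive_rate": fp / max(fp + tn, 1),
        }

    rng = np.random.default_rng(1)
    prob = rng.uniform(0, 1, 500)
    onset = (rng.uniform(0, 1, 500) < prob).astype(int)  # toy data
    print(alert_report(prob, onset))

Publishing exactly these numbers for each pilot, rather than a single accuracy figure, is what "error rates disclosed" should mean in practice.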

Conclusion

AI-driven conflict forecasting offers a tantalizing possibility: to see violence before it ignites. But it also presents a deep moral danger: to replace political judgment with computational authority. So long as forecasting remains divorced from action, transparency, and accountability, it risks becoming a tool of power rather than protection.

Forecasting war cannot guarantee peace. But used with humility, it can sharpen our vigilance, prompt timely mediation, and guide resources. The real test is whether society can retain moral agency in the face of predictive power.