
From her office on Helsinki’s South Harbour, Johanna Poutanen can see the Presidential Palace, where Donald Trump and Vladimir Putin met for talks in 2018. Finland’s capital was also the favoured city for Cold War summits between US presidents and their Soviet counterparts, and it remains popular with peacemakers thanks to the country’s history of neutrality and reputation for diplomacy. But in her role as head of digital peacemaking at the Crisis Management Initiative (cmi) – also known as the Martti Ahtisaari Peace Foundation – Poutanen wants to bolster Finland’s diplomatic savvy with innovative solutions for resolving conflicts. “Digital tools can help us analyse, visualise and present data to enhance understanding in mediation contexts, reach stakeholders who are difficult to engage in peace negotiations and disseminate critical information more effectively,” says Poutanen, who has just concluded a tender process for providers of “peace tech”: technology designed to help prevent or end conflict.

A bomb-disposal robot at the US embassy in Nairobi

cmi has already used the AI-powered analytics tool Remesh to understand women’s priorities for peace talks in Sudan. In Yemen it used the Inclus platform to build consensus between political groups by visualising their agreements and disagreements. Elsewhere, Project Didi, which is based in Israel, uses “ripeness theory” to explain why parties in peace negotiations might be hostile to a proposed agreement at first but later agree to the very same terms. The Human Rights Data Analysis Group in San Francisco documents human-rights violations and verifies casualty data with machine learning. Peace Geeks in Vancouver operates a messaging platform for Ugandan victims of war crimes.

The trouble is that large data sets and data-driven tools for making peace can easily be turned against the very people they aim to protect. Take “big data” firm Palantir, which uses AI to analyse satellite images, open-source data, drone footage and on-the-ground reports. Lauded for helping to clear landmines and resettle refugees in Ukraine, Palantir, part-funded by the cia’s venture-capital arm, is also contracted to use the same technologies to supply targeting information to military forces. In the wrong hands, such tools could be exploited to attack vulnerable populations or manipulate peace negotiations.

We need strong regulations for the safe, ethical and moral use of peace tech. “Data protection and security are an absolute priority,” says Tim Epple, managing director of Edinburgh University’s PeaceRep initiative. “The challenge is preventing dual use of peace tech. Imagine, for example, if data collected on the ethnicity of respondents in a conflict zone gets into the hands of nefarious actors.”

Policing peace tech, which is often deployed in fragile states where governance is weak and accountability non-existent, won’t be easy. Can individual nations be trusted any more than big business or entrepreneurs to use technology for peaceful purposes? Of course, we’ll need top-down oversight by global institutions, such as the UN, itself a significant peace-tech developer, and a legally binding international treaty that expands the jurisdiction of war-crimes tribunals to cover the unethical use of peace tech.

But we should also use technology such as blockchain to track and publicly record the development and sale of any tech that could be misused, and mandate that all peace-tech systems have a “kill switch” that can be triggered if they are found to be used for warfare. An international team of digital peacekeepers and cyber experts must be empowered to intervene, neutralising threats of peace-tech misuse in conflict situations. Ethical hackers swapping red hats for blue berets might be our best bet for ensuring that technology makes peace, not war. — L
