
How AI Can Help Defense Organizations Become Smarter, Simpler and Stronger

Updated: Nov 14, 2019

Artificial intelligence (AI) is one of the most disruptive technologies of our generation. And it’s affecting the defense sector as much as any other. 


Media coverage of this disruption tends to focus on the more contentious ways in which organizations could use AI to get ahead. (For example, by developing autonomous robots and other intelligent weapons that could fight future wars with little or no human input.) 

International concern has strengthened this focus: in July 2018, more than 2,000 researchers from 36 countries signed a pledge not to develop lethal autonomous weapons.


In the same year, Google pledged not to develop AI software for use in weapons. The move came after several employees resigned over the company's contract with the US Department of Defense to analyze footage from remotely piloted aircraft.

But this attention on autonomous weapons distracts from the main benefit AI brings to defense organizations: the ability to analyze volumes of data too vast for humans to process. By producing insights from that data, AI helps defense leaders make faster, more accurate decisions across all their operations, not just in combat situations. So it's emerging as a base technology for augmenting, not replacing, human intelligence.



Defense organizations are locked in a global AI race 



Around the world, defense organizations are scrambling to position themselves as leaders in AI. The US Department of Defense's Third Offset Strategy focuses on developing emerging technologies, with AI as a key element. (DARPA, the Pentagon's research agency, has allocated US$2 billion to AI development for 2018-23.) China plans to be a global AI leader by 2030, and its military is already partnering with research institutes on AI-related projects. The Russian military has also shown a strong commitment to developing and deploying such projects.


But while the applications of AI in defense grow and develop each day, most are still in the design, testing or evaluation stage. Here, we’ve highlighted six main areas where using AI could make defense organizations smarter, simpler and stronger.


  1. Using algorithms to process information for faster, more accurate decision-making. Defense organizations collect surveillance data from sources as varied as social media, satellites, remotely piloted aircraft, adversaries' websites and sensors fitted to military vehicles. AI improves their ability to analyze this data, so they can make decisions and act more quickly. To digitize military activities on a large scale, though, defense organizations will need to protect the information on their servers and web portals. AI could help these systems learn for themselves, and it could help organizations get better at spotting cyber breaches: in the US, MIT's Computer Science and Artificial Intelligence Laboratory has developed an AI platform that can detect 85% of cyber-attacks and reduce false alarms. (A minimal sketch of this kind of breach detection appears after this list.)

  2. Strengthening existing weapon systems by making them autonomous. AI could lead to machines that automatically move, detect and destroy their targets, making militaries dramatically more effective. AI-enabled machines could also expand the battlefield by entering areas that wouldn’t be accessible to humans. Israel has deployed autonomous military vehicles near the Gaza border for patrols and to identify threats. And China is developing autonomous submarines, which it expects to deploy in the 2020s.

  3. Allocating and planning manpower automatically. AI could combine information on soldiers' capabilities and past mission performance, then use it to assess their strengths and weaknesses. This would allow agencies to match people to missions more effectively (see the assignment-problem sketch after this list). The British Army has been using business intelligence software and analytical tools to simplify and align the vast amounts of manpower data it holds. By understanding that data better, it has been able to make more informed decisions about how to allocate manpower, and has avoided £770 million in wasted expenditure.

  4. Carrying out training and simulating warfare. Militaries could incorporate AI into their training programs to create realistic simulations that prepare trainees for actual warfare. This could include modifying training scenarios in real time to reflect a trainee's level of ability (see the adaptive-difficulty sketch after this list). For example, the head of training for the US Air Force plans to use AI to watch trainee fighter pilots as they practice maneuvers in a simulator. The AI system will learn from the trainee's actions, then give real-time feedback suited to that person's particular learning style, so they learn faster and better.

  5. Building effective logistics and transportation networks. AI could help militaries get the right troops, goods, ammunition and weapons to the right place, at the right time, more cheaply and with less human effort. In doing so, it could shift operations from reactive to proactive, planning from forecasting to prediction, and services from standardized to personalized. For example, a US start-up is testing whether AI can predict when parts in the US Army's vehicles might fail, to prevent battlefield breakdowns (see the predictive-maintenance sketch after this list).

  6. Improving battlefield medicine. Integrating AI into robotic surgical systems and robotic ground platforms could help to reduce deaths on the battlefield and to extract casualties. It could also help to make sure military health care workers have the skills they need. The US Department of Defense is working with the University of North Carolina to develop an analytical tool that evaluates patient data, with the aim of predicting the type of care military health care workers should provide in different scenarios.
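
To make point 1 concrete: spotting breaches in network traffic is often framed as anomaly detection. The sketch below uses scikit-learn's IsolationForest on invented connection features; it's a toy illustration of the general technique, not a description of how MIT's platform works.

```python
# Minimal anomaly-detection sketch for spotting unusual network activity.
# All features and numbers here are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Pretend each row is one connection: [bytes sent, bytes received, duration in s]
normal_traffic = rng.normal(loc=[500, 800, 30], scale=[100, 150, 10], size=(1000, 3))
suspicious = np.array([[50_000, 100, 2],    # huge upload over a short connection
                       [400, 90_000, 1]])   # huge download over a short connection

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_traffic)

# predict() returns 1 for inliers and -1 for anomalies
for row, label in zip(suspicious, model.predict(suspicious)):
    print("ALERT" if label == -1 else "ok", row)
```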
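
Point 3's matching of people to missions can be framed as a classic assignment problem. Here's a minimal sketch using scipy; the names and suitability scores are invented stand-ins for what a real model of capacity and past mission performance would produce.

```python
# Matching personnel to missions as an assignment problem.
# Suitability scores are invented for illustration.
import numpy as np
from scipy.optimize import linear_sum_assignment

people = ["Adams", "Baker", "Clark"]
missions = ["logistics", "recon", "training"]

# suitability[i][j]: how well person i fits mission j (higher is better)
suitability = np.array([
    [0.9, 0.4, 0.6],
    [0.3, 0.8, 0.5],
    [0.7, 0.6, 0.9],
])

# linear_sum_assignment minimizes cost, so negate to maximize suitability
rows, cols = linear_sum_assignment(-suitability)
for i, j in zip(rows, cols):
    print(f"{people[i]} -> {missions[j]} (score {suitability[i, j]:.1f})")
```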
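
Point 4's real-time scenario adjustment can be as simple as a feedback loop over recent performance. This toy sketch assumes a single score per simulator run; it is not a description of the US Air Force's system.

```python
# Toy adaptive-difficulty loop for a training simulator.
# Scores and thresholds are invented for illustration.
from collections import deque

def adjust_difficulty(current, recent_scores):
    """Nudge difficulty up when the trainee is cruising, down when struggling."""
    avg = sum(recent_scores) / len(recent_scores)
    if avg > 0.8:    # consistently strong: make scenarios harder
        return min(current + 0.1, 1.0)
    if avg < 0.5:    # struggling: ease off so learning can continue
        return max(current - 0.1, 0.1)
    return current   # in the sweet spot: leave difficulty alone

scores = deque(maxlen=5)
difficulty = 0.5
for score in [0.9, 0.85, 0.95, 0.9, 0.88]:  # simulated run results
    scores.append(score)
    difficulty = adjust_difficulty(difficulty, scores)
print(f"difficulty after five runs: {difficulty:.1f}")
```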
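
Finally, point 5's part-failure prediction is typically a supervised model trained on sensor readings from parts that did and didn't fail. The sketch below runs on entirely synthetic data and makes no claim about the start-up's actual approach.

```python
# Predictive-maintenance sketch: flag vehicle parts likely to fail soon.
# All data is synthetic and invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000

# Features per part: operating hours, vibration level, peak temperature
hours = rng.uniform(0, 5000, n)
vibration = rng.normal(1.0, 0.3, n) + hours / 10_000  # wear raises vibration
temperature = rng.normal(80, 10, n)
X = np.column_stack([hours, vibration, temperature])

# Synthetic ground truth: old, vibrating, hot parts fail more often
risk = 0.0002 * hours + 0.5 * vibration + 0.01 * temperature
y = (risk + rng.normal(0, 0.3, n) > 2.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")

# Inspect the highest-risk parts before they break down in the field
fail_prob = model.predict_proba(X_test)[:, 1]
print("parts to inspect first:", np.argsort(fail_prob)[-5:])
```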


The dangers of deploying AI in defense


As the previous examples show, AI is the future of defense. But there are real risks attached to using it in the sector. These include:


Ethics. Concern is growing about the implications of delegating life-or-death decision-making to machines. AI may not be able to distinguish between a civilian and a combatant, for example, which could lead to unintended casualties. It’d also be difficult to program AI for every contingency, making its responses hard to predict. And pitting AI systems against each other could generate complex environments to which they might struggle to adapt.


Security. The consequences of an adversary hacking into a country’s AI systems could be deadly. These could include remotely piloted aircraft dropping bombs on civilian sites or autonomous weapons killing innocent people.


Predictability and reliability. An AI-equipped machine makes decisions based on the complex algorithms its developers have programmed into it. If those decisions are flawed, it's hard to know whether the cause lies in errors or bias in the input data or in the machine's own analysis. Either way, not being able to predict how an AI-equipped machine will react makes it difficult for a command center to implement a strategy. It would also be hard to hold anyone responsible for a decision an AI machine made without human input.

Finally, AI is only as good as the data people give it. But as it’s very difficult to remove bias completely from that data, the systems it fuels may not be reliable.



Making the most of AI in defense while minimizing the risks


While the outcomes of these risks are specific to defense, the risks themselves could apply to the use of AI in any context. That’s why we’ve dedicated a separate paper in this series to developing trusted AI. 


But even if militaries invest in security, reliability and predictability, the edge that AI-equipped machines could give them may tempt them to cross ethical lines. No national government can stop another from doing so. But governments could mitigate some of the risk, for example by reaching a global agreement on how to develop and apply autonomous and semi-autonomous weapon systems. They could also keep a human in the loop between sensors and shooters, so that targeting decisions always involve human judgment.


Meanwhile, individual governments could help to develop AI in defense by:


  • Incentivizing research collaborations between top academic institutions and the military

  • Providing more grants for R&D in AI

  • Establishing a data protection framework with legal backing

  • Developing regulations that make damage-impact assessments mandatory at every stage of development and testing

  • Introducing AI courses in educational institutions to help upskill civilians and the military


Taking these steps will allow defense organizations to realize the ultimate benefit of AI: becoming smarter, simpler and stronger in the back office, as well as in combat situations.





© De Angelis & Associates 2019. All Rights Reserved.





