AI weapons can attack with greater speed and precision than existing systems. But is AI smart enough to operate without human intervention, identify targets, and ethically decide whom to kill and whom to spare?
Humans have used weapons since the Stone Age for various purposes. Man learned very quickly that he wasn't strong enough to perform certain tasks. He realized that he was vulnerable and needed additional tools to defend himself against danger, be it from an animal or another man. Hence began the quest to manufacture weapons. Beyond self-defense, humans also used weapons for chores like hunting and chopping trees to build shelter. Different tools were made for different purposes. Since then, weapons have evolved continuously and rapidly with our changing needs. Today, weaponry has progressed from simple stone knives and metal blades to complex arms like biological and nuclear weapons, whose effects persist for generations after they are used.
The world has seen numerous wars that have brought about a revolution in the weapons industry, and the effects of the weapons used in them still remain. Even though many countries have signed peace treaties, no country has ever stopped manufacturing new weapons and developing new techniques to do so. With every advance in technology, weapons have been modified to adopt it, becoming more effective and efficient. AI, the current trend, is finding applications in almost every field, and weapons are no exception. Once research is published, it is impossible to control how it will be used. Thus, AI weapons have become the latest addition to the defense industry.
AI weapons: who is using them?
While many AI weapons are still being developed or debated over, a few countries have already implemented AI and are successfully using those systems. South Korea has installed sentry guns on its side of the demilitarized zone (DMZ) that can fire autonomously on targets. Germany uses surface-to-air missile (SAM) systems like the Patriot and MANTIS, which can operate in fully automated modes to shoot down incoming projectiles.
Ethics: can machines be ethical?
Since ethics forbid even human beings from killing one another, can it be right to hand the authority to kill over to a machine? A machine which is, after all, programmed by a human being. What if this autonomous system commits a war crime? Or what if it simply makes an erroneous decision? A wrong decision may cost lives, and the death toll could be anything from a single person to millions. Most importantly, who is to be blamed for the mistakes of these machines? Who should be held responsible for their actions?
Pros: aren’t AI weapons better?
According to countries that advocate for and are developing AI weapons, such technologies would make military actions more precise, minimizing collateral damage. The number of civilian lives lost to war could be substantially reduced. The lives of many soldiers could also be saved, since these weapons would eliminate the need for human beings to enter the battlefield.
Speed: how fast?
One of the reasons autonomous weapons are deemed desirable is speed. Humans have limited capacity for conscious processing when making snap decisions, and a human being can make errors or be biased. Machines, by contrast, do not suffer from fatigue or emotion-driven bias, and they can process large volumes of information far faster. Quick decisions are of utmost importance in warfare, and machines outperform humans in rapid decision making, especially when a lot of information needs to be processed.
Stealth: how stealthy?
Another reason AI weapons are supported is stealth. Currently, any remotely controlled unmanned system needs a continuous link to its command and control center. The problem is that these links are easy to detect, making the system traceable. An autonomous machine, by contrast, can enter the target zone and carry out its mission without needing any commands, making it much harder for the enemy to detect.
Swarms: how many?
The major reason technologically advanced countries are interested in autonomy is that AI allows military capacity to scale dramatically. Typically, each unmanned system needs its own human controller. As a result, only a limited number of unmanned systems can operate together, since each controller must stay in perfect sync with the others, and maintaining that sync across a large group is almost impossible for humans. An AI system, however, can control many unmanned weapons simultaneously. One example of this concept is a swarm of drones: Intel holds the record for the largest number of drones controlled and flown simultaneously by a single computer.
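The scaling idea above can be made concrete with a toy simulation. The sketch below is purely illustrative: a single control loop issues movement commands to an arbitrary number of simulated drones each tick, which is exactly why one computer can coordinate a swarm that no team of individual human operators could. The `Drone` class, waypoints, and speeds are all invented for this example.

```python
import math

class Drone:
    """A minimal simulated drone with a 2D position."""
    def __init__(self, x, y):
        self.x, self.y = x, y

    def step_toward(self, tx, ty, speed=1.0):
        # Move at most `speed` units toward the target waypoint.
        dx, dy = tx - self.x, ty - self.y
        dist = math.hypot(dx, dy)
        if dist <= speed:
            self.x, self.y = tx, ty  # close enough: snap to target
        else:
            self.x += speed * dx / dist
            self.y += speed * dy / dist

def control_swarm(drones, waypoints, steps=100):
    # One loop commands every drone each tick; the controller's
    # structure is identical whether it steers 5 drones or 500.
    for _ in range(steps):
        for drone, (tx, ty) in zip(drones, waypoints):
            drone.step_toward(tx, ty)

# Fifty drones launched from the origin fly into a line formation.
swarm = [Drone(0, 0) for _ in range(50)]
targets = [(i, 10) for i in range(50)]
control_swarm(swarm, targets)
```

After 100 simulated ticks every drone has reached its assigned waypoint, without any per-drone human input, which is the essence of the swarm argument.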
Identification: can an AI weapon distinguish between a militant and a civilian?
But will AI be smart enough to distinguish between a target and a hostage? Many air defense systems already have significant autonomy in their target identification mechanisms. Military aircraft are loaded with automated features that assist pilots on missions and save valuable time in critical moments. Features like heat-seeking, target identification, and auto-aiming help pilots act quickly. But what if any of these features malfunctions? We have already heard of incidents in Afghanistan, Pakistan, and Syria where airstrikes by drones have killed both innocent civilians and combatants. In combat, even a drone operator squinting at a screen may struggle to tell whether the people on the ground are terrorists carrying arms or farmers carrying spades and farming tools. Many tests have shown how easily these technologies can be misled. Even in principle, reliably classifying a person as a civilian or a combatant is extremely difficult for an AI system. So will AI be able to classify a target as friend or foe? Will AI actually decrease collateral damage? A human can choose to prioritize keeping collateral damage low, or even negligible where possible. But will a machine understand what to prioritize?
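One commonly proposed safeguard for the identification problem is a confidence threshold with a human in the loop. The sketch below is a hypothetical illustration, not a real system: the labels, scores, threshold, and action names are all invented. The point is structural: when a classifier's confidence is below a threshold, such as when a spade might be misread as a rifle, the decision is deferred to a human operator rather than acted on autonomously.

```python
# Hypothetical decision policy: all names and values are illustrative.
CONFIDENCE_THRESHOLD = 0.95

def decide(label, confidence):
    """Map a classifier's (label, confidence) pair to an action."""
    if confidence < CONFIDENCE_THRESHOLD:
        # Ambiguous detections go to a human operator for review.
        return "defer_to_human"
    if label == "civilian":
        return "do_not_engage"
    # Even a high-confidence "combatant" call keeps a human in the loop.
    return "flag_for_review"

print(decide("combatant", 0.99))  # flag_for_review
print(decide("combatant", 0.60))  # defer_to_human
print(decide("civilian", 0.97))   # do_not_engage
```

Note that even this safeguard only shifts the problem: it assumes the confidence score itself is trustworthy, which the adversarial tests mentioned above call into question.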
Opinions: what do the experts say?
A number of inventors and tech experts are against robots gaining autonomy, arguing that such robots cannot be ethical or follow humanitarian law at all. Researchers from all over the world are coming together to discuss what should be done. Elon Musk, along with other AI pioneers, wrote an open letter to the UN calling for a ban on the manufacture of autonomous weapons; in it, the experts warned that developing such weapons would simply start a new arms race. Researchers have also signed a pledge not to work on robots that can attack without human oversight. But rather than banning a technology or pledging not to use something that has already been created, experts must recognize that the only practical way forward is an ethical framework, a set of laws to abide by. The UN, together with experts from the field of AI and organizations like Human Rights Watch, could draft laws and regulations governing the creation and use of AI weapons. While some agree that AI should be developed further, many remain baffled as to how to even approach such a situation.
While AI promises warfare with minimal collateral damage and fewer civilian casualties, the question remains: what if the technology goes out of control and turns rogue? There is still much to think through, and we are far from building what could be called the perfect AI weapon, but only by addressing these difficult questions can we pave the way forward.