Introduction
The 2018 National Defense Strategy identified Artificial Intelligence (AI) as an emerging technology that will change the character of war (Hoadley and Sayler 2019). Leaders within the Chinese military believe AI "will lead to a profound military revolution" (Kania, p. 8). In 2017, Vladimir Putin publicly stated that "the leader in this field will rule the world" (Simonite 2017). As nations pursue AI, the full impact of the technology remains largely speculative. Technical limitations, security concerns, political and bureaucratic obstruction, moral opposition, and humans' willingness to trust machines in matters of life and death all present obstacles to any nation wishing to pursue military applications of AI.
Definition
One obstacle to military AI applications, and to establishing rules governing them, has been defining what AI is. No official U.S. Government definition of AI currently exists. A definition from 1978 described AI as the automation of activities associated with human thought, such as learning, problem solving, and decision making (Hoadley and Sayler 2019). The Aegis air and missile defense system employs complex algorithms and automation, but does not meet the criteria for AI. Though operators can set Aegis to fire automatically, it employs a narrow, unchangeable set of algorithms. The ability to learn from experience, adapt, and self-improve is a defining characteristic of AI (Freedberg 2019). The development of a General AI capable of human-like thought across a broad spectrum of activities is not imminent. However, Narrow AI applications focused on specialized tasks are already emerging (Hoadley and Sayler 2019).
Potential
AI will likely affect "every aspect of warfighting, from movement to communication, logistics, intelligence, weapons, and people" (Clifford 2019). AI promises near instantaneous responses to the enemy, perfectly coordinated operations, and the ability to overwhelm an enemy's ability to react. Some claim AI will eliminate uncertainty on the battlefield and facilitate faster, better decision making. A report by the Joint Chiefs of Staff speculated that AI and autonomous systems will eventually remove humans from direct combat roles. It predicted a time when human involvement is limited to strategic decisions, while AI systems plan and execute tactical operations (Joint Chiefs of Staff 2016, p. 18). In 2017, Viktor Bondarev, chairman of the Russian Federation Council's Defense and Security Committee, similarly declared that AI will replace soldiers on the battlefield and pilots in aircraft, and will control vehicles (Tucker 2018). The game-changing capabilities that AI promises, and interest from near-peer competitors, have led some analysts to argue that development of autonomous military AI applications is a "tactical and strategic necessity," and even a moral obligation (Ryan 2017).
Funding and Research
The availability of funding for AI research is not an obstacle to AI development, though the sources of that funding may present some ancillary challenges. In 2015, Google, Apple, Facebook, IBM, Microsoft, and Amazon invested $54 billion in research and development. In 2020, the Defense Department has only $4 billion for research and development in AI (Clifford 2019). The commercial sector is driving the development of AI, while the military largely adapts the dual-use capabilities that emerge for military applications. The People's Liberation Army (PLA) similarly relies on tech firms, start-ups, and universities.
US corporations pursue AI research within China, while Chinese AI experts participate in AI research within the US. US corporations emphasize that the research conducted is non-military only. However, Tsinghua University launched the Military-Civil Fusion National Defense Peak Technologies Laboratory specifically to pursue and exploit dual-use technologies, including AI, for the PLA. In 2017, Xi Jinping added a requirement to the Communist Party's Constitution that all research done in China be shared with the PLA (Thiel 2019).
Drawing upon expertise regardless of nationality, and upon the development of commercial AI applications, will undoubtedly speed the development of AI capabilities in both China and the US. However, it will also expose vulnerabilities to potential adversaries and hinder the long-term preservation of any technological advantage that emerges.
Moral Objections
In 2017, the Vice Chairman of the Joint Chiefs of Staff, General Paul Selva, stated, "I do not think it is reasonable for us to put robots in charge of whether or not we take a human life" (Hoadley and Sayler 2019). Though no legal prohibitions exist, there is broad international opposition to the use of Lethal Autonomous Weapon Systems (LAWS). LAWS use sensors and algorithms to autonomously identify and destroy targets. LAWS would be able to function more rapidly, and in environments with degraded communications. These advantages may make the use of LAWS in wartime inevitable, particularly in conflicts with opponents possessing similar AI capabilities. Broader moral objections to the development of any military AI capability may also hinder development; such objections prompted Google to withdraw from an agreement with the Department of Defense (DoD).
Bureaucratic Obstacles
The DoD's slow acquisition process and vested interests may hinder timely implementation of AI advancements within the military. The 2016 National Defense Authorization Act established a Section 809 Panel to address challenges with the acquisition process. In January 2019, the Panel reported on the process's "direct effect on warfighting capability in a defense era defined by technological edge," and recommended changes to "keep pace with private-sector innovation" (Clifford 2019).
Military Acceptance
The military's willingness to embrace AI technology may also restrict AI adoption (Hoadley and Sayler 2019). Services may be inclined to reject some AI applications "if the technology threatens service-favored hardware or missions" (Hoadley and Sayler 2019). Poor expectation management is also a risk: overhyped capabilities may "diminish people's trust and reduce their willingness to use the system in the future" (Hoadley and Sayler 2019). Safety is a real concern when rapidly integrating new technologies, and failures could stall or slow AI adoption. If, as a result, the military pursues only incremental improvements to existing systems and concepts, rather than potentially game-changing advancements, it may soon find itself outmatched by adversaries whose pursuit of AI capabilities is bolder.
Limits of Technology
Some experts believe that the current crop of algorithms will reach their full potential within 10 years, and that further AI development will not be possible without significant new leaps in technologies like quantum computing and more efficient chips. Similar roadblocks have slowed AI progress in the past (Hoadley and Sayler 2019).
Security and Vulnerabilities
Securing the supply chain for all components used in military AI applications would be even more vital than for conventional systems, due to the greater risk of widespread, simultaneous, and catastrophic failure. This need is demonstrated by the Super Micro supply chain compromise, in which Chinese intelligence services directed a manufacturer to place malicious chips in hardware exported to the US over a two-year period ending in 2015 (Robertson and Riley 2018).
The development of AI for commercial purposes may reveal AI vulnerabilities in environments where failure would be less catastrophic than if they were discovered for the first time in a military application. However, hackers will have more access to those commercially available applications, and may be able to develop tools that affect related military applications as well.
While humans are prone to error, their mistakes are typically made individually, and they tend to be different each time. An AI system that fails through a programming error or external hacking has "the potential to fail simultaneously and in the same way, producing large-scale or destructive effects" (Scharre 2016, p. 23).
The proliferation of AI systems will increase the number of "hackable things," including moving vehicles and weapons. If an entire class of AI shares the same vulnerability, a successful hack could therefore have lethal effects at scale (Hoadley and Sayler 2019).
If an enemy were to steal the plans for an aircraft, it would still take them years to develop one themselves. However, AI is software. If AI code is stolen or captured, it could be reproduced and used by an enemy almost immediately (Hoadley and Sayler 2019). Since the code would be present in any system using it, the opportunities for capture would be significant. As a result, AI does not provide a long-lasting overmatch against any nation capable of exploiting captured code.
Dangers Posed by AI
AI involvement in warfare increases the likelihood of conflict and hinders peaceful resolution. "The speed of AI systems may put the defender at an inherent disadvantage, creating an incentive to strike first against an adversary with like capability" (Scharre, p. 26). Military AI in close proximity to adversary AI may also result in rapid accidental escalation (Hoadley and Sayler 2019).
AI can enhance the effectiveness of less expensive military equipment, potentially rendering some platforms obsolete. Numerous low-cost drones acting as a swarm could conceivably overwhelm an F-35 stealth fighter (Hoadley and Sayler 2019). This cost saving would disproportionately benefit smaller militaries and non-state actors, if they are able to capture or acquire the AI code. Nations pursuing AI may level the playing field with potential adversaries, removing rather than preserving their advantage.
If weapons are too fast, small, numerous, and complex, human decision makers will be unable to understand the situation sufficiently to change course. "AI systems could accelerate the pace of combat to a point that machine actions surpass the rate of human decision making, resulting in a loss of human control in warfare" (Hoadley and Sayler 2019).
If an AI application kills someone, who is responsible? Without a clear answer to this question, preventing or even discouraging war crimes in future conflicts may be impossible.
Mission Command
The value of AI in planning is that it will propose solutions that no human could conceive, and thereby surprise an adversary. A conference in April 2019 at the Army War College asked the following questions: Who is truly in command when an AI develops a plan too complicated for humans to understand? The commander incapable of understanding the plan? The AI? Those who input the variables under which the AI operates? Or the AI's original programmers? (Freedberg 2019). Failing to accept a plan because you don't understand it would disadvantage your forces against an adversary willing to trust their AI; yet in accepting it, you would lack the ability to detect whether the plan was going awry. Time-honored principles of command, and even principles of war, are brought into question. Massing forces at a decisive point, surprise, unity of command, and simplicity may all be discarded by an AI employing a mathematical proof impossible to comprehend (Freedberg 2019).
Impact on Soldiers
A robot may follow seemingly absurd orders without question. A human Soldier is inclined to hesitate, particularly if leaders can't explain the orders either. This is even more true in countries that place value on the individual. In the absence of LAWS, effective execution of AI-developed strategies may require Soldiers trained to robotically follow orders without question or understanding.
Entrusting their lives to incomprehensible planners will require a level of faith resembling religious devotion, as critical thinking will have to be set aside in favor of the machine's conclusions. Those Soldiers would simultaneously require the technical acumen to maintain the various technologies at their disposal. If the equipment fails, as all military equipment is inclined to do, the human Soldiers accustomed to relying on it will be wholly unaccustomed to planning and making decisions themselves. The characteristics best suited to such a scenario are an ill fit for western Soldiers.
Recognizing the challenges to mission command, DARPA is pursuing the development of explainable AI, to present AI strategies in ways human troops can understand, without sacrificing the cutting-edge capabilities AI offers. If this effort is unsuccessful the use of LAWS will likely become unavoidable.
Regardless, then-Vice Chairman of the Joint Chiefs of Staff General Paul Selva explained that, given that potential U.S. adversaries are pursuing LAWS, the military will be compelled to address the development of this class of technology in order to find its vulnerabilities. (For a full discussion of LAWS, see CRS Report R44466, Lethal Autonomous Weapon Systems: Issues for Congress, by Nathan J. Lucas.) (Hoadley and Sayler 2019)
Conclusion
"The fact that many technological developments will come from the commercial sector means that state competitors and nonstate actors will also have access to them, a fact that risks eroding the conventional overmatch to which our Nation has grown accustomed" (Hoadley and Sayler 2019).
An AI conflict may not favor the technologically advanced nation capable of developing an AI, but rather the nation able to manufacture larger numbers of cheaper weapons and steal an AI capable of directing them.