Running head: ARTIFICIAL INTELLIGENCE IN WEAPONRY

Artificial Intelligence in Weaponry

Student’s Name

Institution Affiliation

Introduction

The case study at hand, titled A.I. & Global Governance: When Autonomous Weapons Meet Diplomacy, examines the relationship between global governance and artificial intelligence, with a major focus on autonomous weapons in the context of diplomacy. The United Nations has also weighed in on artificial intelligence, chiefly because as the automation of weaponry increases, so will the arms race (Eugenio, 2021). The long-run impact of artificial intelligence remains unpredictable, with a significant number of experts foreseeing a situation in which artificial intelligence does more harm than good to international relations. Countries have been working around the clock to get ahead of one another in mastering this technology, especially in the weaponry sector.

Main Problems Connected to the Use of A.I.

The main problem identified in this case study is that the use of artificial intelligence in weaponry is likely to create a dangerous arms race characterized by rapid and indiscriminate proliferation. This poses a threatening long-term risk that humans will lose control over this force. Even with the accuracy of robots and their ability to reduce human involvement in wars, surrendering warfare to these machines is simply too dangerous. As the dehumanization process advances, the escalation of warfare will increase, shrinking the need for diplomacy in settling wars and misunderstandings between nations. Another problem with these machines is that they are likely to compromise the right to life. According to Eugenio (2021), the overall concern with this technology is its capability to compromise international humanitarian law (IHL), given that machines lack the human capability to see the big picture and judge real-life conditions. Because a machine cannot assess the whole scenario, its decision-making is neither proportional, discriminate, nor adequately assessed, since it acts only on coded commands.

Whereas mechanical systems are characteristically accurate and efficient, there will always be instances where they experience technical problems. At such points, human intervention becomes inevitable to address malfunctions, failures, and errors in the systems and the software code used (Eugenio, 2021). If the intervention is unsuccessful or the problem is not detected early enough, the consequences will be disastrous. It is also necessary to factor in cyber-attacks, which can tamper with the overall operation of a system. Non-state third parties can gain access to these systems and compromise the diplomatic standing of the countries involved. The rise in terrorism has also heightened fears about using artificial intelligence in machinery and weaponry.

The critical parts of the case study are the issue of handling A.I., how it can compromise humanity, and how it can have adverse effects such as weakening relationships between nations. Without proper regulation of this technology, particularly fully autonomous weapons, the sophistication of these components could prove disastrous (Eugenio, 2021). Because they are controlled by a programmed paradigm rather than human instinct and knowledge, these devices are likely to cause severe damage. They can also produce false alarms and mistakes, which can compromise world peace, since their actions might be interpreted differently by the nations attacked. The systems also lack self-learning, which would have been ideal for studying, analyzing, and understanding a situation before executing attacks.

Handling Artificial Intelligence

The pitfalls explained above can be countered if this technology is handled properly through careful management of its usage. Artificial intelligence can succeed if ideal measures are put in place to contain the worries that come with its use in weaponry. Proper planning and control will improve the success of these technologies, hence the need to ensure that the government has enough resources to control these systems. Military powers should have proper planning in place to establish an ideal paradigm for the use of A.I. in weaponry. Through such an approach, the use of artificial intelligence in the military will enhance the efficiency and economy of this sector as well as the country's overall tactical superiority.

The benefit of such technologies is that they can assist in operations where conditions are too unfavorable for human combatants, mitigating possible casualties in such areas. Proper use of A.I. can help in handling extreme activities such as deactivating bombs, clearing land mines, and taking part in rescue missions. Additionally, ethical use of these technologies can support other missions, such as penetrating enemy territories and exploring dangerous caves, without risking human lives. According to Allen and Wallach (2009), there is a need for a better focus on machine morality and computer ethics to ensure that the use of technology does not compromise human existence. Artificial intelligence and robotics should be sophisticated enough to detect an error and send an alert. Ethical use of technology should attain the overall goal of compensating for human error while reducing the chances of exposing humanity to danger. It should therefore be a component that reduces harm and losses to humanity, as that is the ultimate goal of advances in technology.

Conclusion

Nearly everything in the world has advantages and disadvantages, and artificial intelligence is no exception. The discussion above shows both the positive and negative aspects that come with the use of this technology. Artificial intelligence can be a good thing, but it boils down to the user: if the person in charge handles it accordingly, it can achieve the intended goal. The use of artificial intelligence, especially in weaponry, will have long-term and short-term effects on the international community and its relations. Those effects will be positive or negative depending on how these systems are used and managed, hence the need for a proper framework governing their deployment.

References

Allen, C., & Wallach, W. (2009). Moral machines: Teaching robots right from wrong. Choice Reviews Online, 46(08), 13-23. https://doi.org/10.5860/choice.

Eugenio, V. (2021). A.I. & Global Governance: When Autonomous Weapons Meet Diplomacy – United Nations University Centre for Policy Research. Cpr.unu.edu. Retrieved 22 April 2021, from https://cpr.unu.edu/publications/articles/ai-global-governance-when-autonomous-weapons-meet-diplomacy.html.