
Author: Mgr Magdalena Wrońska,
Jan Kochanowski University in Kielce (Poland)
The purpose of this study is to highlight the economic aspects of using artificial intelligence (AI) in cybersecurity within a high-responsibility sector such as defense. The economic risks arising from improper implementation of AI in defense and cybersecurity are multifaceted and can lead to significant financial and operational consequences. Incorrect deployment of AI algorithms, especially in sectors crucial to national security, can result in unpredictable complications [6]. As a result, these systems may become increasingly difficult to control, generating substantial operational and maintenance costs that may outweigh the initial technological benefits.
Artificial intelligence is one of the most rapidly developing technologies, with the potential to fundamentally transform nearly every aspect of operations [1]. An analysis of various definitions and concepts of AI reveals the key elements that characterize this field:
- Simulation of Human Intelligence: Most definitions emphasize that AI aims to mimic or replicate human intelligence in machines and computer systems [5].
- Learning and Adaptation Ability: AI systems are characterized by their ability to learn from experiences and adapt to new situations [10].
- Problem Solving: AI is designed to solve complex problems and make decisions in a manner similar to humans [1].
- Data Processing and Analysis: AI systems are capable of collecting, processing, and analyzing vast amounts of data to draw conclusions and make decisions [9].
- Interaction with the Environment: AI can respond to its environment, both physical and digital [11].
The costs of implementing AI in defense are significant and include technology and infrastructure expenditures as well as risk management. From an economic perspective, it is crucial that AI implementation processes are closely monitored and that systems are optimally adapted to specific defense requirements. Neglecting these aspects can lead to excessive financial burdens that exceed the planned benefits of the new technologies. One of the key problems in high-responsibility sectors such as defense is the failure to adapt AI systems to actual needs, which often results in inefficiency and cost escalation. The costs of implementing AI also depend largely on the chosen implementation model: companies can use ready-made solutions available on the market, which are cheaper but less flexible, or decide to build their own models, which requires larger investments and a longer implementation time [2]. Examples from the defense sector show that implementation periods of five months or more are the norm, especially for complex, customized AI systems.
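The trade-off between a ready-made solution and a custom-built model can be made concrete with a simple total-cost-of-ownership comparison. The sketch below is purely illustrative: the `total_cost` helper, the cost figures, and the three-year planning horizon are assumptions introduced here for the sake of the example, not data from the cited sources.

```python
# Hypothetical cost comparison of the two implementation models discussed above:
# an off-the-shelf (ready-made) AI solution versus a custom-built system.
# All figures are illustrative assumptions, not data from the cited sources.

def total_cost(upfront, monthly_operating, adaptation, months):
    """Total cost of ownership over a planning horizon of `months`."""
    return upfront + adaptation + monthly_operating * months

HORIZON_MONTHS = 36  # assumed three-year planning horizon

# Ready-made solution: low entry cost, recurring licence fees,
# extra spending needed to adapt it to specific defense requirements.
ready_made = total_cost(upfront=200_000, monthly_operating=40_000,
                        adaptation=150_000, months=HORIZON_MONTHS)

# Custom model: high initial investment and a longer implementation period
# (five months or more), but lower recurring costs once deployed.
custom = total_cost(upfront=1_200_000, monthly_operating=15_000,
                    adaptation=0, months=HORIZON_MONTHS)

print(f"Ready-made solution, 3 years: {ready_made:,.0f}")
print(f"Custom-built system, 3 years: {custom:,.0f}")
```

Under these assumed figures, the lower entry cost of the ready-made option is gradually offset by recurring licence and adaptation expenses, which illustrates why the choice of implementation model depends on the planning horizon rather than on the upfront price alone.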
Additionally, one of the main risks is that decision-makers do not sufficiently understand which problems AI is supposed to solve. Many AI projects fail because of inadequate data or a poor fit between the technology and real user needs. Organizations can spend significant resources on advanced algorithms that do not deliver tangible results, which ultimately leads to wasted resources [6].
Another major challenge is the operational cost of ensuring the security and control of autonomous systems. The need for a constant human presence in the decision-making process, required to avoid risks related to the reliability and ethics of using AI in armed conflicts, further increases operational expenses. Long-term savings can result from the automation of logistics processes, resource management, and intelligence analysis, but implementation costs are high enough to limit the scale of projects in the short term [6].
Another important factor influencing costs is the need for data infrastructure. In defense, as in other sectors, data collection and management are key, yet many companies report that the lack of consistent data systems significantly increases the cost of implementing AI. Challenges with data integration can lead to delays and additional spending on infrastructure modernization [8].
The economic risks resulting from the improper implementation of AI in the defense sector are therefore significant and can lead to serious financial and reputational losses.
A further problem is the risk of AI model errors, which can lead to inadequate decisions, especially in critical defense systems. For example, models can inadvertently discriminate against certain user groups or process data incorrectly, creating security gaps that adversaries can exploit. It is therefore crucial to implement broad risk-control mechanisms throughout the AI development cycle, allowing more effective identification of threats such as data errors or a lack of transparency in the decisions made by algorithms. Companies that neglect such mechanisms risk both their finances and their reputation, which in defense applications can lead to catastrophic consequences [8].
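One minimal form such a risk-control mechanism could take is an automated data-quality gate run before a model is trained or updated, blocking training data that is incomplete, duplicated, or skewed against particular user groups. The sketch below is a simplified illustration under assumptions made here: the `DataQualityReport` structure, the thresholds, and the group names are hypothetical and are not taken from the cited sources.

```python
# Minimal sketch of one risk-control mechanism mentioned above: an automated
# data-quality gate run before an AI model is trained or updated.
# The checks, thresholds, and field names are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class DataQualityReport:
    missing_ratio: float    # share of records with missing fields
    duplicate_ratio: float  # share of duplicated records
    group_coverage: dict    # share of records per user group

def check_training_data(report: DataQualityReport,
                        max_missing=0.05,
                        max_duplicates=0.02,
                        min_group_share=0.10) -> list[str]:
    """Return a list of findings; an empty list means the data may proceed."""
    findings = []
    if report.missing_ratio > max_missing:
        findings.append(f"too many missing values: {report.missing_ratio:.1%}")
    if report.duplicate_ratio > max_duplicates:
        findings.append(f"too many duplicates: {report.duplicate_ratio:.1%}")
    # Under-represented groups are a common source of discriminatory model errors.
    for group, share in report.group_coverage.items():
        if share < min_group_share:
            findings.append(f"group '{group}' under-represented: {share:.1%}")
    return findings

report = DataQualityReport(missing_ratio=0.08, duplicate_ratio=0.01,
                           group_coverage={"unit_a": 0.55, "unit_b": 0.07})
for finding in check_training_data(report):
    print("BLOCKED:", finding)
```

Gates of this kind address only one class of threat (data errors); transparency of algorithmic decisions and ongoing human oversight require separate mechanisms, as discussed above.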
The implementation of AI in the defense sector can contribute to increased innovation and stimulate economic development. AI has the potential to increase productivity not only in the defense sector itself but also in other sectors of the economy through effective process automation, data analysis, and decision support. Modern AI technologies, unlike earlier solutions, require lower capital outlays because they can draw on existing data and cloud services, which can accelerate adoption across industries [7]. The economic benefits of using AI in defense also include the growth of technology companies and the creation of new jobs in fields related to research, software development, and cybersecurity [3].
Inadequate implementation of AI is also associated with the possibility of data and privacy breaches. AI can be misused in cyberattacks, causing financial losses related to rebuilding systems and protecting information. Companies and military institutions that implement AI without proper security measures may become targets of attacks, which in the context of the war in Ukraine could have catastrophic consequences for the country's defense [4].
References:
[1] A. Jabłoński, M. Jabłoński, Sztuczna inteligencja (AI) w kształtowaniu cyfrowych modeli biznesu pozytywnie wpływających na zmiany klimatyczne, Wyższa Szkoła Bankowa w Poznaniu, Poznań 2021
[2] A. Singla, A. Sukharevsky, L. Yee and others, The state of AI in early 2024: Gen AI adoption spikes and starts to generate value, QuantumBlack, AI by McKinsey and McKinsey Digital, 2024
[3] B. Pavel, I. Ke, M. Spirtas and others, AI and Geopolitics: How Might AI Affect the Rise and Fall of Nations?, RAND Corporation, 2023
[4] D. Broom, AI: These are the biggest risks to businesses and how to manage them, World Economic Forum, 2023
[5] D.A. Agbaji, B.D. Lund, N.R. Mannuru, Perceptions of the Fourth Industrial Revolution and Artificial Intelligence Impact on Society, "arXiv (Cornell University)" 2023, vol. 1
[6] F.E. Morgan, B. Boudreaux, A.J. Lohn and others, Military Applications of Artificial Intelligence: Ethical Concerns in an Uncertain World, RAND Corporation, Santa Monica, Calif., 2020, p. 85
[7] J.T. Gonzales, Implications of AI innovation on economic growth: a panel data study, "Journal of Economic Structures" 2023, article no. 13
[8] R. Doucette, S. Hilaire, V. Marya and others, Digital: The next horizon for global aerospace and defense, McKinsey & Company, 2021
[9] K. Różanowski, Sztuczna inteligencja: rozwój, szanse i zagrożenia, "Zeszyty Naukowe Warszawskiej Wyższej Szkoły Informatyki" 2007, nr 2
[10] R. Stępień, Możliwości zastosowania sztucznej inteligencji i blockchain w działalności archiwalnej. Przegląd doświadczeń zagranicznych, "Archeion" 2021, nr 122
[11] W. Robaczyński, Sztuczna inteligencja–przedmiot badań czy podmiot kontrolowany. Prawo wobec rozwoju technologii, "Kontrola Państwowa" 2022, nr 407