1.4.3. Modification of the typical character of threats
Properties of AI such as efficiency, scalability and capacities surpassing those of humans may enable particularly effective attacks. Attackers typically face a trade-off between the frequency and scale of their attacks and their effectiveness. For example, spear phishing, which tailors messages to individual targets, is more effective than classical phishing, but it is relatively costly and cannot be conducted en masse. More generic phishing attacks remain profitable despite their very low success rates, simply because of their scale. By improving the efficiency and scalability of certain attacks, including spear phishing, AI systems can relax these trade-offs. Moreover, efficiency and scalability, particularly in target identification and analysis, also enable finely targeted attacks. Attackers are often interested in tailoring their attacks to the characteristics of their targets, aiming at victims with certain properties, such as significant assets or an association with particular political groups; yet they must usually balance effectiveness and scale against targeting precision. A further example is the use of drone swarms that deploy facial recognition technology to attack specific individuals in a crowd, instead of resorting to less targeted forms of violence.
Cyberattacks are increasingly alarming in both their complexity and their volume, a trend aggravated by a widespread lack of awareness and understanding of actual security needs. This lack of support explains the insufficient dynamism, attention and willingness to commit funds and resources to cybersecurity in many organizations. To limit the impact of cyberattacks, the following recommendations have been proposed (Brundage et al. 2018):
– decision-makers should closely cooperate with technical researchers to study, prevent and limit the potential misuse of AI;
– researchers and engineers in the AI field should take the dual-use nature of their work seriously, by allowing misuse-related considerations to influence research priorities and norms, and by proactively reaching out to the relevant actors when harmful applications are foreseeable;
– public authorities should actively try to broaden the range of stakeholders and experts in the field that are involved in the discussions related to these challenges.
AI is a broad domain to be explored by cybersecurity researchers and experts. As the capabilities of intelligent systems increase, they will first match and then surpass human capabilities in many fields. In cybersecurity, AI can be used to strengthen the defenses of computer infrastructure. It is worth noting that, as AI advances into fields previously considered reserved for humans, security threats will grow in variety and sophistication compared with currently existing techniques. Defending against these threats is difficult, as cybersecurity experts can themselves be targeted by spear phishing attacks. Consequently, preparing for the potential misuses of AI associated with this transition is an important task. Intelligent techniques aim to identify attacks in real time, with little or no human interaction, and to stop them before they cause damage. In conclusion, AI can be considered a powerful tool for solving cybersecurity problems.
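The idea of real-time, largely autonomous attack detection described above can be illustrated with a minimal anomaly-detection sketch. This is a hypothetical toy example, not any specific system from the intrusion detection literature: it keeps a sliding window of per-interval request counts and flags an interval whose count deviates sharply from the recent baseline, without any human in the loop.

```python
from collections import deque
import statistics


class RateAnomalyDetector:
    """Toy anomaly-based detector: flag traffic bursts that deviate
    strongly from a sliding baseline of recent request counts."""

    def __init__(self, window=20, threshold=3.0):
        self.window = deque(maxlen=window)  # recent per-interval counts
        self.threshold = threshold          # z-score alert threshold

    def observe(self, count):
        """Record one interval's request count; return True if anomalous."""
        if len(self.window) >= 5:  # require a minimal baseline first
            mean = statistics.mean(self.window)
            stdev = statistics.pstdev(self.window) or 1.0  # avoid div by 0
            anomalous = (count - mean) / stdev > self.threshold
        else:
            anomalous = False
        self.window.append(count)
        return anomalous


detector = RateAnomalyDetector()
baseline = [100, 102, 98, 101, 99, 103, 97, 100]  # normal traffic
alerts = [detector.observe(c) for c in baseline + [5000]]  # then a burst
print(alerts[-1])  # → True: the burst stands out against the baseline
```

A deployed system would of course use far richer features (as in the SVM-, Bayesian- and fuzzy-logic-based intrusion detection systems cited below), but the principle is the same: learn a model of normal behavior and raise an alert, in real time, on significant deviations.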
Agarwal, R. and Joshi, M.V. (2000). A new framework for learning classifier models in data mining [Online]. Available at: https://pdfs.semanticscholar.org/db6e/1d67f7912efa65f94807dc81b24dea2de158.pdf [Accessed January 2019].
Ahlan, A.R., Lubis, M., and Lubis, A.R. (2015). Information security awareness at the knowledge-based institution: Its antecedents and measures. Procedia Computer Science, 72, 361–373.
Amodei, D., Olah, C., Steinhardt, J., Christiano, P., Schulman, J., and Mané, D. (2016). Concrete problems in AI safety [Online]. Cornell University. Available at: https://arxiv.org/abs/1606.06565.
Anderson, D., Frivold, T., and Valdes, A. (1995). Next-generation intrusion detection expert system (NIDES). Report, US Department of the Navy, Space and Naval Warfare Systems Command, San Diego.
Aslahi-Shahri, B.M., Rahmani, R., Chizari, M., Maralani, A., Eslami, M., Golkar, M.J., and Ebrahimi, A. (2016). A hybrid method consisting of GA and SVM for intrusion detection system. Neural Computing and Applications, 27(6), 1669–1676.
Bace, R.G. (2000). Intrusion Detection. Sams Publishing, Indianapolis.
Balan, E.V., Priyan, M.K., Gokulnath, C., and Devi, G.U. (2015). Fuzzy based intrusion detection systems in MANET. Procedia Computer Science, 50, 109–114.
Barth, C.J. and Mitchell, J.C. (2008). Robust defenses for cross-site request forgery. Proceedings of the 15th ACM Conference. CCS, Alexandria.
Biggio, B., Nelson, B., and Laskov, P. (2012). Poisoning attacks against support vector machines. 29th International Conference on Machine Learning. ICML, Edinburgh, 1467–1474.
Capgemini Research Institute (2019). Reinventing cybersecurity with artificial intelligence: The new frontier in digital security [Online]. Available at: https://www.capgemini.com/wp-content/uploads/2019/07/AI-in-Cybersecurity_Report_20190711_V06.pdf.
Chebrolu, S., Abraham, A., and Thomas, J.P. (2005). Feature deduction and ensemble design of intrusion detection systems. Computers & Security, 24(4), 295–307.
Chen, W.-H., Hsu, S.-H., and Shen, H.-P. (2005). Application of SVM and ANN for intrusion detection. Computers & Operations Research, 32(10), 2617–2634.
Cova, M., Balzarotti, D., Felmetsger, V., and Vigna, G. (2007). Swaddler: An approach for the anomaly-based detection of state violations in web applications. Proceedings of the 10th International Symposium on Recent Advances in Intrusion Detection. RAID, Gold Coast.
Cova, M., Kruegel, C., and Vigna, G. (2010). Detection and analysis of drive-by-download attacks and malicious JavaScript code. Proceedings of the 19th International Conference on the World Wide Web. WWW, Raleigh.
Crockford, D. (2015). JSON [Online]. Available at: https://github.com/douglascrockford/JSON-js/blob/master/README [Accessed March 2018].
Cunningham, R. and Lippmann, R. (2000). Detecting computer attackers: Recognizing patterns of malicious stealthy behavior. Presentation, CERIAS, Anderlecht.
Ertoz, L., Eilertson, E., Lazarevic, A., Tan, P.N., Kumar, V., Srivastava, J., and Dokas, P. (2004). MINDS – Minnesota intrusion detection system. Next Generation Data Mining, August, 199–218.
Fortuna, C., Fortuna, B., and Mohorčič, M. (2007). Anomaly detection in computer networks using linear SVMs [Online]. Available at: http://ailab.ijs.si/dunja/SiKDD2007/Papers/Fortuna_Anomaly.pdf.
Hajimirzaei, B. and Navimipour, N.J. (2019). Intrusion detection for cloud computing using neural networks and artificial bee colony optimization algorithm. ICT Express, 5(1), 56–59.
Hamamoto, A.H., Carvalho, L.F., Sampaio, L.D.H., Abrão, T., and Proença Jr., M.L. (2018). Network anomaly detection system using genetic algorithm and fuzzy logic. Expert Systems with Applications, 92, 390–402.
Han, X., Xu, L., Ren, M., and Gu, W. (2015). A Naive Bayesian network intrusion detection algorithm based on principal component analysis. 7th International Conference on Information Technology in Medicine and Education. IEEE, Huangshan.
Heckerman, D. (2008). A tutorial on learning with Bayesian networks. Innovations in Bayesian Networks, Holmes, D.E. and Jain, L.C. (eds). Springer, Berlin, 33–82.