New threats and risks: How does the expansion of AI affect cybersecurity?

By Alba Huerga
10 Jun 2024

2023 will always be remembered as the year of the democratization of AI in our society. Years before, this technology had been introduced in a more discreet way in some computer systems or data collection and analysis processes. However, since the advent of Chat GPT, its expansion is spreading like wildfire and more and more companies are considering investing in AI to improve their efficiency and productivity, or to save costs.

Today, AI applications can be found in different fields and business areas, from process automation to customer services, financial analysis or product development. Some brands even use AI to simulate scenarios and optimize the design of new products.

According to the “Talent Intelligence” report by OBS Business School, in Spain more than 9% of companies use this kind of solution. In the case of large companies, the figure rises to 40.7%. This data shows the development and training opportunities in this area, but also the possibility of new threats associated with AI, especially in the field of cybersecurity and data protection.

Poor AI performance can trigger new security breaches and vulnerabilities. In this context, it is essential to shed light on an intriguing question: What should CISOs do when confronted with new AI threats and risks?

AI is being targeted by cybercriminals

The cybersecurity consequences of unattended AI can be severe for organizations. Not having an advanced threat detection, investigation and response (TDIR) system in place can lead to significant data security risks: data leakage or cyber poisoning, intellectual property theft, supply chain attacks, membership inference or manipulation of inputs through the injection of prompts.

During the Gartner® Security & Risk Management Summit 2024, Gartner VP Analyst Bart Willemsen recalled that, in 2023, two out of five organizations suffered an AI breach in their systems; and one in four were victims of malicious attacks.

If we take into account data from a previous survey conducted by the same consulting firm, “AI in Organizations Survey” Gartner®, in 2021 73% of organizations already had hundreds or thousands of AI models deployed in their work processes, Data Analytics and Fraud Detection systems, or Robotic Process Automation (RPA), etc.

Undoubtedly, these are many systems susceptible to being breached or compromised. 

However, about 68% of the companies surveyed believe that AI has more advantages than disadvantages. This is precisely why most companies opt for active risk, privacy, and security controls in their AI projects.

Organizations implementing this cutting-edge technology are prioritizing Information Security to ensure privacy and prevent AI risks.

Infografía Gartner

About 66% of organizations have a specific working group to address AI risks and increase the resilience of these systems against attacks. Having an external team specialized in detection, analysis and response is often the most cost-effective and secure option to prevent threats and avoid potential cyber-attacks on their AI systems.

How can CIOS and CISOS cope with new AI risks?

During the latest Gartner® Security & Risk Management Summit, it was discussed that a new culture of awareness and specific protocols should be implemented to prevent cyber-attacks on AI.

It is important to develop awareness programs for the teams in charge of implementing and managing AI in the organization, but above all, it is essential to activate the AI TRiSM “Artificial Intelligence Trust, Risk and Security Management”.

This is a framework developed by Gartner® to help organizations address the challenges associated with the adoption and use of AI. TRiSM is a solution that assists in managing risks and increasing the reliability and outcomes of this technology.

The AI TRiSM can dramatically improve AI safety and outcomes if approached as a team sport involving professionals from different disciplines and departments.

CISOs must communicate actively and strategically with their AI teams. Only 51% of professionals dedicated to the management and implementation of this technology are concerned about the risks associated with it in terms of data and information.

By 2026, companies that practice transparency around AI, trust and security issues will see a 150% improvement in their business goals and user acceptance (compared to 2022 results).

Would you like to discover how to protect your AI from cyber-attacks? Increase your organization’s level of digital maturity by introducing security controls early in the lifecycle of your AI system. Ensure that your platforms powered by this technology are monitored by an external multidisciplinary team.

Contact us at [email protected] and receive advice from our threat management team.

 

Comments are closed.