Regular global use of generative AI has nearly doubled over the past year, according to McKinsey, and its rapid adoption has created a lucrative new target for cybercriminals. Although “off-the-shelf” solutions account for much of this adoption, organizations, recognizing the power of tailored, business-specific responses, are devoting a growing share of their budgets to training their own AI models.
Continued innovation, such as agentic AI, means adoption is only increasing. The considerable autonomy of agentic AI allows it to make decisions, plan actions and learn from experience within the specific context of a business, making it applicable across business functions.
However, amid the excitement around AI, considerable cybersecurity risks are emerging, and they too often go unaddressed. Whenever a business adopts a new software solution, it introduces a new attack vector for cybercriminals. The problem with internally developed AI models is that they are essentially a repository of a company’s most valuable data, ranging from intellectual property to customer and employee data to trade secrets, which makes them a very attractive target.
That software runs on hardware, most likely hosted in a data center. Business leaders must therefore make sure they have the right tools to control every aspect of their network and keep sensitive company data secure across any new attack vectors they introduce.
CEO and co-founder of Goldilocks.
The challenge of existing frameworks
The fact is that many companies’ existing security configurations are simply not fit for purpose. For years, IT departments have treated cybersecurity as a compliance exercise rather than a means of protecting company data. This has led to an over-reliance on perimeter defenses and single sign-on solutions, which can create a false sense of security in organizations that equate compliance with security.
Software solutions and more traditional approaches to data security, such as firewalls, still have their place in protecting an organization’s data, but deeper defenses are required to keep operations running smoothly. AI, while powerful, is still software running on hardware that typically sits in a data center. Data centers are complex and sensitive environments: factors such as energy requirements, cooling systems and physical security make these facilities prime targets. In addition, the nature of AI development and deployment requires frequent access and updates, which demands strict control over who can access these systems and when. Organizations must have the right framework in place so that their AI models operate correctly and are protected at every operational level.
Physical segmentation: establishing control and defense
Many organizations already have some of the required components in place. What most lack is a first and last layer of defense, which can be established through physical network segmentation. Using a hardware-based approach, physical network segmentation lets users segment any digital asset remotely, instantly and without using the Internet. At the press of a button, from anywhere in the world, organizations can physically isolate a chosen segment from the wider network, disconnecting it from the Internet. The technology acts as a gatekeeper for AI, controlling access so that its benefits can be reaped safely. For businesses using AI, it can provide the following benefits:
1. Improved security and reduced risk
In the context of an AI model, this type of control acts as a gatekeeper, preventing a company’s AI from being poisoned or used for malicious purposes.
Because it does not rely on an Internet connection, physical network segmentation can be used to disconnect the model entirely, preventing cyber-attacks and unwanted access. This hides assets and strengthens an organization’s existing defense in depth. For AI models, segmentation can keep components offline until they are needed, significantly reducing the window of time a hacker has to gain access to the software.
Organizations may be hesitant to take this approach, believing it would disrupt their operations, but that doesn’t have to be the case. The key is to implement a smart, well-thought-out schedule. A generative AI model does not need to be connected to the Internet 24/7 to function properly: a connection is only required for the short window in which a user sends a prompt and the response is returned. Once the response has been generated and delivered, the model can be disconnected again. That window is nowhere near enough time for a cybercriminal to clone the model and get their hands on the company’s sensitive data. In terms of user experience, the time it takes to connect and disconnect should be short enough that users never notice a delay.
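To make the scheduling idea concrete, here is a minimal Python sketch of a just-in-time connection window. The SegmentController class, its connect()/disconnect() methods and the answer_prompt() helper are illustrative assumptions rather than any real vendor API; they simply show a model’s segment being brought online for a single request and physically isolated again as soon as the response has been produced.

```python
# A minimal sketch of just-in-time connection scheduling. All names here
# (SegmentController, connect, disconnect, answer_prompt) are hypothetical
# stand-ins for a hardware segmentation controller's interface.
from contextlib import contextmanager


class SegmentController:
    """Stand-in for a physical segmentation device's control API."""

    def __init__(self, segment_id: str):
        self.segment_id = segment_id

    def connect(self) -> None:
        print(f"Segment {self.segment_id}: link enabled")

    def disconnect(self) -> None:
        print(f"Segment {self.segment_id}: link physically isolated")


@contextmanager
def online_window(segment: SegmentController):
    """Bring the AI segment online only for the duration of one request."""
    segment.connect()
    try:
        yield
    finally:
        # Always re-isolate the segment, even if inference fails.
        segment.disconnect()


def answer_prompt(prompt: str, segment: SegmentController) -> str:
    with online_window(segment):
        # Placeholder for the real model call; the segment is reachable
        # only inside this block.
        return f"response to: {prompt}"


if __name__ == "__main__":
    ai_segment = SegmentController("ai-model-cluster")
    print(answer_prompt("Summarise Q3 sales", ai_segment))
```

The context manager is the important design choice in this sketch: it guarantees the segment is re-isolated on every code path, so the online window never stays open by accident.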
2. Help with regulatory compliance
Governments around the world are adapting their regulations to the sensitivity of data. With AI models hosting such an array of sensitive data, all eyes are on companies to prove they are doing everything possible to prevent an attack or breach. In the absence of AI-specific regulations, it can be difficult to know where to start. Physical network segmentation supports overall compliance, because there is no stronger demonstration of effort than keeping sensitive data completely off the Internet, or physically segregating it in the event of an attack.
3. Effective incident response and recovery
In the event of a cyber-attack, reactive network segmentation can stop the attack from spreading by quickly isolating compromised assets and data, cutting off further access for the hackers. During recovery, teams can quickly reconnect previously isolated segments that are known to be safe, helping to bring AI models and other services back online as quickly as possible.
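As an illustration of that containment-then-recovery flow, here is a short, hedged Python sketch. The controller class, the segment names and the verified_safe() check are hypothetical assumptions, not a real incident-response tool: the point is simply that compromised segments are cut off immediately, while only segments verified as clean are reconnected.

```python
# A minimal sketch of reactive isolation and staged recovery; the controller
# API, segment names and the verified_safe() check are illustrative
# assumptions, not a real product interface.


class SegmentController:
    def __init__(self, name: str):
        self.name = name

    def connect(self) -> None:
        print(f"{self.name}: reconnected")

    def disconnect(self) -> None:
        print(f"{self.name}: physically isolated")


def contain(segments: dict[str, SegmentController], compromised: set[str]) -> None:
    """Immediately isolate every segment believed to be compromised."""
    for name in compromised:
        segments[name].disconnect()


def staged_recovery(segments: dict[str, SegmentController], verified_safe) -> None:
    """Reconnect only segments an out-of-band check has confirmed as clean."""
    for name, segment in segments.items():
        if verified_safe(name):
            segment.connect()


if __name__ == "__main__":
    segs = {n: SegmentController(n) for n in ("ai-models", "finance", "backups")}
    contain(segs, {"finance"})
    staged_recovery(segs, verified_safe=lambda n: n != "finance")
```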
Looking to the future
With more and more AI models trained in-house, cybercriminals will most likely start targeting these sensitive data repositories. Once they gain access, they can wreak all sorts of havoc: cloning the data, poisoning the model to generate harmful responses, or locking it down with ransomware, causing significant damage to the company.
Organizations must be able to confidently harness the power of AI without compromising security. By implementing a framework that enables control of individual network zones through segmentation, business leaders can not only mitigate threats but also establish effective response and recovery processes while maintaining performance across the enterprise.
This article was produced as part of TechRadarPro’s Expert Insights channel, where we feature the best and brightest minds in today’s technology industry. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc.