With Generative AI comes risk. And in large part, it starts with data. Where is it, and who has access to it? Provisions Group Chief Technology Officer Eric Hendrickson recently presented at the 2024 Nashville Innovation Summit in Nashville, TN, where he shared the inherent cybersecurity risks that come with Gen AI and how to mitigate them.
Read on to learn about four types of Gen AI cyber risks:
Risk 1: Backdoors and Data Poisoning
When you move data into a model, its security context can change, causing application entitlements to get lost. Reviewing your data sources and materials can help, as can early testing and validation.
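As a rough illustration of that kind of review, here is a minimal sketch of a pre-ingestion gate that only admits records from an approved source and that pass basic integrity checks before they reach a training pipeline. The source names and limits below are hypothetical, not a complete defense.

```python
# Minimal sketch of a pre-ingestion gate: every record must come from an
# approved source and pass basic integrity checks before it can be used
# for training. Source names and thresholds here are illustrative only.

APPROVED_SOURCES = {"hr_wiki", "support_kb", "public_docs"}  # hypothetical allowlist
MAX_RECORD_LENGTH = 10_000  # reject suspiciously large records

def is_clean(record: dict) -> bool:
    """Return True only if the record passes source and integrity checks."""
    if record.get("source") not in APPROVED_SOURCES:
        return False                      # unknown origin: possible poisoning vector
    text = record.get("text", "")
    if not text or len(text) > MAX_RECORD_LENGTH:
        return False                      # empty or oversized payloads get dropped
    return True

def filter_training_data(records: list[dict]) -> list[dict]:
    """Keep only records that pass the gate; report what was rejected."""
    kept = [r for r in records if is_clean(r)]
    print(f"kept {len(kept)} of {len(records)} records")
    return kept
```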
Risk 2: Overreliance and Agency
It's important to implement pre-training and training safeguards to avoid data poisoning, and to align your data's scope with actual need. It's also wise to verify the model's answers against your sources, and to be mindful of how you set up data automation.
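One way to make "verify answers against sources" concrete is a grounding check before an answer is trusted. The sketch below uses a deliberately simple term-overlap heuristic; the function name and threshold are assumptions for illustration, and production systems would use stronger checks.

```python
# Minimal sketch of verifying a model's answer against its cited source
# before trusting it. The overlap heuristic is deliberately simple and
# illustrative; real grounding checks would be more robust.

def answer_is_grounded(answer: str, source_text: str, min_overlap: float = 0.5) -> bool:
    """Flag answers whose key terms don't appear in the cited source."""
    answer_terms = {w.strip(".,").lower() for w in answer.split() if len(w.strip(".,")) > 4}
    if not answer_terms:
        return False
    matched = sum(1 for term in answer_terms if term in source_text.lower())
    return matched / len(answer_terms) >= min_overlap

answer = "The refund policy allows returns within 30 days of purchase."
source = "Customers may return items within 30 days of purchase for a refund."
print(answer_is_grounded(answer, source))  # True here; unsupported answers return False
```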
Risk 3: Gateways: DoS, Plugins, and Tools
LLMs are subject to unique denial-of-service (DoS) attacks, and plugins can inadvertently provide backdoors. It's important to protect against these attacks with regular input validation and sanitization, resource caps, and queue maintenance.
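To show how those three controls might fit together, here is a minimal sketch of a gateway in front of an LLM endpoint: it validates and sanitizes prompts, caps their size, and bounds the request queue so a flood of expensive prompts sheds load instead of exhausting the service. All limits and names are illustrative assumptions.

```python
# Minimal sketch of a gateway guarding an LLM endpoint: input validation,
# a per-request size cap, and a bounded queue so floods of expensive
# prompts can't exhaust the service. Limits shown are illustrative.

import queue

MAX_PROMPT_CHARS = 4_000      # reject oversized prompts outright
MAX_QUEUE_DEPTH = 100         # bound pending work instead of growing forever
pending = queue.Queue(maxsize=MAX_QUEUE_DEPTH)

def validate_prompt(prompt: str) -> str:
    """Basic sanitization: enforce the size cap and strip control characters."""
    if len(prompt) > MAX_PROMPT_CHARS:
        raise ValueError("prompt exceeds size cap")
    return "".join(ch for ch in prompt if ch.isprintable() or ch in "\n\t")

def enqueue_request(prompt: str) -> bool:
    """Admit a request only if it validates and the queue has room."""
    try:
        pending.put_nowait(validate_prompt(prompt))
        return True
    except (ValueError, queue.Full):
        return False   # shed load rather than letting the backlog grow unbounded

print(enqueue_request("Summarize today's support tickets."))  # True
print(enqueue_request("x" * 10_000))                          # False: over the cap
```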
Risk 4: Theft Problems: Data Models
The very nature of data in a model presents a problem: the data lives inside the model itself, and LLMs can be exfiltration targets. Data loss prevention (DLP) tools are unfortunately not yet tuned for LLMs, so it's helpful to take precautions such as minimizing access. Proper training and log monitoring can also help.
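As a rough sketch of minimizing access and keeping an audit trail, the example below gates model queries by role and logs every call so unusual activity can be spotted later. The roles, logger name, and placeholder response are hypothetical.

```python
# Minimal sketch of access minimization and audit logging around a model
# endpoint: only allow-listed roles may query, and every request is logged
# so unusual volume can be spotted later. Roles and names are hypothetical.

import logging

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("llm.audit")

ALLOWED_ROLES = {"analyst", "support_agent"}   # hypothetical least-privilege list

def query_model(user: str, role: str, prompt: str) -> str:
    """Gate model access by role and leave an audit trail of every call."""
    if role not in ALLOWED_ROLES:
        audit_log.warning("denied: user=%s role=%s", user, role)
        raise PermissionError("role not permitted to query the model")
    audit_log.info("query: user=%s role=%s prompt_len=%d", user, role, len(prompt))
    return "(model response placeholder)"      # stand-in for the real model call

query_model("jdoe", "analyst", "List open incidents from last week.")
```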
Here's a snippet of what Eric had to say about data security at the Innovation Summit. For more information on cyber risk mitigation and data security, or if you'd like to connect for a tailored security review, please visit us here.
Eric Hendrickson, CTO, speaking at the 2024 Nashville Innovation Summit.