“Black Hat USA 2024 Insights: The Growing Danger of Lethal AI Attacks”

Vishal Singh
3 Min Read

According to a new AI Threat Landscape Report from cybersecurity company HiddenLayer, an astounding 98% of IT insiders agree different AI models are necessary for company success.

Head of threat intelligence at HiddenLayer Chloé Messdaghi told Cybernews that attackers—financial or otherwise motivated—are very aware of companies’ reliance on artificial intelligence and are actively creating strategies to capitalize on it.

“We’re kind of caught up in a game,” Messdaghi said during the Black Hat conference in Las Vegas.

While most businesses use artificial intelligence (AI) for everyday operations, not all Chief Information Security Officers (CISOs) are aware of it, therefore compromising the security posture of the enterprises. For example, Messdaghi cited a circumstance when a chief of cybersecurity was ignorant of the fact that his organization made about two thousand unique AI model variants.

Driven by motives ranging from intellectual property theft to competition hampering, attackers have developed several ways to use artificial intelligence for harmful goals like data poisoning, model evasion, and model theft. Though generative artificial intelligence is just a few years old, firms like ByteDance, the owner of TikHub, have developed their large language model (LLM) using ChatGPT’s application programming interface (API), therefore attaining a competitive edge.

Conversely, financially driven hackers might target generative AI filters with prompt or code injection attacks or pervert AI artifacts in supply chain assaults through code execution, malware distribution, and lateral movement. There is great risk involved in upsetting artificial intelligence algorithms controlling self-driving cars.

Should their AI models be compromised, Messdaghi emphasizes the areas of healthcare, military, and finance most likely to suffer. For loan approvals, for instance, biased or corrupted decision-making could have major negative effects on society and the economy. A flawed AI model might misdiagnose a patient in healthcare, therefore compromising their health or maybe resulting in death. Likewise, terrorists acquiring access to military-grade AI-powered systems—such as those utilized for drone operations—could have fatal results.

As more businesses apply artificial intelligence models, Messdaghi projects a notable rise in adversarial assaults against AI. Attackers are unlikely to overlook the chance to take advantage of this vector, particularly given many businesses lack awareness of its weaknesses.

Companies have to change with the changing threat environment if they want to safeguard end users and clients. Messdaghi advises red team training, awareness of AI-based exposure, search for variations in AI output, and bettering of data scientist, developer, and security team communication.

Share This Article
Follow:
👋 Hello, I’m Vishal ! As a dedicated expert in Crypto, Finance, Education, Apps & Games, and Making Money Online, I’m committed to providing you with reliable, insightful, and up-to-date information. My goal is to empower you with clear, actionable advice and transparent analysis to help you make informed decisions in today’s dynamic digital landscape. Trustworthy content and genuine value are my top priorities—let’s navigate this journey together! 🚀💰📚
Leave a comment

Leave a Reply

Your email address will not be published. Required fields are marked *