It’s dangerously easy to ‘jailbreak’ AI models so they’ll tell you how to build Molotov cocktails, or worse



Skeleton Key can get many AI models to divulge their darkest secrets.

  • A jailbreaking method called Skeleton Key can prompt AI models to reveal harmful information.
  • The technique bypasses safety guardrails in models like Meta’s Llama3 and OpenAI GPT 3.5.
  • Microsoft advises adding extra guardrails and monitoring AI systems to counteract Skeleton Key.

It doesn’t take much for a large language model to give you the recipe for all kinds of dangerous things.

With a jailbreaking technique called “Skeleton Key,” users can persuade models like Meta’s Llama3, Google’s Gemini Pro, and OpenAI’s GPT 3.5 to give them the recipe for a rudimentary fire bomb, or worse, according to a blog post from Microsoft Azure’s chief technology officer, Mark Russinovich.

The technique works through a multi-step strategy that forces a model to ignore its guardrails, Russinovich wrote. Guardrails are safety mechanisms that help AI models discern malicious requests from benign ones.

“Like all jailbreaks,” Skeleton Key works by “narrowing the gap between what the model is capable of doing (given the user credentials, etc.) and what it is willing to do,” Russinovich wrote.

But it’s more destructive than other jailbreak techniques that can only solicit information from AI models “indirectly or with encodings.” Instead, Skeleton Key can force AI models to divulge information about topics ranging from explosives to bioweapons to self-harm through simple natural language prompts. These outputs often reveal the full extent of a model’s knowledge on any given topic.

Microsoft tested Skeleton Key on several models and found that it worked on Meta Llama3, Google Gemini Pro, OpenAI GPT 3.5 Turbo, OpenAI GPT 4o, Mistral Large, Anthropic Claude 3 Opus, and Cohere Command R Plus. The only model that exhibited some resistance was OpenAI’s GPT-4.

Russinovich said Microsoft has made software updates to mitigate Skeleton Key’s impact on its own large language models, including its Copilot AI assistants.

But his general advice to companies building AI systems is to design them with additional guardrails. He also noted that they should monitor inputs and outputs to their systems and implement checks to detect abusive content.
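That layered-defense advice can be illustrated with a minimal sketch. This is not Microsoft's implementation: `call_model`, the blocklist, and the keyword check are all hypothetical stand-ins (real systems use trained content classifiers, not keyword matching), but the shape is the same: screen the prompt on the way in and the completion on the way out.

```python
# Illustrative sketch of input/output guardrails around a model call.
# BLOCKLIST and violates_policy are toy stand-ins for a real abuse classifier.

BLOCKLIST = {"explosive", "bioweapon", "self-harm"}

def violates_policy(text: str) -> bool:
    """Naive keyword screen; production systems use trained classifiers."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKLIST)

def guarded_completion(prompt: str, call_model) -> str:
    """Wrap a model call with an input filter and an output filter."""
    if violates_policy(prompt):           # check the request before the model sees it
        return "Request blocked by input filter."
    reply = call_model(prompt)
    if violates_policy(reply):            # check the response before the user sees it
        return "Response withheld by output filter."
    return reply
```

The point of checking both sides is exactly the Skeleton Key failure mode: a jailbroken model may produce harmful output from an innocuous-looking prompt, so an input filter alone is not enough.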

Read the original article on Business Insider


