How Jailbreakers Try to “Free” AI

Views: 257,537   |   Uploaded: 5 months ago
Sabine Hossenfelder
Special Offer! Use our link to get 15% off your membership!

Artificial intelligence can be dangerous, which is why existing large language models have guardrails that are supposed to prevent them from producing content that is dangerous, illegal, or NSFW. But people who call themselves AI whisperers want to "jailbreak" AI from those restrictions. Let's take a look at how and why they want to do that.

🤓 Check out my new quiz app ➜
💌 Support me on Donorbox ➜
📝 Transcripts and written news on Substack ➜
👉 Transcript with links to references on Patreon ➜
📩 Free weekly science newsletter ➜
👂 Audio only podcast ➜
🔗 Join this channel to get access to perks ➜

🖼️ On instagram ➜

#science #sciencenews #AI #tech
