Topics: 1337, adversarial-attacks, ai, ai-jailbreak, ai-liberation, artificial-intelligence, cybersecurity, hack, hacking, jailbreak, liberation, llm, offsec, prompts, red-teaming, roleplay, scenario
Updated 2026-02-08 17:24:32 +00:00
Topics: adversarial-machine-learning, agent, ai, assistant, chatgpt, gpt, gpt-3, gpt-4, hacking, jailbreak, leaks, llm, prompt-engineering, prompt-injection, prompt-security, prompts, system-prompt
Updated 2025-11-12 13:06:14 +00:00