AI in Danger: When the Question “How NOT?” Gets the Answer (and Developers Scratch Their Heads)

2025.10.08.
AI Security Blog

Lightning post:

AI Jailbreak with a single prompt. Same AI, same topic.

Kapcsolati űrlap - EN

Do you have a question about AI Security? Reach out to us here:

The AI blocks answering the ‘how?’ question, but for the ‘how NOT to?’ question, it helpfully lists everything.

I’d say this is a prompt engineering victory, but somewhere deep down, the result is sad and dangerous…