Lightning post:
AI Jailbreak with a single prompt. Same AI, same topic.

The AI blocks answering the ‘how?’ question, but for the ‘how NOT to?’ question, it helpfully lists everything.
I’d say this is a prompt engineering victory, but somewhere deep down, the result is sad and dangerous…