ChatGPT and Gemini can be tricked into giving harmful answers through poetry, new study finds
New research shows that AI chatbots can be manipulated with poetic prompts, which elicited harmful responses at a 62% success rate. The vulnerability appeared across a range of models, though smaller models proved more resistant.