It Takes Only 250 Documents to Poison Any AI Model

Posted on October 22, 2025 by Onsite Computing, Inc.

Researchers find it takes far less to manipulate a large language model's (LLM) behavior than anyone previously assumed.

Author: Jai Vijayan, Contributing Writer