CrowdStrike Reveals Hidden Vulnerabilities in AI-Generated Code
CrowdStrike researchers found that code produced by the DeepSeek-R1 model frequently contains security flaws. With neutral prompts, about one in five outputs were vulnerable. When prompts included politically or culturally sensitive terms, the rate of insecure code rose sharply, reaching more than a quarter of all samples. The issues included hard-coded secrets, unsafe input handling, weak or missing authentication, and even broken code presented as secure. The findings reinforce that AI-generated code requires the same security review and testing as human-written code.
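For illustration only, the short sketch below shows what two of the flaw classes named above can look like in practice: a hard-coded secret and unsafe input handling via string-built SQL. The code, key value, and schema are hypothetical examples, not samples from CrowdStrike's report.

    import sqlite3

    # Hypothetical illustration of flaw classes described in the article;
    # not code produced by DeepSeek-R1 or published by CrowdStrike.

    API_KEY = "sk-live-1234567890abcdef"  # hard-coded secret committed to source

    def get_user(db_path: str, username: str) -> list[tuple]:
        """Look up a user by name."""
        conn = sqlite3.connect(db_path)
        # Unsafe input handling: the username is interpolated directly into the
        # SQL string, so a crafted value can alter the query (SQL injection).
        rows = conn.execute(
            f"SELECT id, name FROM users WHERE name = '{username}'"
        ).fetchall()
        conn.close()
        return rows

Both issues have routine fixes: load the secret from the environment or a secrets manager, and pass the username as a bound parameter ("... WHERE name = ?", (username,)) instead of formatting it into the query, which is exactly the kind of defect a standard security review would catch.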