AI-generated code remains highly insecure
The article explains that while large language models have greatly improved at producing syntactically correct code, with over 90 percent of outputs compiling without errors, security has not kept pace: only about 55 percent of AI-generated code passes vulnerability scans, a rate that has shown no significant improvement over time. Research spanning more than 100 models across 80 coding tasks found recurring flaws such as SQL injection, cross-site scripting, cryptographic weaknesses, and log injection. Java was especially problematic, with average security pass rates as low as 28.5 percent. Hallucinated dependencies, where models invent nonexistent libraries, pose a further risk: attackers can publish malicious packages under those invented names and wait for developers to install them. The piece stresses that developers cannot rely on LLMs to produce secure code and must build thorough security validation, remediation tooling, and training into AI-assisted development.
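As an illustrative sketch (not drawn from the study's own examples), the Java snippet below shows the kind of SQL-injection pattern that such vulnerability scans flag, next to the parameterized form that avoids it; the class, table, and column names here are hypothetical.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class UserLookup {

    // Vulnerable pattern often seen in generated code: untrusted input is
    // concatenated directly into the SQL string, so an input such as
    // "' OR '1'='1" changes the meaning of the query.
    static ResultSet findUserUnsafe(Connection conn, String username) throws SQLException {
        Statement stmt = conn.createStatement();
        return stmt.executeQuery(
            "SELECT id, email FROM users WHERE username = '" + username + "'");
    }

    // Safer form: a parameterized query keeps the input as data rather than
    // SQL syntax, which is what scanners check for.
    static ResultSet findUserSafe(Connection conn, String username) throws SQLException {
        PreparedStatement stmt = conn.prepareStatement(
            "SELECT id, email FROM users WHERE username = ?");
        stmt.setString(1, username);
        return stmt.executeQuery();
    }
}
```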
https://www.darkreading.com/application-security/llms-ai-generated-code-wildly-insecure