LLMs Generate Predictable Passwords

In this blog post, Bruce Schneier explains that large language models (LLMs), including tools like ChatGPT, often produce weak, predictable passwords when asked to generate credentials. Because their outputs follow patterns learned from common text, the suggested passwords tend to resemble one another and lack the randomness and entropy needed to resist guessing and brute-force attacks. Schneier argues that relying on LLM-generated passwords weakens security, and that truly random password generators or password managers are safer choices for creating strong credentials.

https://www.schneier.com/blog/archives/2026/02/llms-generate-predictable-passwords.html
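To illustrate the difference, here is a minimal sketch of what a truly random generator looks like, using Python's `secrets` module (a cryptographically secure RNG, unlike an LLM's pattern-driven output). The length and alphabet choices are illustrative, not from the post:

```python
import math
import secrets
import string

# Illustrative alphabet: letters, digits, and punctuation (94 characters).
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def random_password(length: int = 16) -> str:
    """Draw each character uniformly at random with a CSPRNG."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

def entropy_bits(length: int, alphabet_size: int) -> float:
    """Entropy of a uniformly random password: length * log2(alphabet size)."""
    return length * math.log2(alphabet_size)

print(random_password())
print(f"{entropy_bits(16, len(ALPHABET)):.1f} bits")  # ~104.9 bits for 16 chars over 94 symbols
```

A uniformly random 16-character password over 94 symbols carries roughly 105 bits of entropy; an LLM's suggestions, drawn from the narrow distribution of password-like strings in its training data, carry far less, which is exactly the weakness the post describes.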
