Posts

LLM Code Review vs Deterministic SAST Security Tools

The article compares large language model (LLM)‑based code review with traditional deterministic static application security testing (SAST) tools. It highlights that while SAST tools like Semgrep and Checkov are effective for enforcing explicit security policies, they often struggle with subjective or complex scenarios, leading to either false positives or missed vulnerabilities. In contrast, LLMs can evaluate code more flexibly, identifying potential issues that may not be easily captured by predefined rules. The author discusses the benefits and limitations of both approaches, suggesting that integrating LLMs into the security workflow can complement traditional tools by addressing scenarios that require nuanced understanding.  https://blog.fraim.dev/ai_eval_vs_rules/

Mobile apps expose sensitive data and create privacy risks

A new analysis of 50,000 mobile apps shows that more than 77 percent contain personal data, and many iOS apps omit or misdeclare third-party components, violating transparency rules. The report also finds that 35 percent of iOS apps fail to disclose observed data collection, and 10 percent of Android apps omit required data safety details. Among 183,000 apps reviewed, 18.3 percent use AI, and several thousand transmit information to AI endpoints—raising new risks of sensitive data leaks and intellectual property exposure. To address this, the firm NowSecure has introduced a privacy-focused platform combining static, dynamic, and manual analysis to locate and mitigate leaks in both first and third party code.  https://betanews.com/2025/09/29/mobile-apps-expose-sensitive-data-and-create-privacy-risks/

Building a Lasting Security Culture at Microsoft

Microsoft explains how it is embedding a “security-first” mindset across its entire workforce through its Secure Future Initiative. The company revamped its training programs—creating personalized, role-specific content tackling advanced threats like AI and deepfakes—and now requires all employees to complete regular, meaningful security education. Leadership at the top, including CEO and CPO, publicly prioritizes security, ties it into performance reviews and compensation, and holds managers accountable. Security is also being integrated into engineering practices via DevSecOps, shift-left methods, and embedding Deputy CISOs into product divisions. Microsoft emphasizes that culture—not just tools—is key, and that sustained engagement, feedback loops, and grassroots ambassador networks are essential to making security a living part of how people work. https://www.microsoft.com/en-us/security/blog/2025/10/13/building-a-lasting-security-culture-at-microsoft/

Autonomous AI Hacking and the Future of Cybersecurity

The article argues that AI agents are increasingly capable of performing full cyberattacks autonomously—conducting reconnaissance, exploiting systems, and maintaining persistence without human intervention. It cites recent demonstrations where AI tools found hundreds of vulnerabilities, chain exploits, and even automated extortion operations. The author warns this trend may upend existing defensive strategies, as attacks could outpace human response. On the defensive side, AI could also empower security teams: vulnerability research may become automated, patching integrated into pipelines, and networks may evolve toward self-healing. The core message: we are approaching a shift where AI changes the balance between attackers and defenders—not just by accelerating old techniques, but by transforming how cyber operations are done.  https://www.schneier.com/blog/archives/2025/10/autonomous-ai-hacking-and-the-future-of-cybersecurity.html

Announcing Google’s New AI Vulnerability Reward Program

Google is launching a new AI Vulnerability Reward Program (AI VRP) to incentivize the discovery of security flaws in its AI systems. The program expands Google’s existing vulnerability rewards framework to include models, APIs, and services such as Bard, Vertex AI, and other generative AI tools. Researchers who find and responsibly report issues like prompt injection, model theft, data leakage, unauthorized access, and adversarial manipulation may receive bounties. Google also outlines guidelines, scope boundaries, and eligibility rules for submissions, promoting transparency, safety, and collaborative hardening of its AI infrastructure.  https://bughunters.google.com/blog/6116887259840512/announcing-google-s-new-ai-vulnerability-reward-program

Memory Integrity Enforcement: a new era of always-on memory safety for Apple devices

Apple describes the rollout of Memory Integrity Enforcement (MIE) — a deeply integrated hardware-software system combining secure typed memory allocators and Enhanced Memory Tagging Extension (EMTE) in synchronous mode, along with tag confidentiality enforcement — to provide always-on defenses against memory corruption vulnerabilities across critical subsystems (including kernel and userland). The effort spans half a decade of design and collaboration between Apple silicon and OS teams, aiming to block classes of exploits like buffer overflows and use-after-free before they can be chained. In Apple’s evaluation, MIE significantly constrains attacker options and disrupts many contemporary exploit techniques, marking what they call “the most significant upgrade to memory safety in the history of consumer operating systems.”  https://security.apple.com/blog/memory-integrity-enforcement/

Keeping Secrets Out of Logs

The author argues there is no silver bullet to prevent secret leaks in logs, but proposes a number of “lead bullet” controls applied in depth. They first catalog common causes of secret leakage—direct logging, “kitchen sink” objects, configuration changes, embedded secrets, telemetry platforms, and user input. Then they present ten mitigation strategies: designing data architecture to centralize logging, transforming/redacting/tokenizing data, introducing domain primitives that distinguish secrets, using read-once objects, customizing log formatters, reinforcing via unit tests, employing sensitive data scanners, preprocessing log streams, applying taint analysis, and empowering people via training and incentives. The recommended overarching strategy is: lay a foundation (culture, definitions, pipeline), understand secret data flows, protect chokepoints, use defense-in-depth, and plan for detection, response, and recovery.  https://allan.reyes.sh/posts/keeping-secrets-out-of-logs/