Posts

Using LLMs as Assistants, Not Replacements, in Secure Code Reviews

The post explains how tools like Claude Code can significantly accelerate secure code reviews by helping analysts understand unfamiliar codebases, map logic flows, and highlight potential security hotspots. However, it emphasizes that LLMs should be used as a support tool—not relied on to automatically find vulnerabilities—since naive use leads to many false positives. A structured approach with tailored prompts produces more useful insights, while keeping human validation central. It also highlights operational concerns like protecting sensitive code by running models in controlled environments. https://specterops.io/blog/2026/03/26/leveling-up-secure-code-reviews-with-claude-code

Automated API Authorization Testing for Modern Security Assessments

Hadrian is an open-source offensive security tool focused on detecting authorization vulnerabilities in APIs, such as broken object-level and function-level access controls. It uses role-based testing and customizable templates to systematically explore how different users can interact with REST, GraphQL, and gRPC endpoints. Designed for pentesters and security teams, it automates what is typically a manual process, integrates into broader testing workflows, and helps validate real exploitability rather than just flagging potential issues. https://github.com/praetorian-inc/hadrian
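The core idea behind role-based authorization testing can be sketched in a few lines. This is a hypothetical illustration, not Hadrian's actual implementation: replay the same object-level request under each role's credentials and flag endpoints where a non-owner role gets the same successful response the owner does (a classic BOLA indicator). The `fetch` callable and role/token names are assumptions for the example.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Role:
    name: str
    token: str

def check_object_access(fetch: Callable[[str, str], int],
                        url: str, roles: list[Role]) -> list[str]:
    """Return names of roles that unexpectedly receive HTTP 200.

    `fetch(url, token)` performs the request and returns the status code;
    the first role in `roles` is assumed to be the legitimate owner.
    """
    owner, *others = roles
    findings: list[str] = []
    if fetch(url, owner.token) != 200:
        return findings  # baseline request failed; nothing to compare against
    for role in others:
        if fetch(url, role.token) == 200:
            findings.append(role.name)  # object-level authorization not enforced
    return findings
```

A real tool layers templates, endpoint discovery, and GraphQL/gRPC support on top of this comparison loop, but the pass/fail logic is essentially a differential test across credentials.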

Practical Guide to Securing npm Dependencies and Supply Chains

This repository is a curated guide of security best practices for working with npm, focused on reducing risks from supply chain attacks and vulnerable dependencies. It covers techniques like disabling risky install scripts, enforcing deterministic installs, auditing packages before use, delaying adoption of new releases, and avoiding blind upgrades. It also includes guidance for developers and maintainers, such as using 2FA, minimizing dependencies, and adopting secure publishing methods, aiming to make JavaScript development more resilient to increasingly common package ecosystem attacks. https://github.com/lirantal/npm-security-best-practices
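Two of the practices above can be enforced with standard npm configuration keys in a project-level `.npmrc` (a minimal sketch covering only script disabling and version pinning; see the guide for the full set):

```ini
# .npmrc — project-level hardening
ignore-scripts=true   ; block install/postinstall lifecycle scripts from dependencies
save-exact=true       ; pin exact versions instead of ^/~ ranges on install
```

Deterministic installs then come from committing the lockfile and using `npm ci` (which installs strictly from `package-lock.json`) instead of `npm install` in CI.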

AI-Powered Tool to Detect Sensitive Data in Public URLs

The Salesforce URL Content Auditor is an open-source security tool that scans publicly accessible URLs to identify exposed sensitive information. It downloads and analyzes content such as images, PDFs, and videos using AI to detect potential data leaks, privacy risks, and compliance violations. Designed for proactive security, it helps organizations audit external-facing content, support incident response, and integrate continuous monitoring into workflows to prevent unintended data exposure. https://github.com/salesforce/url-content-auditor
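The auditor uses AI models; a much simpler, non-AI analogue of the same idea is pattern-matching fetched text for strings that commonly indicate leaked secrets or PII. This sketch is illustrative only (the pattern names and regexes are assumptions, not the tool's detectors), but it shows the shape of the audit step:

```python
import re

# Illustrative detectors only: real tools use larger rule sets and/or models.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "ssn_like": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def audit_text(text: str) -> dict[str, list[str]]:
    """Return every pattern category found in `text` with its matches."""
    hits: dict[str, list[str]] = {}
    for label, pattern in PATTERNS.items():
        matches = pattern.findall(text)
        if matches:
            hits[label] = matches
    return hits
```

The AI-based approach in the tool extends this beyond text to images, PDFs, and video, where regexes cannot reach.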

Scaling Vulnerability Management with AI: What Actually Works

The article describes how Synthesia built an AI-driven vulnerability management system to handle overwhelming volumes of security findings from SAST and SCA tools. The key approach is aggressive automation: filtering noise (stale code, low-risk issues, false positives) so only meaningful findings become tickets. AI agents then validate vulnerabilities using consensus-based analysis and automatically generate fixes as pull requests, shifting developers from writing fixes to reviewing them. This system drastically reduced backlog and manual effort—only a small fraction of issues require human review—allowing security teams to focus on high-impact risks while accelerating remediation. https://www.synthesia.io/post/scaling-vulnerability-management-with-ai-what-actually-worked
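The filtering stage described above can be sketched as a simple predicate over findings. The field names and thresholds here are illustrative assumptions, not Synthesia's actual rules; the point is that a deterministic filter runs before any AI agent or ticket is involved:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    rule_id: str
    severity: str            # "low" | "medium" | "high" | "critical"
    file_age_days: int       # days since the flagged file last changed
    confirmed_reachable: bool

SEVERITY_RANK = {"low": 0, "medium": 1, "high": 2, "critical": 3}

def worth_a_ticket(f: Finding, min_severity: str = "high",
                   stale_after_days: int = 365) -> bool:
    """Keep a finding only if it is severe enough, in live code, and reachable."""
    if SEVERITY_RANK[f.severity] < SEVERITY_RANK[min_severity]:
        return False              # below the severity floor: suppressed as noise
    if f.file_age_days > stale_after_days:
        return False              # stale code: deprioritized rather than ticketed
    return f.confirmed_reachable
```

Everything this filter rejects never consumes agent or human time, which is where most of the backlog reduction comes from.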

VulnVibes: AI Agent for Context-Aware Vulnerability Triage

The article introduces VulnVibes, an experimental AI security agent designed to analyze GitHub pull requests with full architectural context rather than isolated code scanning. Unlike traditional SAST tools, it reasons across multiple repositories, infrastructure configs, and service interactions to determine whether a vulnerability is actually exploitable. It works in two stages: fast threat modeling to filter relevant changes, followed by deep investigation that traces attack paths across services, configs, and environments. The system produces structured verdicts with reasoning, confidence, and risk levels. The key insight is that real security issues often emerge from system-level interactions, not single files, and effective AI tooling must replicate how human engineers analyze entire systems, not just code snippets. https://www.anshuman.ai/posts/vulnvibes-intro
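The two-stage flow and structured-verdict output can be sketched as a small pipeline. The `Verdict` fields follow the article's description ("reasoning, confidence, and risk levels"), but the exact schema and function names are assumptions, not VulnVibes' actual interface; both models are injected as callables so the control flow is visible:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Verdict:
    exploitable: bool
    risk: str          # e.g. "low" | "medium" | "high"
    confidence: float  # 0.0 - 1.0
    reasoning: str

def triage(pr_diff: str,
           quick_model: Callable[[str], bool],
           deep_model: Callable[[str], Verdict]) -> Optional[Verdict]:
    """Stage 1: fast threat modeling filters out security-irrelevant changes.
    Stage 2: deep investigation traces attack paths across services/configs."""
    if not quick_model(pr_diff):
        return None                 # change deemed not security-relevant
    return deep_model(pr_diff)      # expensive cross-repo investigation
```

The cheap first stage is what makes the expensive, context-heavy second stage affordable at PR volume.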

Why Mutational Grammar Fuzzing Can Mislead Bug Discovery

The article explains mutational grammar fuzzing, a technique that generates structured test inputs by mutating data while preserving grammar rules, making it effective for testing complex parsers and languages. However, it argues the approach has important flaws. Coverage-guided fuzzing can prioritize inputs that increase code coverage without actually finding more bugs, leading to misleading results. Grammar constraints can also limit exploration, preventing the fuzzer from reaching unexpected or invalid states where vulnerabilities often exist. The author proposes simple mitigation strategies, emphasizing that fuzzing effectiveness depends less on structure-awareness alone and more on balancing coverage, mutation diversity, and exploration beyond strict grammar boundaries. https://projectzero.google/2026/03/mutational-grammar-fuzzing.html
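A toy illustration of the tension the article describes (this is not Project Zero's fuzzer; the grammar and rates are invented for the example): expand a tiny grammar to produce structurally valid inputs, but occasionally apply a raw byte-level mutation that steps outside the grammar — the "exploration beyond strict grammar boundaries" the author argues grammar-constrained fuzzers lose:

```python
import random

GRAMMAR = {
    "<expr>": [["<num>", "+", "<expr>"], ["<num>"]],
    "<num>":  [["0"], ["1"], ["42"]],
}

def generate(symbol: str = "<expr>", depth: int = 0) -> str:
    """Recursively expand `symbol`; force the shortest production past depth 5
    so expansion always terminates."""
    if symbol not in GRAMMAR:
        return symbol  # terminal
    options = GRAMMAR[symbol] if depth < 5 else [GRAMMAR[symbol][-1]]
    return "".join(generate(s, depth + 1) for s in random.choice(options))

def mutate(inp: str, escape_rate: float = 0.1) -> str:
    """Mostly grammar-valid regeneration; rarely, a raw character flip that
    may produce an input no grammar rule can derive."""
    if inp and random.random() < escape_rate:
        i = random.randrange(len(inp))
        return inp[:i] + chr(random.randrange(32, 127)) + inp[i + 1:]
    return generate()
```

With `escape_rate=0` this fuzzer can only ever produce valid arithmetic expressions and will never exercise the parser's error paths, which is exactly the blind spot the article warns about.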