Posts

Showing posts from March, 2026

How We Hacked McKinsey's AI Platform

This blog post from CodeWall describes how their autonomous offensive security agent compromised McKinsey & Company's internal AI platform, Lilli, within two hours starting with only the domain name. The agent mapped the attack surface by discovering publicly exposed API documentation with over 200 endpoints, 22 of which lacked authentication. One unprotected endpoint wrote user search queries to the database with safely parameterized values but concatenated JSON keys directly into SQL, creating a SQL injection vulnerability that the agent identified through database error messages. Through fifteen blind iterations, the agent enumerated the production database and gained access to 46.5 million chat messages, 728,000 files, 57,000 user accounts, 384,000 AI assistants, 94,000 workspaces, system prompts and AI model configurations, 3.68 million RAG document chunks representing decades of proprietary McKinsey research, and 1.1 million files flowing through external AI APIs. The age...

hackerbot-claw: An AI-Powered Bot Actively Exploiting GitHub Actions

This blog post from StepSecurity details a week-long automated attack campaign in February and March 2026 where an autonomous AI bot called hackerbot-claw systematically exploited GitHub Actions workflows across major open source repositories, including those belonging to Microsoft, DataDog, CNCF, and popular projects like Trivy and awesome-go. The bot used five different exploitation techniques including poisoned Go scripts via pull_request_target workflows, direct script injection, branch name injection, filename injection with base64 encoded commands, and AI prompt injection targeting Claude Code reviewers through poisoned configuration files. In the most severe incident, the attacker stole a personal access token from the aquasecurity/trivy repository and achieved full repository takeover, making the repository private, deleting years of releases, and pushing a malicious artifact to the Open VSX marketplace. The campaign successfully compromised at least five of seven targeted repo...

A Practical Guide for Secure MCP Server Development

This OWASP resource provides actionable guidance for securing Model Context Protocol servers, which serve as the critical connection point between AI assistants and external tools, APIs, and data sources. It highlights that MCP servers present unique security challenges because they operate with delegated user permissions, use dynamic tool-based architectures, and support chained tool calls, all of which increase the potential impact of a single vulnerability. The guide outlines best practices covering secure architecture design, strong authentication and authorization, strict input and output validation, session isolation, and hardened deployment. It is intended for software architects, platform engineers, and development teams to help them reduce risk while safely enabling tool-integrated agentic AI capabilities.  https://genai.owasp.org/resource/a-practical-guide-for-secure-mcp-server-development/

Introducing DeepViolet

This blog post announces DeepViolet, an open-source TLS and SSL analysis library that has been integrated into the ZAP HTTPS Info add-on to deliver risk assessments alongside connection details. DeepViolet provides a modular API that performs TLS handshake analysis, certificate chain validation, revocation checks, security header inspection, and DNS lookups, returning structured results with a numeric risk score and letter grade. The post walks through a sample scan showing how findings are categorized into protocols and connections, revocation and transparency, security headers, DNS security, certificate details, and cipher suites. The library is structured as a core API available on Maven Central, a standalone Java Swing desktop application for ad-hoc investigations, and a command-line interface for scripting. Planned features include scan persistence, customizable risk scoring with YAML-based rules, user-editable cipher suite evaluations, certificate transparency analysis, and AI-au...

Guided ZAP Scans: Faster CI/CD Feedback Using SAST

This blog post from the Seqra Team introduces an approach that uses static analysis findings to guide ZAP active scans toward the most relevant endpoints, enabling a faster scanning mode suited for CI/CD pipelines. The integration uses dataflow-aware SAST tools like OpenTaint to produce DAST-friendly output containing endpoint paths, HTTP methods, and CWE classifications in SARIF format. A script generates a targeted ZAP configuration with isolated contexts for each CWE category, running CWE-specific scan policies only against endpoints where vulnerabilities were detected. Results are then filtered to retain only findings validated by ZAP. Testing on the OWASP Benchmark showed that this guided approach achieved the same detection accuracy as ZAP Insane strength while sending 87 percent fewer requests and completing scans eight times faster. A GitHub Action automates the workflow with full and differential scanning modes for pull requests, uploading validated vulnerabilities directly to...

ClawGuard: AI Agent Security Scanner

ClawGuard is an open-source security scanner that acts as a firewall for AI agents, detecting prompt injection, jailbreaks, and data exfiltration in real time. It features 216 detection patterns across 13 categories, supports 15 languages, and achieves sub-10 millisecond scan times with an F1 score of 99 percent using pure Python with no external dependencies or API calls. The scanner includes a 10-stage preprocessing pipeline to catch evasion techniques like leetspeak, zero-width characters, homoglyphs, and base64 encoding, along with confidence scoring for each finding. ClawGuard offers a dedicated MCP security scanner for identifying hidden prompt injections in MCP server tool descriptions, an evaluation framework with 262 test cases, CLI and SARIF output for CI/CD integration, and compliance support for EU AI Act articles. The project has been used to responsibly disclose vulnerabilities in over 30 popular MCP servers and AI tools representing more than 280,000 combined GitHub star...

Pipelock

Pipelock is an open-source agent firewall that provides network scanning, process containment, and tool policy enforcement for AI agents through a single binary. It acts as a runtime firewall that sits inline between an agent and the internet, using capability separation where the agent process is network-restricted while Pipelock inspects all traffic through an 11-layer scanner pipeline covering secret exfiltration, DLP scanning with 46 built-in patterns, prompt injection detection, SSRF protection, and bidirectional MCP scanning with tool poisoning detection. It operates in three proxy modes—fetch proxy, forward proxy, and WebSocket proxy—and supports three operational modes: strict allowlist-only for high security, balanced for general use, and audit for monitoring. Additional features include a process sandbox using Landlock and seccomp on Linux, MCP tool policy enforcement with pre-execution rules, tool call chain detection, kill switch mechanisms, response scanning with a six-pas...

Microsoft Agent Governance Toolkit

The Microsoft Agent Governance Toolkit is a runtime governance infrastructure that provides deterministic policy enforcement, zero-trust identity management, execution sandboxing, and reliability engineering for autonomous AI agents. It addresses all 10 OWASP Agentic Top 10 risks through a modular architecture with Python, TypeScript, and .NET SDKs. The toolkit includes a policy engine that evaluates agent actions with sub-millisecond latency, cryptographic identity credentials with trust scoring, a four-tier privilege ring system for execution isolation, and site reliability engineering features like SLOs, error budgets, and circuit breakers. It integrates with over 12 agent frameworks including LangChain, CrewAI, AutoGen, and the Microsoft Agent Framework, supports OPA and Cedar policies, and provides compliance alignment with regulations like the EU AI Act and Colorado AI Act. The project is under an MIT license with Microsoft-signed public preview releases. https://github.com/micro...

OWASP Agentic Skills Top 10

The OWASP Agentic Skills Top 10 documents the most critical security risks in AI agent skills across platforms like OpenClaw, Claude Code, Cursor, and VS Code. It addresses the security of the behavioral layer where skills define how agents orchestrate multi-step workflows, filling a gap between model-level risks and protocol-level risks. The project is based on extensive real-world evidence from 2026 incidents, including the ClawHavoc campaign with over 1,180 malicious skills, widespread credential exposure, and critical vulnerabilities in major platforms. The top risks include malicious skills, supply chain compromise, over-privileged skills, insecure metadata, unsafe deserialization, weak isolation, update drift, poor scanning, lack of governance, and cross-platform reuse. The project provides detailed risk descriptions, attack scenarios, preventive mitigations, mappings to existing OWASP projects, a proposed universal skill format, and practical guidance for security teams, skill d...

OWASP AI Security Landscape

The OWASP AI Security Landscape is an interactive visualization tool that maps and organizes OWASP’s artificial intelligence and machine learning security resources. It presents a structured overview of key guides, standards, cheat sheets, tools, projects, and initiatives related to AI security, allowing users to filter by type and explore connections through a visual graph interface. The landscape covers major frameworks such as the OWASP AI Security and Privacy Guide, the Generative AI Top 10, the AI Security Verification Standard (AISVS), and the AI Exchange, along with resources focused on threat modeling, testing, governance, and specific domains like agentic AI, MCP security, and adversarial robustness. It serves as a centralized reference for professionals seeking to navigate the growing ecosystem of OWASP AI security knowledge.  https://ricokomenda.github.io/owasp-ai-security-visualizer/

ClawGuard: AI Agent Security Scanner

ClawGuard is an open-source security scanner designed to act as a firewall for AI agents, detecting threats like prompt injection, jailbreaks, and data exfiltration in real time. It features 216 detection patterns across 13 categories, supports 15 languages, and achieves sub-10 millisecond scan times with an F1 score of 99 percent. The tool includes a first-of-its-kind MCP security scanner for identifying hidden injections in MCP server tool descriptions, along with a 10-stage preprocessing pipeline to resist common evasion techniques like leetspeak, zero-width characters, and base64 encoding. It provides confidence scoring, a benchmarking framework, CLI and SARIF output for CI/CD integration, and helps with EU AI Act compliance. The project has been used to responsibly disclose vulnerabilities in over 30 popular MCP servers and AI tools, and is released under the MIT License.  https://github.com/joergmichno/clawguard

OWASP Vulnerable Web Applications Directory

The OWASP Vulnerable Web Applications Directory is a production project that maintains a comprehensive registry of known vulnerable web and mobile applications intended for legal security testing and training. It features a searchable and filterable directory currently listing 191 applications, ranging from widely used tools like Damn Vulnerable Web Application and OWASP Juice Shop to more specialized labs covering cloud, Kubernetes, API, and AI security. Users can browse by collection, technology, category, and other criteria, with each entry noting GitHub stars and contribution recency. The directory supports security professionals in finding realistic, intentionally flawed environments for practice, tool evaluation, and education.  https://vwad.owasp.org/

Claude Code Hardening Cheatsheet

This repository provides a practical cheatsheet and configuration samples for securely running Claude Code, focusing on sandbox settings, permission policies, and custom hooks. It is designed for progressive adoption, offering safe defaults for beginners and advanced fine-tuning options for experienced users. The included files feature a detailed cheatsheet in both Japanese and English, along with a commented settings.json template with allow, ask, and deny rule examples. The guidance covers the sandbox, least privilege principles, and defense in depth, while noting that platform-specific rules are primarily tested on macOS. The project references OWASP GenAI security resources and is available under a CC BY-SA 4.0 license.  https://github.com/okdt/claude-code-hardening-cheatsheet/blob/main/README.en.md

DockSec: AI-Powered Docker Security Scanner

DockSec is an OWASP Incubator Project that combines traditional Docker security scanners like Trivy, Hadolint, and Docker Scout with artificial intelligence to provide context-aware security analysis for containers. It moves beyond simply listing vulnerabilities by using AI to prioritize critical issues, explain risks in plain language, and suggest specific fixes tailored to a user's Dockerfile. The tool offers a simple installation via pip, supports multiple large language model providers including OpenAI, Anthropic, Google Gemini, and local models through Ollama, and can function entirely offline using a scan-only mode. Users can scan Dockerfiles and images, receive a security score, and generate reports in formats like PDF, HTML, or JSON, making it suitable for both development workflows and CI/CD pipelines.  https://owasp.org/www-project-docksec/

Google Patches High‑Severity Gemini AI Panel Hijack Bug in Chrome

A high‑severity security flaw in Google Chrome’s integration of the Gemini AI side panel (tracked as CVE‑2026‑0628) could have allowed malicious browser extensions with only basic permissions to hijack the privileged Gemini Live interface to inject code, escalate privileges, violate user privacy, and access sensitive resources like cameras, microphones, local files, and screenshots. The issue stemmed from improper boundary enforcement in the extension API as applied to the AI panel. Researchers from Palo Alto Networks’ Unit 42 responsibly disclosed the vulnerability and Google released a patch in early January 2026. The incident highlights new attack surfaces introduced by deeply embedded AI features in browsers and the need for stronger in‑browser policy enforcement and real‑time monitoring.  https://www.darkreading.com/endpoint-security/bug-google-gemini-ai-panel-hijacking

ENISA Draft “Secure by Design and Default Playbook” Offers Practical Guidance for Embedding Security Throughout a Product’s Lifecycle

ENISA’s March 2026 draft playbook provides a detailed, practical guide aimed particularly at SMEs for applying “security by design” and “security by default” principles across a product’s full lifecycle — from concept through development, deployment, maintenance, and decommissioning. It explains architectural and operational security foundations, lists concrete playbook actions (like threat modeling, least privilege, attack‑surface reduction, secure coding, monitoring, and supply chain controls), and even suggests machine‑readable security attestation. The draft connects these principles with obligations in regulations such as the EU Cyber Resilience Act, helping teams operationalize security rather than treat it as an afterthought.  https://www.enisa.europa.eu/sites/default/files/2026-03/ENISA_Secure_By_Design_and_Default_Playbook_v0.4_draft_for_consultation.pdf

SBOMs Are Shifting From Best Practice to Legal Obligation, Says InfoQ

In a report from InfoQ covering Viktor Petersson’s talk at QCon London 2026, he warned that Software Bills of Materials (SBOMs) are no longer just a security best practice but are rapidly becoming mandatory due to emerging regulations like the EU Cyber Resilience Act, U.S. Executive Order 14028, FDA device rules, and PCI‑DSS requirements. Petersson explained practical details on generating high‑quality SBOMs, differences between dominant formats like SPDX and CycloneDX, the importance of signing and lifecycle‑managing SBOM artifacts in CI/CD pipelines, and common pitfalls such as merging disparate SBOMs or skipping signing. He stressed teams must treat SBOMs as managed engineering artifacts rather than ad‑hoc documents if they want to meet upcoming compliance windows and improve software supply‑chain transparency. https://www.infoq.com/news/2026/03/sbom-viktor-petersson/

Sonatype Says Guardrails Are Key to Safer AI‑Generated Code

The VMblog interview with Sonatype’s Paul Horton explains that while AI coding assistants boost speed, they frequently recommend nonexistent, insecure, or malicious open‑source packages, creating “security debt” in modern development workflows. Sonatype’s approach uses real‑time open source intelligence and intelligent guardrails to steer AI tools toward safe, high‑quality dependencies and catch threats faster than traditional sources like the NVD, helping teams balance velocity with robust supply‑chain security.  https://vmblog.com/video/sonatype-keeping-ai-generated-code-out-of-the-gutter-with-intelligent-open-source-guardrails/

Autogenerated IETF CRIT Spec Draft Defines Templates for Cloud Resource Vulnerability Identification

The Vulnetix/ietf-crit-spec repository hosts a draft of the Cloud Resource Identifier Templates (CRIT) specification — an Internet‑Draft submitted through the IETF that defines a machine‑readable template format for describing how cloud infrastructure resources (like AWS ARNs or GCP resource names) are affected by known vulnerabilities. Instead of relying on static package identifiers, CRIT templates include parameterized slots to capture provider‑specific values, remediation semantics, and detection metadata so tools can determine exposure and drive automated remediation workflows for cloud‑native assets.  https://github.com/Vulnetix/ietf-crit-spec

CISOs Clash Over Whether Humans Still Belong in AI‑Driven Security

At the RSAC 2026 Conference, security leaders debated the role of human oversight in AI‑powered security, questioning whether keeping a “human in the loop” slows down defenses or remains essential for safe deployment of AI systems. Panelists from major companies discussed balancing speed and automation with safety and control as organizations rapidly adopt AI tools. The conversation reflects a broader industry tension between maximizing AI’s efficiency and retaining human judgment to manage risks and uncertainties in real‑world environments.  https://www.darkreading.com/application-security/cisos-debate-human-role-ai-powered-security

How to Create a Custom Security Configuration in GitHub to Standardize Protections

The GitHub documentation explains how organization owners or admins can build a custom security configuration when recommended defaults don’t meet their needs. A custom configuration lets teams define exactly which security features — such as secret scanning, code scanning, dependency scanning, and push protection — are enabled, disabled, or inherited for repositories across an organization. It also allows naming the configuration, choosing repository visibility or enforcement policies, and then saving it so it can be applied consistently to one or more repos to ensure tailored security coverage.  https://docs.github.com/en/code-security/how-tos/secure-at-scale/configure-organization-security/establish-complete-coverage/creating-a-custom-security-configuration

Autonoma CLI Automatically Detects and Safely Fixes Hardcoded Secrets in Python

The GitHub project VihaanInnovations/autonoma is an open‑source Python command‑line tool focused on code security by detecting hardcoded secrets and applying safe, deterministic fixes. It uses AST‑based analysis to find credentials like passwords and API keys in code, and when it can guarantee a safe transformation, it replaces them with environment‑variable lookups; if not, it refuses to make changes to avoid breaking logic. The tool runs locally without telemetry, supports CI integration and history scanning, and deliberately avoids unsafe or ambiguous modifications to ensure developers get reliable remediation rather than just alerts. https://github.com/VihaanInnovations/autonoma

TeamPCP Backdoors Telnyx PyPI Package Days After LiteLLM Breach

A threat actor known as TeamPCP compromised the Python Package Index (PyPI) “telnyx” SDK less than three days after a previous compromised package incident, publishing versions 4.87.1 and 4.87.2 with malicious backdoors that weren’t in the official repository. The first attempt failed due to a typo, but the fixed release executed payloads that drop persistent malware on Windows or a credential stealer on Linux/macOS. The malware harvests SSH keys, cloud tokens, config files, Kubernetes tokens, and more, exfiltrating it to a command‑and‑control server. Analysis shows reuse of the same cryptographic key and techniques from the earlier LiteLLM compromise, suggesting a linked campaign that targets software supply chain trust in widely used open source packages and can lead to full environment compromise unless credentials are rotated and systems checked. https://www.endorlabs.com/learn/teampcp-strikes-again-telnyx-compromised-three-days-after-litellm

AI Is Transforming Application Security and Modern SOC Operations

The BankInfoSecurity article explains how large language models and AI are reshaping both application security and security operations centers by automating tasks like code generation and large‑scale log analysis, reducing reliance on traditional manual tools and boosting efficiency. Experts say AI accelerates detection and pattern recognition while highlighting the value of adding contextual intelligence and governance around model outputs. Startups that supply context and asset inventories can fill gaps that basic models alone cannot. https://www.bankinfosecurity.com/how-ai-reshaping-application-security-soc-a-31192

File Uploads Remain a Persistent Security Blind Spot — And How Devs Are Addressing It

Many web applications still treat file uploads as low‑risk convenience features, but attackers regularly exploit them to deliver malware, execute code, and bypass defenses. The article explains why uploads are a blind spot — developers often skip content inspection, rely on weak filename checks, and trust client‑side validation — and outlines stronger practices. Effective defenses include enforcing strict type and size checks server‑side, using content‑based validation (e.g., magic‑byte inspection), sandboxing processing, separating upload storage from application logic, and employing security‑oriented libraries and services. Together these measures reduce the attack surface and harden upload handling against real threats.  https://programminginsider.com/why-file-uploads-are-still-a-security-blind-spot-and-how-developers-are-fixing-it/

Semgrep Pushes AI‑Driven AppSec Workflows to Align Security with Modern Development

Semgrep is spotlighting an AI‑driven application security workflow aimed at bridging the gap between fast software development and effective security controls. The company’s recent messaging highlights combining AI and traditional static analysis to reduce noise, embed security checks directly into developer workflows like pull requests, and triage vulnerabilities contextually, with an emphasis on scaling AppSec for environments where AI‑generated code is increasingly common. This strategy underscores a product focus on efficiency and developer‑centric security tooling that could strengthen Semgrep’s position in the evolving DevSecOps market. https://www.tipranks.com/news/private-companies/semgrep-highlights-ai-driven-application-security-workflow

Security Debt Escalates Into a Major CISO Governance Challenge

A new industry report shows that “security debt” — known vulnerabilities unresolved for more than a year — is widespread and growing, with 82 % of organizations carrying long‑lived flaws and a rising share of critical issues, according to data from Veracode’s 2026 State of Software Security. Remediation timelines remain long, with median fix times around 243 days, and third‑party dependency debt persists. Experts say CISOs need to treat security debt like financial debt with board‑level KPIs, stronger governance, automated fixes, and prioritization tied to business risk to reduce accumulated exposure.  https://www.helpnetsecurity.com/2026/03/02/ciso-security-debt-report/

Claude Code Security Sends Shockwaves Through Cybersecurity

The article discusses rising concerns in cybersecurity triggered by “Claude code security” risks—security issues and vulnerabilities emerging from generative AI systems like Claude that produce code. It highlights how AI‑generated code can introduce bugs, insecure patterns, and dependency problems at scale, pressuring traditional security practices. Organizations are urged to integrate AI‑aware tooling, guardrails, and testing into development lifecycles, and to rethink training, auditing, and governance to manage AI‑specific risk. The piece frames AI code generation as a double‑edged sword that accelerates productivity while expanding the attack surface.  https://securityboulevard.com/2026/03/claude-code-security-the-ai-shockwave-hitting-cybersecurity/

IBM Launches Bob 1.0 AI Coding Co‑Pilot for IBM i Developers

IBM is rolling out Bob 1.0.0, a long‑awaited AI‑powered coding co‑pilot aimed at its IBM i installed base, with general availability planned for March 24, 2026. Bob combines IBM’s own Granite models with external LLMs like Anthropic’s Claude, Meta’s Llama, and Mistral to assist developers working in RPG, CL, SQL, COBOL, Java, and Python. Delivered as a Visual Studio Code plug‑in, it can explain, refactor, generate, test, and modernize code and understands IBM i specifics, including performance and security considerations. The product is offered in multiple subscription tiers with usage‑based “bobcoins” for resource consumption. https://www.itjungle.com/2026/03/02/ibm-gets-bob-1-0-off-the-ground/

Google Outlines Shrinking Quantum Threat to RSA and Urges Post‑Quantum Transition

In a May 2025 blog post, Google researchers explain that ongoing improvements in quantum algorithms and error correction have sharply lowered estimates of the resources needed to factor RSA‑2048 keys with a quantum computer—potentially to around one million noisy qubits running for a week, down from prior estimates in the tens of millions. They emphasize that while practical quantum computers are still far from this scale, tracking these costs helps plan migration to post‑quantum cryptography (PQC) standards already published by NIST. The post stresses the importance of accelerating PQC adoption before large‑scale quantum threats materialize and discusses how “store now, decrypt later” attacks heighten urgency, particularly for asymmetric encryption and long‑lived signature keys, and notes work on PQC signatures in Cloud KMS.  https://security.googleblog.com/2025/05/tracking-cost-of-quantum-factori.html

HTTPS Certificate Industry to Sunset Weak Domain Validation Methods

Google’s Chrome Root Program and the CA/Browser Forum are phasing out 11 legacy domain control validation methods for HTTPS certificates that rely on weak signals like email, phone, SMS, fax, or postal mail in favor of stronger, automated cryptographically verifiable checks. The change, driven by Ballots SC‑080, SC‑090, and SC‑091, is designed to close loopholes attackers could exploit to fraudulently obtain certificates. The deprecation will be phased in with full security benefits realized by March 2028, pushing the web toward more secure validation methods and improved trust in HTTPS connections.  https://security.googleblog.com/2025/12/https-certificate-industry-phasing-out.html

FlowStrider Automates Continuous Data‑Flow Threat Modeling

FlowStrider is an open‑source architectural threat modeling tool developed to automate and streamline the identification, mitigation, documentation, and management of security threats based on data flow representations of software systems. It supports continuous threat modeling by integrating into CI/CD pipelines, is fully scriptable and extensible, and works with practice‑oriented workflows to lower the effort required for threat analysis. The tool is language‑agnostic, uses structured data‑flow graphs to elicit threats, and produces structured reports to aid security assessment in development workflows.  https://gitlab.com/dlr-dw/automated-threat-modeling/flowstrider

Google’s 2029 Quantum Deadline Is a Wake‑Up Call

Google has publicly moved up its internal timeline to complete the transition to post‑quantum cryptography to 2029, ahead of deadlines set by U.S. standards bodies like NIST and the NSA, and wants enterprise security teams to treat quantum‑safe migration as a near‑term priority. The shift is motivated by concerns about “harvest now, decrypt later” risks and the vulnerability of digital signatures once quantum computers can break current encryption, signaling to IT leaders that they should accelerate planning and adoption of quantum‑resistant algorithms across systems. https://www.govinfosecurity.com/googles-2029-quantum-deadline-wake-up-call-a-31247

The Hidden Cost of Cybersecurity Specialization Erodes Holistic Risk Insight

The article highlights a subtle but significant problem in modern cybersecurity: intense specialization can weaken teams’ foundational understanding of risk and systems. When practitioners focus narrowly on specific domains like cloud security or IAM without broader context, organizations lose end‑to‑end visibility into threats and defenses. This gap makes it harder to prioritize risks, choose appropriate tools, and communicate security concerns in business terms. Ultimately, deep technical expertise alone can fail without a shared, holistic view of how risks and systems interconnect. https://thehackernews.com/2026/03/the-hidden-cost-of-cybersecurity.html

Trivy Supply Chain Breach Hijacks GitHub Actions to Steal CI/CD Secrets

Trivy, a widely used open‑source vulnerability scanner maintained by Aqua Security, was compromised again in March 2026 when attackers hijacked 75 version tags of its associated GitHub Actions workflows to distribute malicious code that steals sensitive CI/CD secrets from developer environments. The breach involved force‑pushing malicious commits into trusted action tags, exposing SSH keys, cloud credentials, and other secrets to attackers. This second supply‑chain incident underscores risks in CI/CD tooling and the need for stricter workflow security practices. https://thehackernews.com/2026/03/trivy-security-scanner-github-actions.html

When Stronger Crypto Breaks Auth: The FreshRSS Bcrypt Truncation Bypass

The article explains an authentication bypass in FreshRSS caused by an unintended interaction between a longer SHA-256 nonce and bcrypt’s 72-byte input limit. Because bcrypt truncates input beyond 72 bytes, the system ended up hashing only non–password-dependent data, allowing any password to succeed. The issue came from a well-intentioned crypto “upgrade” that increased nonce length, breaking assumptions in a custom challenge-response flow. The fix was simply reordering inputs so the password-dependent hash is included. The case highlights how combining secure primitives incorrectly can introduce critical vulnerabilities. https://pentesterlab.com/blog/freshrss-bcrypt-truncation-auth-bypass

AI Remediation That Actually Helps Developers Fix the Right Problems

The article argues that most vulnerability remediation tools fail because they lack context, forcing developers to spend hours figuring out fixes themselves. Maze proposes AI agents that investigate each vulnerability end-to-end: tracing its origin, evaluating exploitability, and recommending verified fixes tailored to the specific environment. Instead of generic upgrades, the system provides multiple options with trade-offs and confirms they work before suggesting them. It also offers mitigations when patching isn’t feasible, shifting remediation from guesswork to evidence-based, developer-friendly workflows.   https://mazehq.com/blog/ai-remediation-developers-actually-want-to-use

Augustus Brings Automated Adversarial Testing to LLM Security

Augustus is an open-source tool by Praetorian designed to test the security and robustness of large language models through automated adversarial probing. Built in Go, it provides a modular framework where “probes” simulate attacks such as prompt injection, data extraction, encoding bypasses, and agent manipulation. The system uses standardized interfaces for extensibility and organizes attacks via registries and detectors, enabling scalable testing workflows. Overall, it helps security teams systematically evaluate how well LLMs resist real-world attack techniques rather than relying solely on alignment or safety training.  https://github.com/praetorian-inc/augustus

AIUC-1 Introduces a SOC 2-Like Standard for Trustworthy AI Agents

AIUC-1 presents itself as the first dedicated standard and certification for AI agents, aimed at enabling enterprise adoption through measurable trust. It defines requirements across key risk areas such as security, safety, reliability, data privacy, accountability, and societal impact. The framework combines technical testing, operational controls, and legal practices, with ongoing quarterly assessments and annual recertification. Built with input from industry and research leaders, it operationalizes broader frameworks into concrete controls, giving organizations a way to evaluate and prove that their AI systems behave safely and reliably in real-world conditions. https://www.aiuc-1.com/

CPE Guesser 2.0 Expands Beyond NVD with Smarter Matching and Multi-Source Data

CPE Guesser 2.0 introduces major improvements to how Common Platform Enumeration data is imported, ranked, and used. It removes reliance on the NVD as the sole source by supporting additional datasets like Vulnerability-Lookup dumps, increasing flexibility and autonomy. The release enhances search accuracy with better ranking logic, improves handling of CVE v5 data, and speeds up imports through parallelization and refactored pipelines. It also strengthens configuration, Docker deployment, and overall robustness, positioning the tool as a more scalable and modern solution for vulnerability management workflows. https://www.vulnerability-lookup.org/2026/03/22/cpe-guesser-2.0-released/