Posts

Showing posts from August, 2025

Finding More Zero Days Through Variant Analysis

Semgrep's blog post, "Finding More Zero Days Through Variant Analysis," authored by Eugene Lim, delves into leveraging root cause analysis of known vulnerabilities to identify similar flaws within the same codebase. By examining patch diffs and CVE advisories, researchers can pinpoint recurring coding patterns that may lead to multiple vulnerabilities. This approach enables the creation of targeted Semgrep rules to detect these variants, enhancing the efficiency of vulnerability discovery. Lim illustrates this method by analyzing integer overflow vulnerabilities in the Expat XML parsing library, demonstrating how understanding the underlying cause can facilitate the identification of related issues.   https://semgrep.dev/blog/2025/finding-more-zero-days-through-variant-analysis/

Phishing Emails Now Target Users and AI Defenses

A recent phishing campaign has evolved to not only deceive users but also manipulate AI-based defenses. The attackers employed a "password expiry notice" as a lure, directing recipients to a Gmail-themed login page. Hidden within the email's plain-text MIME section was an AI prompt designed to confuse automated analysis systems, potentially leading them to misclassify the phishing attempt. This dual-layer strategy targets both human users and AI defenses, highlighting the increasing sophistication of phishing tactics.   https://malwr-analysis.com/2025/08/24/phishing-emails-are-now-aimed-at-users-and-ai-defenses

Nx Build System Package Compromised with Data-Stealing Malware

On August 26, 2025, the popular Nx build system package on npm was compromised with data-stealing malware. The malicious versions were live for just over five hours, potentially affecting thousands of developers. The malware targeted SSH keys, npm tokens, and .gitconfig files, and even leveraged AI CLI tools like Claude, Gemini, and q for reconnaissance and data exfiltration. The attack originated from a compromised maintainer account via a leaked token, with a secondary wave exploiting stolen credentials to expose private repositories. Immediate remediation includes securing repositories, isolating affected users, and revoking access tokens, while developers are advised to check for compromised versions and strengthen supply chain security. https://www.stepsecurity.io/blog/supply-chain-security-alert-popular-nx-build-system-package-compromised-with-data-stealing-malware

Monitoring MCP Traffic Using eBPF: Part 1

 In the first installment of his series, Alex Ilgayev introduces MCPSpy, an open-source tool designed to monitor Model Context Protocol (MCP) traffic. MCP is an emerging standard that enables AI applications to communicate with external tools and data sources. Ilgayev discusses the motivations behind developing MCPSpy, the choice of eBPF for monitoring, and the tool's initial implementation. He also outlines the limitations of the current version and hints at future developments, such as inspecting encrypted HTTPS-based MCP communications over TLS. The article emphasizes the importance of visibility in securing AI-driven tools and sets the stage for deeper exploration in subsequent parts of the series. https://blog.alexil.me/monitoring-mcp-traffic-using-ebpf-part-1-c445b76377cf

Betting Against the Models: Rethinking AI Security Strategies

In his recent article, Shrivu Shankar critiques the emerging cybersecurity market focused on "Security for AI" startups, arguing that many are built on a flawed premise: betting against the rapid evolution of foundational AI models. He identifies two main predictions that he believes are misguided The first is the notion that companies can build durable businesses by patching the current, transient weaknesses of foundational models. Shankar points out that defense is highly centralized around a few foundational model providers, and third-party tools will face an unwinnable battle against a constantly moving baseline, leading to a rising tide of false positives. He suggests that the market for patching model flaws is a short-term arbitrage opportunity, not a long-term investment. The second prediction is that AI agents can be governed with the same restrictive principles used for traditional software. Shankar argues that an agent's utility is directly proportional to the...

Lessons from Zscaler Founder Jay Chaudhry

 In this episode of Inside the Network, Jay Chaudhry, founder and CEO of Zscaler, shares his journey from growing up in a remote Indian village to building one of the world’s most valuable cybersecurity companies. He discusses launching Zscaler in 2007 with his own capital, pioneering the Zero Trust cloud security model, and overcoming early skepticism about cloud-based enterprise security. Jay offers insights on founder mindset, the importance of focus and alignment, early go-to-market strategies, knowing when to pivot, and the future of cybersecurity, including the decline of private networks and traditional firewalls. The episode highlights his principles of conviction, humility, and disciplined long-term thinking for building enduring companies. https://insidethenetwork.co/episodes/jay-chaudhry-betting-on-yourself-and-building-a-40b-zero-trust-giant-in-zscaler

Gartner Got Shift Left Wrong

In this article, Tony Turner critiques Gartner's interpretation of the "shift-left" approach in software development and security. He argues that Gartner's perspective may oversimplify the complexities involved in integrating security practices earlier in the development lifecycle. Turner emphasizes that while shifting left aims to identify and address security issues sooner, it requires a nuanced understanding of the development process and the appropriate tools and methodologies. He suggests that merely adopting a shift-left strategy without proper implementation can lead to challenges such as increased developer workload, potential burnout, and the risk of overlooking critical security concerns. Turner advocates for a balanced approach that combines early integration of security practices with ongoing collaboration between development and security teams to ensure effective risk management. https://www.linkedin.com/pulse/gartner-got-shift-left-wrong-tony-turner-0r...

Passing the Security Vibe Check – The Hidden Risks of Vibe Coding

Databricks’ AI Red Team highlights the risks of “vibe coding,” where developers use generative AI to quickly scaffold code with minimal guidance. While convenient, this approach often produces insecure code, including issues like arbitrary code execution through unsafe deserialization and memory corruption from improper handling of binary data. The team shows that structured prompting strategies—such as applying security-focused prompts, language-specific guidance, and self-reflection loops—can significantly lower vulnerability rates. Testing demonstrated that self-reflection prompts in particular reduced insecure outputs by about half without substantially harming code quality.  https://www.databricks.com/blog/passing-security-vibe-check-dangers-vibe-coding

Critical Takeaways from Black Hat and DEF CON Beyond the Hype

The New Stack article highlights that the Black Hat and DEF CON conferences showcase groundbreaking tools and research, but it’s essential for practitioners to sift through vendor noise to identify genuinely useful advancements. A major theme is the rising attack surface driven by increasingly complex cloud and container orchestration environments, especially in microservices and serverless architectures. Security must be deeply integrated into DevOps pipelines—automated security checks embedded in CI/CD are becoming fundamental rather than optional. The article also emphasizes the value of community collaboration; sharing experiences and encouraging continuous learning among DevOps and security professionals helps maintain strong defenses amid evolving threats.  https://thenewstack.io/beyond-the-hype-critical-takeaways-from-blackhat-and-defcon

Auth0 Detection Catalog for Proactive Security

Okta introduced the Auth0 Customer Detection Catalog, an open-source set of detection rules that helps organizations proactively identify threats through Auth0 logs. The rules are written in Sigma format, making them adaptable to different SIEM tools. Each detection provides context, such as threat descriptions and recommended responses, enabling faster analyst action. Regular updates and community contributions ensure coverage of new risks. The catalog supports administrators, developers, DevOps, and security analysts in improving monitoring, detecting misconfigurations, and defending against account takeover attempts.  https://sec.okta.com/articles/2025/08/auth0-detection-catalog

CISA Seeks Input on SBOM Update to Tackle Real-World Gaps

The Cybersecurity and Infrastructure Security Agency released a draft update to its Software Bill of Materials (SBOM) minimum elements guidance and is inviting public feedback from now through October 3, 2025. The updated draft introduces four new data fields—component hash, license information, tool name, and generation context—to make SBOMs more practical for automated use across vulnerability management, supply chain security, and operational defenses. It also refines core components like the software producer, component version, and dependency relationships to better align with how SBOMs are generated and used in the field. The guidance aims to foster standardization, improve data quality, and help SBOMs transition from abstract ideals into actionable tools for real-world security operations.  https://www.govinfosecurity.com/cisa-seeks-input-on-sbom-update-to-tackle-real-world-gaps-a-29280

A Simple PSQL MCP Server’s SQL Injection: Bypassing Read-Only Safeguards

In a recent post, the author reveals a serious vulnerability in a Python-based Model Context Protocol (MCP) server designed to provide AI agents with database access in read-only mode. Despite its intention to restrict operations to harmless SELECT statements, the implementation suffers from naïve input handling that fails to enforce proper access control. Because PostgreSQL allows multiple SQL statements separated by semicolons, an attacker can sneak in commands like “COMMIT; DROP SCHEMA public CASCADE;” to terminate the read-only transaction and execute dangerous write operations. This exploit cleanly bypasses the intended safety measures. The underlying lesson: relying solely on superficial input filtering is dangerously inadequate when interfacing with PostgreSQL, and proper access control mechanisms—beyond just filtering—are absolutely essential.  https://www.nodejs-security.com/blog/how-to-bypass-access-control-in-postgresql-in-simple-psql-mcp-server-for-sql-injection/

Our First Outage from LLM-Written Code

The Sketch team shared how a series of outages in July 2025 were caused by a subtle bug introduced by code refactored with the help of a large language model. After deployment, the system worked normally at first but soon suffered from CPU spikes and slowdowns, with the problem oddly triggered whenever the CEO logged in. In the process of diagnosing, they temporarily blocked the CEO’s account, which seemed to solve the issue until it happened again. The root cause was traced to a small change during an automated file move: a break statement had been replaced with continue, creating an infinite loop. This seemingly minor alteration slipped past human review, buried among otherwise harmless changes. To address it, the team improved their agent to preserve code exactly during moves and suggested that better tooling, such as cross-hunk change detection in Git, could help catch similar issues in the future.  https://sketch.dev/blog/our-first-outage-from-llm-written-code

Subverting AIOps Systems Through Poisoned Input Data

Bruce Schneier highlights a groundbreaking security study exposing how AI-driven IT operations tools—known as AIOps—can be manipulated through tainted telemetry. Researchers reveal that autonomous agents relying on logs, performance metrics, and alerts can be tricked by fabricated data into executing harmful actions, such as downgrading software to vulnerable versions. Their attack framework, aptly named AIOpsDoom , uses reconnaissance, fuzzing, and AI-generated adversarial inputs to automatically influence agent behavior without needing prior knowledge of the target system. As a defense, they propose AIOpsShield , a mechanism that sanitizes incoming telemetry by leveraging its structured format and minimizing reliance on user-generated content. Tests show it effectively blocks such attacks without degrading system functionality. This work serves as a critical warning: even systems designed to automate IT resilience can become a weak point if data integrity isn't safeguarded. https...

A Fuzzy Escape: From Fuzzing to VM Breakouts

In the blog post “A Fuzzy Escape: A tale of vulnerability research on hypervisors,” Google Bug Hunters recount the intensive—and often messy—journey of uncovering a virtual machine escape flaw. The narrative walks through the iterative process of designing and refining fuzzing techniques, supplemented by static analysis, to probe hypervisor behavior. The researchers detail how initial fuzzing revealed unexpected behaviors, how they adjusted fuzz targets and instrumentation accordingly, and how a combination of persistence, tooling innovation, and deep dives into debugging eventually uncovered a serious vulnerability that could allow code execution outside a guest VM’s isolation boundary.   https://bughunters.google.com/blog/5800341475819520/a-fuzzy-escape-a-tale-of-vulnerability-research-on-hypervisors

Takeaways from BlackHat and DEFCON

At BlackHat and DEFCON, experts stressed that security must be tightly integrated into DevOps workflows rather than treated as an afterthought. With cloud-native systems, microservices, and serverless architectures increasing complexity, the attack surface continues to expand, making continuous, adaptive defenses essential. Automation was highlighted as a powerful way to catch vulnerabilities early in CI/CD pipelines, but attendees cautioned against being swayed by hype and urged careful selection of effective tools. Finally, both conferences reinforced the importance of community collaboration, with knowledge-sharing seen as a crucial way to strengthen resilience against evolving threats.  https://thenewstack.io/beyond-the-hype-critical-takeaways-from-blackhat-and-defcon/

AI generated code remains highly insecure

The article explains that while large language models have greatly improved in producing syntactically correct code, with over 90 percent of outputs compiling without errors, security has not kept pace. Only about 55 percent of AI generated code passes vulnerability scans, showing no significant improvement over time. Research on more than 100 models across 80 coding tasks found common flaws such as SQL injection, cross site scripting, cryptographic weaknesses and log injection. Java was especially problematic, with average security pass rates as low as 28.5 percent. Hallucinated dependencies, where models invent non existent libraries, pose additional risks by enabling attackers to publish malicious packages under those names. The piece stresses that developers cannot rely on LLMs for secure code and must integrate thorough security validation, remediation tools and training into AI assisted development.  https://www.darkreading.com/application-security/llms-ai-generated-code-wild...

Critical flaw in CVE scoring undermines vulnerability prioritization

The article highlights that despite the flurry of new CVEs (over 33,000 in 2024), only a small fraction of vulnerabilities categorized as “critical” truly pose exploitable risks. In fact, recent analysis found that merely 12 percent of CVEs deemed critical by government agencies are legitimately that severe. In a dataset of 140 high-profile CVEs published in 2024, 88 percent of those marked critical and 57 percent of those marked high were over-ranked, with only 15 percent proving truly exploitable. This points to a growing misalignment between theoretical severity scores and real-world impact. The piece urges security teams to go beyond CVSS baselines and incorporate contextual analysis—assessing exploitability, applicability to their environment, and attack surface exposure—to better allocate limited resources and focus on the most meaningful threats.  https://www.darkreading.com/vulnerabilities-threats/critical-flaw-cve-scoring

Researchers bypass GPT-5 guardrails via narrative jailbreak and zero click agent attacks

Cybersecurity researchers uncovered a jailbreak technique called Echo Chamber that, combined with narrative driven steering, can bypass GPT-5’s safeguards. The method embeds subtle malicious context in early prompts and reinforces it through low salience storytelling, avoiding detection while nudging the model to produce restricted content. Harmless seeming requests can, over multiple turns, lead to harmful instructions, such as making a Molotov cocktail. The article also describes AgentFlayer, a set of zero click AI agent attacks that use prompt injections hidden in documents or cloud stored files to automatically exfiltrate sensitive data without user interaction.  https://thehackernews.com/2025/08/researchers-uncover-gpt-5-jailbreak-and.html

Misconfigurations are not vulnerabilities

The article clarifies that misconfigurations and vulnerabilities are distinct issues. Vulnerabilities are code level flaws in the SaaS provider’s platform that only the vendor can fix. Misconfigurations occur when customers incorrectly set up the service, such as granting excessive third party access or exposing internal tools, and are under the customer’s control. It emphasizes the shared responsibility model in SaaS, where providers secure the infrastructure and customers must correctly configure identity, permissions, data sharing and integrations. Misunderstanding this division can create dangerous blind spots.  https://thehackernews.com/2025/08/misconfigurations-are-not.html

AI crafted npm package drains Solana wallets from over 1500 users

A malicious npm package named @kodane/patch manager, created with the help of AI, was uploaded on July 28, 2025, posing as a legitimate tool for license validation and registry optimization. It contained a hidden postinstall script that deployed a cryptocurrency wallet drainer across Windows, macOS and Linux. The malware connected to a command and control server, identified victims, and drained Solana funds to a hard coded wallet. The package, which featured polished code with comments, emojis and styled documentation, was downloaded over 1500 times before removal.  https://thehackernews.com/2025/08/ai-generated-malicious-npm-package.html

Twenty years of cybersecurity consolidation

The article examines how the cybersecurity industry has consolidated over the past two decades, shrinking nearly two hundred companies into just eleven major players. Using the four stage framework of opening, scale, focus and balance, it shows how quickly the sector has matured and merged around dominant firms. It discusses the implications of this consolidation, the stage the industry is currently in, and what the next decade may bring. It includes a detailed visualization and analysis of past trends and potential future directions.  https://ventureinsecurity.net/p/20-years-of-cybersecurity-consolidation

Archive of 0day.today exploits

This repository preserves an archive of exploit data originally hosted on 0day.today, a long running public repository of proof of concept exploits and shellcode. In early 2025, 0day.today went offline and later returned without its previous content, leaving years of technical exploit data erased from the internet. Because the site used anti bot protections, much of its content was never archived publicly. This project serves as a historical preservation effort to prevent permanent loss, support security researchers, educators and defenders, and add context to CVEs that are poorly documented elsewhere. The archive includes a top level index.json listing metadata such as exploit ID, date, category, platform, author, CVEs, title and original link, with each exploit stored in its category directory as a plain text file named by its exploit ID. It is licensed under Apache 2.0.  https://github.com/vulncheck-oss/0day.today.archive

Scaling Netflix’s threat detection without streaming

The article describes Netflix’s experience building a real time threat detection pipeline in 2018 using a hybrid approach called the Psycho Pattern, combining Spark, Kafka or SQS, and Airflow with micro batch execution. The system worked but faced latency of 5 to 7 minutes, memory spikes and scaling issues. A later migration attempt to Flink streaming slightly reduced latency but did not improve detection quality and added engineering complexity. The main challenge was false positives and poor signal quality, which could have been addressed through better data validation, improved machine learning precision and smarter memory strategies. Key lessons include trusting micro batch when sufficient, treating the watermark table as the heartbeat of the system, prioritizing accuracy over speed and questioning technology changes before adopting them.  https://blog.dataexpert.io/p/scaling-netflixs-threat-detection

AI powered GitHub Action for real time security scans

 This repository provides a GitHub Action that uses Claude to automatically review code changes for security vulnerabilities. It scans pull requests in the CI/CD pipeline and posts inline comments highlighting issues like SQL injection, cross site scripting, authentication flaws, insecure data handling and dependency problems. Developers can also run ad hoc security checks from the terminal using the /security review command, which analyzes the codebase, explains detected issues and suggests or applies fixes. The project is open source, MIT licensed and created by Anthropic. https://github.com/anthropics/claude-code-security-review