
Showing posts from December, 2025

STRIDE GPT AI-Powered Threat Modeling Web App

STRIDE GPT is a web-based application that uses large language models to help teams create threat models automatically based on the STRIDE methodology. Users describe their application’s architecture and security-relevant context, and the tool generates a comprehensive list of threats categorized by STRIDE, as well as optional attack trees, DREAD risk scores, suggested mitigations, and even Gherkin test cases. It supports multiple LLM providers and aims to simplify design-phase threat analysis, making proactive security assessment more accessible. https://stridegpt.streamlit.app/

Threat Modeling Tool Directory on GitHub

The Toreon Threat Modeling Tool Directory on GitHub is a curated list of tools that support or automate the design-time threat modeling process. It focuses exclusively on software, code, libraries, or services that help practitioners systematically identify, analyze, and mitigate threats during system design. The directory lists a variety of tools — from classic diagram and risk-analysis applications to newer AI-augmented threat modeling tools — and specifies inclusion criteria that emphasize practical support for threat modeling workflows, excluding operational threat intelligence or purely conceptual frameworks. The repository invites contributions to expand and enhance the list of available tools. https://github.com/Toreon/Threat-Modeling-Tool-Directory

Why Data Security and Privacy Should Start in Code

The article explains that the rapid rise of AI-assisted coding and app generation has dramatically expanded the number of applications and the speed of change, outpacing traditional data security and privacy approaches that are largely reactive. It argues that many existing tools only detect issues after data is already in production and miss hidden flows to third-party and AI integrations. To address this, embedding detection and governance controls directly into development is essential. The piece highlights proactive code-level analysis as a way to catch sensitive data exposure, outdated data maps, and unmanaged AI use early, suggesting that prevention at the source is more effective than relying on post-deployment tools. It also profiles a privacy code scanner that traces sensitive data and generates compliance documentation to help maintain privacy as code evolves.  https://thehackernews.com/2025/12/why-data-security-and-privacy-need-to.html

The Psychology of Bad Code Part 2 – Building Systems That Support Secure Developer Behavior

The article argues that insecure code isn’t due to laziness or malice but is rooted in human behavior under pressure and incentives, and that security programs should focus on creating systems that make secure decisions easier. It proposes secure defaults, embedding security practices into the software development lifecycle, and using tools to guide developers toward secure choices. It also emphasizes training that builds habits rather than just knowledge and measuring success by behavior change instead of compliance metrics. https://shehackspurple.ca/2025/12/23/the-psychology-of-bad-code-part-2-building-systems-that-support-secure-developer-behavior/

Docker Makes Hardened Images Free in Container Security Shift

Docker has made its catalogue of more than 1,000 hardened container images freely available under an open source Apache 2.0 licence, removing previous commercial restrictions and potentially raising the overall security baseline for containers. These Docker Hardened Images are built on Debian and Alpine, strip out unnecessary components to minimize attack surface, include SBOMs and cryptographic provenance, and aim to reduce vulnerabilities by up to 95 percent compared to traditional images. The move responds to escalating supply chain threats and includes additional tooling such as Hardened Helm Charts and hardened servers for AI workloads. Docker will continue to offer enterprise tiers with SLAs for faster CVE remediation and extended lifecycle support, while the free offering enhances accessibility for all developers.  https://www.infoq.com/news/2025/12/docker-hardened-images/

Is Vibe Coding Secure? Conflicting Insights from Two Key Studies

This LinkedIn article examines two recent, credible studies that appear to contradict each other on the security of AI-generated or "vibe coded" applications. The first, SusVibes, found that while AI models like Claude 4 Sonnet achieved 61% functional correctness on complex, real-world coding tasks, over 80% of that working code contained serious security vulnerabilities (e.g., code injection, logic flaws), with only 10.5% of solutions being fully secure. The second study by Invicti, which generated over 20,000 simple web apps, found a more optimistic picture: modern LLMs have dramatically improved at avoiding basic vulnerabilities like SQL injection and XSS but systematically introduced new, predictable risks by replicating hardcoded secrets (like "supersecretkey"), common credentials, and standard endpoints from their training data. The article reconciles these findings by highlighting their different scopes: Invicti's study shows AI is better at basic securit...

Scanner Tool for Detecting Critical "React2Shell" RCE Vulnerabilities in React and Next.js

This GitHub repository contains a comprehensive scanning toolset designed to detect and remediate two critical, unauthenticated remote code execution (RCE) vulnerabilities—CVE-2025-55182 (React) and CVE-2025-66478 (Next.js)—both rated CVSS 10.0. Dubbed "React2Shell," this flaw in the React Server Components (RSC) Flight protocol allows a single crafted HTTP request to deserialize into server-side code execution on vulnerable systems. The project provides two primary tools: a Software Composition Analysis (SCA) scanner to identify vulnerable dependencies in a codebase, and a web Dynamic Application Security Testing (DAST) scanner to actively probe live endpoints and validate exploitability in production environments. The web scanner includes a full test lab with exploit examples and is capable of scanning targets at scale, generating multiple report formats, and correlating findings with known attack patterns. The repository emphasizes that this is a critical security incident...

Malicious GitHub Repositories Masquerading as OSINT and AI Tools to Deliver PyStoreRAT Malware

A new malware campaign is distributing a previously undocumented, modular remote access trojan (RAT) called PyStoreRAT via deceptive GitHub repositories. The threat actors create and promote Python repositories that pose as legitimate Open Source Intelligence (OSINT) tools, AI utilities, or security software, gaining popularity and trust on the platform. After building credibility, they silently add a malicious payload in a "maintenance" commit; this payload is a simple loader that downloads and executes a remote HTA file, deploying the PyStoreRAT malware. PyStoreRAT acts as a sophisticated backdoor capable of downloading and running additional payloads (like the Rhadamanthys stealer), executing scripts in memory, stealing cryptocurrency wallet data, and maintaining persistence via a disguised scheduled task. The campaign, which shows signs of Eastern European origin, highlights how attackers are abusing the inherent trust in platforms like GitHub to distribute stealthy, ...

A Practical Guide to Mitigating Browser Extension Risks in the Wake of the ShadyPanda Campaign

Following the exposure of the long-running "ShadyPanda" campaign—which saw malicious actors compromise popular Chrome and Edge extensions with millions of installs—this article provides a guide for organizations to reduce browser extension risks. The attack demonstrated how a trusted extension can be silently updated to become spyware, stealing session cookies to hijack authenticated SaaS accounts and bypassing multi-factor authentication. To defend against such supply-chain attacks, the article recommends four key steps: 1) Enforce extension allow lists and governance by vetting and approving only necessary extensions; 2) Treat extension access with the same caution as third-party OAuth app access, integrating it into identity management; 3) Conduct regular audits of extension permissions and publisher details; 4) Implement technical monitoring and user awareness programs to detect suspicious extension behavior. The core message is that browsers, as a critical bridge between...

Google to Discontinue Dark Web Monitoring Service in 2026, Citing Lack of Actionable Guidance

Google has announced it will shut down its Dark Web Report tool in February 2026, less than two years after making it widely available. The company stated that user feedback indicated the tool provided general information but did not offer clear, actionable steps for users to protect themselves. Key dates include the cessation of new scans on January 15, 2026, and the complete retirement of the feature on February 16, 2026. Google will delete all associated user data at that time but allows users to delete their monitoring profiles manually beforehand. The tool, initially launched for Google One subscribers and later expanded to all users, scanned the dark web for personal information like names, emails, and Social Security numbers. Google is now encouraging users to adopt other security measures like passkeys and to use its "Results about you" tool to remove personal information from search results.  https://thehackernews.com/2025/12/google-to-shut-down-dark-web-monitoring.h...

Why Prompt Injection is Fundamentally Different and More Dangerous Than SQL Injection

The article from the UK's National Cyber Security Centre (NCSC) argues that while prompt injection in generative AI systems is often superficially compared to SQL injection, this analogy is misleading and dangerous for designing mitigations. The key difference is foundational: in SQL, a clear technical boundary exists between "data" and "instructions," allowing for complete mitigations like parameterized queries. In contrast, large language models (LLMs) process all input as a sequence of tokens without an inherent understanding of this separation, making them an "inherently confusable deputy." Consequently, prompt injection likely cannot be fully "fixed" in the classical sense. Instead, the risk must be managed through secure system design—such as strictly limiting the LLM's privileges based on the data source it's processing, using techniques to mark untrusted content, and implementing robust monitoring—while accepting it as a persi...
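
The data/instruction boundary the NCSC points to can be made concrete with a short sketch. The snippet below (a minimal illustration using Python's built-in sqlite3 module, not from the article) shows why parameterized queries are a complete fix for SQL injection: the driver transmits user input strictly as data, a separation that has no equivalent for tokens fed to an LLM.

```python
import sqlite3

# In SQL, the driver enforces a hard boundary between instructions (the
# query template) and data (the bound parameters). LLM prompts have no
# such boundary: every token is just input.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

# A classic injection payload supplied as user input.
user_input = "alice' OR '1'='1"

# Parameterized query: the payload is bound as a literal string, so the
# injection attempt simply matches no rows.
rows = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # []
```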

Cyber Deception in Practice: Key Findings and Future Directions from UK-Wide Trials

The UK's National Cyber Security Centre (NCSC) has completed a year-long, large-scale trial of cyber deception technologies, involving 121 organizations and 14 commercial providers. The initiative aimed to test whether defensive tactics like honeypots can improve threat detection, uncover hidden network compromises, and influence attacker behavior. Key findings reveal that while cyber deception is a valuable tool for increasing visibility and imposing costs on adversaries, it is not a plug-and-play solution; its success depends on clear strategy and proper configuration to avoid generating noise or new vulnerabilities. A significant barrier is inconsistent industry terminology, which confuses organizations, and most prefer to keep their use of deception covert, despite evidence that public awareness of it can disrupt attackers. The NCSC concludes there is a strong case for broader adoption in the UK and plans to develop new guidance and services to help organizations understand, im...

Top Kubernetes Security Vulnerabilities and Key Risks Teams Face Today

This article details nine critical Kubernetes security vulnerabilities and common misconfigurations that pose significant risks to modern cloud infrastructure. It highlights that a vast majority of security incidents in Kubernetes environments stem from misconfigurations rather than inherent software flaws. The key risks include exposed dashboards and APIs, over-privileged access through RBAC misconfigurations, and running pods with dangerous privileges like root or container escape capabilities. Specific high-profile vulnerabilities are analyzed, such as a Windows node privilege escalation flaw and a critical issue in the deprecated gitRepo volume type that allows host execution. The article emphasizes that third-party add-ons like ingress controllers and CSI drivers are a major source of vulnerabilities, and it warns against poor secrets management. Overall, the guidance stresses that consistent patching, enforcing least-privilege access, and proactively scanning for misconfiguration...
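
The least-privilege advice above translates directly into RBAC objects. As a minimal illustration (the namespace and role name here are hypothetical), a namespaced Role that grants only read access to pods avoids the over-privileged wildcard grants the article warns about:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: payments      # hypothetical namespace
  name: pod-reader
rules:
  - apiGroups: [""]        # core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]   # read-only; no create/delete/exec
```

Binding this Role to a specific service account (via a RoleBinding) scopes access far more tightly than a cluster-wide admin grant.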

The Danger of Incomplete Fixes: How a Patched Vulnerability in Argo Workflows Still Allowed Remote Code Execution

Security researchers discovered a critical vulnerability, CVE-2025-66626, in Argo Workflows by analyzing a previous security patch. The patch for an earlier ZipSlip flaw, CVE-2025-62156, intended to prevent path traversal via symlinks during artifact extraction. However, the fix contained a logical flaw: it validated a constructed safe file path but then used the original, attacker-controlled path to create the symbolic link. This mismatch allowed an attacker to craft a tarball that would create symlinks pointing to sensitive locations outside the secure working directory, such as /etc or /tmp, enabling arbitrary file writes. The researchers further demonstrated that this file write primitive could be exploited for full remote code execution by overwriting a specific initialization file executed when a Kubernetes pod starts. The article emphasizes that security patches should be treated as signals for further review, not as conclusive fixes, and warns against over-relying on framework ...
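
The core mistake — validating one value but acting on another — is a recurring pattern worth recognizing. The following is a Python sketch of that pattern (an illustration of the flaw class, not Argo's actual Go code): the check runs against a sanitized path, but the symlink is created from the original attacker-controlled value, so an absolute target still escapes the working directory.

```python
import os
import tempfile

def validate(workdir: str, candidate: str) -> bool:
    # Builds a sanitized path anchored under workdir and checks it.
    full = os.path.normpath(os.path.join(workdir, candidate.lstrip("/")))
    return full.startswith(workdir + os.sep)

def extract_symlink_buggy(workdir: str, link_name: str, target: str) -> None:
    # The validation runs against the sanitized path...
    if not validate(workdir, target):
        raise ValueError("path traversal blocked")
    # ...but the symlink is created from the ORIGINAL value, so an
    # absolute target like "/etc" still points outside workdir.
    os.symlink(target, os.path.join(workdir, link_name))

workdir = tempfile.mkdtemp()
extract_symlink_buggy(workdir, "escape", "/etc")
print(os.readlink(os.path.join(workdir, "escape")))  # /etc
```

The fix is mechanical: create the link from the same value that was validated.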

How Regex Bypasses Led to CVE-2025-13780 in pgAdmin

This Endor Labs post explains a critical Remote Code Execution vulnerability (CVE-2025-13780) in pgAdmin 4 caused by relying on a simple regex-based filter to block dangerous meta-commands in uploaded SQL dumps. The built-in check scanned raw bytes looking for backslash-prefixed commands, but attackers crafted payloads with whitespace sequences (like carriage returns or UTF-8 byte order marks) that the regex didn’t catch while the underlying psql tool still treated them as valid meta-commands, enabling arbitrary shell execution during a restore. The researchers walk through how the flawed filter worked, show concrete bypass payloads, and argue that regex is the wrong tool for security-critical input validation. They note that pgAdmin 9.11 mitigates this by using psql’s restricted mode instead of pre-filtering with regex, shifting enforcement into the component that actually runs the script, and recommend upgrading and hardening environments. https://www.endorlabs.com/learn/when-regex-i...
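
The bypass class is easy to reproduce in miniature. Below is a hypothetical filter in the spirit of the flawed check (not pgAdmin's actual regex): it rejects lines that begin with a backslash meta-command, but a leading carriage return defeats the `^` anchor while psql, per the article, still treats the line as a valid meta-command.

```python
import re

# Hypothetical filter: reject any line starting with a backslash
# meta-command such as \! (shell escape).
META = re.compile(rb"^\\[a-zA-Z!]", re.MULTILINE)

def naive_filter(dump: bytes) -> bool:
    """Returns True when the dump 'looks safe' to this filter."""
    return META.search(dump) is None

blocked  = b"\\! touch /tmp/pwned\n"    # caught: line starts with \!
bypassed = b"\r\\! touch /tmp/pwned\n"  # leading \r defeats the ^ anchor

print(naive_filter(blocked))   # False -- rejected
print(naive_filter(bypassed))  # True  -- slips past the regex
```

Similar prefixes (a UTF-8 byte order mark, for instance) work the same way, which is why the article argues enforcement belongs in the component that actually interprets the script.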

Top 5 API Vulnerabilities of 2025 According to APISecurity.io

The APISecurity.io newsletter Issue 286 reviews the five most common API vulnerabilities seen across 2025, highlighting recurring security gaps that developers must fix. The top issue was missing authentication, where sensitive endpoints didn’t require any login, followed by Broken Object Level Authorization (BOLA), which lets attackers access other users’ data by tampering with identifiers. Excessive data exposure was also frequent, with APIs returning more fields than necessary. Broken function-level authorization allowed unauthorized role actions, and broken authentication mechanisms like weak password handling rounded out the list, showing that fundamental access controls remain the biggest API security risks of the year. https://apisecurity.io/issue-286-the-apisecurity-io-top-5-api-vulnerabilities-in-2025/
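
BOLA, the number-two issue, is the easiest of the five to show in code. The sketch below (a generic illustration, not from the newsletter) contrasts an endpoint that trusts a client-supplied identifier with one that authorizes the object against the authenticated user:

```python
from dataclasses import dataclass

@dataclass
class Invoice:
    id: int
    owner_id: int

# Toy data store: invoice 2 belongs to user 200.
DB = {1: Invoice(1, owner_id=100), 2: Invoice(2, owner_id=200)}

def get_invoice_bola(invoice_id: int, current_user_id: int) -> Invoice:
    # BOLA: the client-supplied identifier is trusted outright.
    return DB[invoice_id]

def get_invoice_checked(invoice_id: int, current_user_id: int) -> Invoice:
    # Fix: verify the object actually belongs to the caller.
    invoice = DB[invoice_id]
    if invoice.owner_id != current_user_id:
        raise PermissionError("not your object")
    return invoice

leaked = get_invoice_bola(2, current_user_id=100)
print(leaked.owner_id)  # 200 -- user 100 just read user 200's invoice
```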

Why the MITRE ATT&CK Framework Actually Works

The article explains that traditional security often leaves analysts overwhelmed by reactive alerts with little context, and MITRE ATT&CK provides a solution by mapping real-world adversary behavior into a living matrix of tactics and techniques that show how attacks unfold rather than just what happened. ATT&CK, developed by the nonprofit MITRE, organizes observed attacker actions into a structured taxonomy that teams can use to align detection logic, gap-analysis, and defense strategy with actual adversary behavior. By tagging detection rules with ATT&CK technique IDs and measuring coverage across tactics, organizations gain visibility into where they are strong or weak, anticipate attacker moves instead of chasing noise, and continuously improve security posture as the framework evolves with real threat intelligence.  https://levelup.gitconnected.com/why-the-mitre-att-ck-framework-actually-works-29ac26d2d20c

Why Dependency Cooldowns Improve Open Source Supply Chain Security

The post argues that developers should delay automatically adopting newly published open-source packages for a short “cooldown” period before using them, giving security scanners and researchers time to detect and report compromised releases. Supply chain attacks typically exploit new versions within hours or days of publication, so even a modest waiting period reduces exposure; the author argues it would have prevented most recent high-impact attacks. Implementing cooldowns is free and easy with tools like Dependabot and Renovate, though the post cautions that cooldowns aren’t a perfect fix and may delay urgent security updates. https://blog.yossarian.net/2025/11/21/We-should-all-be-using-dependency-cooldowns
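
In Renovate, a cooldown can be expressed with the `minimumReleaseAge` option (consult the current Renovate docs for your version; the seven-day window and npm scope below are illustrative choices, not a recommendation from the post):

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "packageRules": [
    {
      "matchManagers": ["npm"],
      "minimumReleaseAge": "7 days"
    }
  ]
}
```

With this rule in place, Renovate will not propose an npm update until the release has been public for at least a week.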

OWASP Social OSINT Agent Deep-Dive

The OWASP Social OSINT Agent is an open-source autonomous intelligence tool built to gather, analyze, and synthesize publicly available social media data across platforms like Twitter/X, Reddit, GitHub, Hacker News, Bluesky, and Mastodon, using text and vision-capable large language models via any OpenAI-compatible API to produce coherent analytical reports from scattered activity. It supports flexible fetch controls, intelligent rate-limit handling, structured prompt-based analysis, robust caching to reduce API calls, offline mode, interactive CLI and Docker deployment, and both interactive and programmatic report generation. This project aims to help security professionals automate deep open-source intelligence investigations by turning raw social data into structured insights.  https://github.com/bm-github/owasp-social-osint-agent

React2Shell: Deep Dive Into CVE-2025-55182

The Wiz blog analyzes a critical vulnerability in React Server Components called React2Shell (CVE-2025-55182) that allows unauthenticated remote code execution (RCE) by exploiting insecure deserialization of incoming payloads, and while early reports focused on Next.js because it exposes this feature by default, the issue affects any framework using the vulnerable RSC logic. Wiz’s research shows active exploitation in the wild where attackers chain the bug to drop cryptominers, harvest cloud and developer credentials, gain interactive shells in containerized workloads, and install persistent backdoors. They explain how the exploit works at a technical level with crafted “gadget” payloads that trigger arbitrary server-side execution, note that other ecosystems like Vite and Waku with RSC support are also at risk, and emphasize that defenders must patch to fixed releases and use detection tools to find and mitigate compromised instances. https://www.wiz.io/blog/nextjs-cve-2025-55182-reac...

Zombie Workflows: GitHub Actions Vulnerabilities and Platform Fix

The SonarSource blog explains how GitHub Actions, a popular CI/CD automation system, can be exploited through insecure workflows that run on the pull_request_target event because they may expose secrets or privileged tokens to untrusted code execution, a class of issues known as Pwn Requests. When vulnerabilities are “fixed” only in the default branch, attackers can still trigger older vulnerable versions of workflow files from other branches, a pattern the authors call “Zombie Workflows.” Their research found hundreds of potentially vulnerable workflows across popular repositories, and they reported these to maintainers. GitHub has changed the behavior of pull_request_target so that workflow versions are taken from the default branch to mitigate this issue, but developers still need to guard against other workflow vulnerabilities and can use tools like SonarQube to scan for them. https://www.sonarsource.com/blog/zombie-workflows-a-github-actions-horror-story
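
The dangerous shape of a Pwn Request is compact enough to show in full. The workflow below is a deliberately vulnerable illustration (secret name and steps are hypothetical): the privileged `pull_request_target` trigger is combined with a checkout of the attacker's PR head, whose scripts then run with access to repository secrets.

```yaml
# DANGEROUS pattern (illustrative) -- do not copy into a real repo.
on: pull_request_target        # runs with repo secrets and a write token
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          # Checks out the untrusted PR branch into the privileged job...
          ref: ${{ github.event.pull_request.head.sha }}
      # ...then executes its build scripts, which the PR author controls.
      - run: npm install && npm test
        env:
          DEPLOY_TOKEN: ${{ secrets.DEPLOY_TOKEN }}   # hypothetical secret
```

GitHub's platform change addresses the "zombie" variant by resolving the workflow definition from the default branch, but workflows that intentionally check out and execute PR code under `pull_request_target` remain exploitable.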

Shift Left Enterprise-Scale at Cloudflare

Cloudflare describes “shifting left” as bringing security and validation earlier in the development process by embedding testing, security audits, and compliance checks into the CI/CD pipeline so issues are caught before deployment, reducing risk and human error. To manage hundreds of internal production accounts consistently and securely, they moved from manual dashboard changes to managing configurations as Infrastructure as Code using Terraform, a custom CI/CD setup, and a centralized monorepo, with peer review and automated policy enforcement built in. They define security baselines and policies in code with Open Policy Agent, enforce them at merge request time, and handle exceptions through formal requests. Along the way they faced challenges such as onboarding legacy manual configurations, managing drift between code and deployed state, and keeping tools in sync with Cloudflare’s evolving APIs, but found that proactive automation improves both security and engineering velocity. h...

GuardScan — Privacy-First Free AI Code Review & Security Scanner

GuardScan is a free, open-source CLI tool for code security, quality, and review. It performs static analyses to detect hard-coded secrets, dependency vulnerabilities, OWASP-Top-10 style flaws, insecure Docker/IaC configurations, license/compliance issues, and code smells. Optionally, it can integrate with your own AI provider (e.g. OpenAI, Claude, Gemini, or a local model) to offer AI-enhanced features: code review, explanations, documentation generation, test generation, refactoring suggestions, commit-message generation, threat modeling, and more — all while keeping your source code local and private. Because GuardScan runs fully on your machine (or infrastructure), it doesn’t require uploading code to third-party services; it’s free forever, and designed to work offline or in air-gapped environments.  https://github.com/ntanwir10/GuardScan

Researchers Hack Gemini CLI Through Prompt Injections in GitHub Actions

Researchers found that Google’s Gemini CLI could be exploited through prompt-injection attacks when used in GitHub Actions. By hiding malicious instructions inside files like README.md or other repository content, attackers could trick the CLI into executing arbitrary shell commands with full privileges. The issue came from weak command validation, which allowed harmful payloads to be appended to seemingly safe commands. Google patched the flaw after disclosure. The case shows how integrating AI tools into CI/CD pipelines can create new, high-impact security risks when untrusted content is processed as prompts.  https://itsecuritynews.info/researchers-hack-googles-gemini-cli-through-prompt-injections-in-github-actions/

Top Docker Security Tools for 2026

The article from Aikido.dev argues that deploying Docker containers brings convenience but also big security risks — a single vulnerable container image can be replicated many times, spreading risk across infrastructure. To address that, the article presents a curated list of the best Docker (and container) security tools, comparing their strengths and use cases: among them are Aikido Security, Anchore, Aqua Security, Prisma Cloud, Falco, Snyk Container and Qualys Container Security. Each tool covers different needs: image-scanning before deployment (vulnerability detection, SBOM generation, compliance), runtime protection (threat detection, behavior monitoring), or a full container lifecycle approach. The main message is that container security should be multi-layered: scanning images before deployment, validating configurations and dependencies, and monitoring running containers. The article highlights that there is no one-size-fits-all — many teams will benefit from combining tools...

The Future of DevSecOps: From Shifting Left to Shifting Smart

The article argues that the traditional “shift-left” approach — pushing security checks early in development — is no longer enough. As release cycles accelerate, developers are overburdened, causing security steps to be skipped or rushed. Instead, we need a “shift-smart” model: security that’s continuous, context-aware and automated throughout the software lifecycle. That means unifying all tooling and data across build and runtime, using intelligent automation to prioritize relevant vulnerabilities, and having feedback loops that learn from production incidents. In this new model, security becomes ambient and adaptive — less extra work for developers, and more proactive protection for applications.  https://devops.com/the-future-of-devsecops-from-shifting-left-to-shifting-smart/

AI in CI/CD Pipelines Can Be Manipulated

Researchers found that AI tools used inside CI/CD pipelines can be tricked into running harmful commands. Attackers can insert malicious text into issues, pull requests, or commits, which the pipeline’s AI interprets as instructions. Because these agents often run with high privileges, this can lead to code changes, data exposure, or other serious impacts. The article warns teams to avoid feeding untrusted user content to AI prompts, restrict AI permissions, and treat AI-generated output as untrusted code.  https://www.csoonline.com/article/4101751/ai-in-ci-cd-pipelines-can-be-tricked-into-behaving-badly-2.html

DevOps Still Awaits Its “Cursor Moment”

The article says AI has transformed coding but not operations. DevOps work is still fragmented, manual, and risky, with engineers juggling dashboards, logs, pipelines, and production incidents. Infrastructure is harder for AI because it involves real-time risk, cloud context, compliance, identity, costs, and environment-specific setups. A true “Cursor for DevOps” would need secure access, a unified orchestration layer, human approvals, auditing, and specialized agents for areas like Kubernetes and compliance. Early gains exist, but the field still lacks a mature, reliable AI operations assistant.  https://thenewstack.io/devops-is-still-waiting-for-its-cursor-moment/

Has GitLab’s 2025 Share-Price Slump Created a Buying Window?

GitLab’s stock has fallen roughly 44 percent over the past year, significantly underperforming other tech companies. The decline is driven mainly by market doubts about how quickly the company can turn its AI features into meaningful revenue, rather than by weak performance. Despite the drop, GitLab has reported solid financial results, including strong revenue growth and healthy margins. The article suggests the downturn may offer a potential buying opportunity, but this depends on GitLab proving that its AI strategy can generate consistent long-term demand.  https://finance.yahoo.com/news/gitlab-2025-share-price-slump-002112904.html

Trust Beyond Containers: KubeCon 2025’s Shift Toward Identity and Agent Security

The GitGuardian article explains that KubeCon + CloudNativeCon NA 2025 marked a major shift in how the cloud-native community thinks about security. Instead of relying on network boundaries or IP-based controls, the conference emphasized identity-first security. As Kubernetes environments increasingly run AI workloads, securing containers is no longer enough; organizations must secure automated agents, machine identities, and AI-driven services. The article highlights growing adoption of technologies like SPIFFE and SPIRE to create federated trust domains, allowing systems to authenticate based on strong, verifiable identity. According to the author, the future of cloud-native security will depend on consistent identity governance across containers, clusters, and AI agents, redefining trust at every layer of modern infrastructure.  https://blog.gitguardian.com/kubecon-2025/

Securing Salesforce by Catching Misconfigurations Early

The article explains that Salesforce has evolved into a full business-critical platform, so simple configuration mistakes can create major security risks. Overly broad permissions, unmanaged third-party apps, and low-code customizations can introduce hidden vulnerabilities. Configuration drift — when settings slowly diverge from secure baselines — is a major cause of breaches. It emphasizes that organizations must treat Salesforce like an application-security environment: continuously monitor settings, apply least-privilege access, govern low-code development, protect sandbox environments, and automate audits to catch issues before they turn into data exposure.  https://www.scworld.com/resource/salesforce-security-in-a-shared-responsibility-world-catching-misconfigurations-and-drift-before-they-become-breaches

Zero Trust Evolves Into Built-In Security

Zero trust isn’t fading — it’s being absorbed directly into modern architectures. Instead of bolting on controls, organizations are embedding identity validation, least privilege, continuous monitoring, and segmentation into platforms, cloud services, and development pipelines. This shift reduces complexity, improves default security, and aligns with “security by design.” Vendors now deliver zero-trust principles as native capabilities, allowing teams to focus on governance and configuration rather than building everything manually. The model is maturing from a framework to an inherent architectural baseline.  https://www.scworld.com/perspective/zero-trust-isnt-dead-its-becoming-built-in-security-by-design

CISA Likely to Start 2026 Without a Director

The article explains that CISA will probably enter 2026 without a Senate-confirmed director because nominee Sean Plankey was left out of a recent advancement vote. His nomination is stuck for procedural and political reasons unrelated to his qualifications, including disputes involving telecommunications oversight and a Coast Guard contracting issue. Operating without permanent leadership could weaken CISA’s ability to plan strategically, coordinate national cybersecurity efforts, and manage long-term initiatives during a period of growing cyber threats.  https://www.govinfosecurity.com/no-vote-no-leader-cisa-faces-2026-without-director-a-30208

Command-injection flaw discovered in fast-git-clone (unsafe CLI args lead to arbitrary code execution)

The blog post describes a serious security vulnerability in fast-git-clone (a command-line tool for cloning Git repositories). The tool takes a repository URL from user input and builds a shell command by concatenating unfiltered arguments — this allows attackers to append arbitrary shell commands instead of just a repository URL. For example, running fgc clone "; touch /tmp/clonepwn #" would create the file /tmp/clonepwn, demonstrating code execution beyond the intended git clone. Because CLI tools like fast-git-clone run with the invoking user's full privileges, anyone who can influence the tool's arguments can execute arbitrary commands on that user's system. The disclosure notes that maintainers did not respond to repeated security-report attempts. https://www.nodejs-security.com/blog/command-injection-vulnerability-via-unsanitized-cli-arguments-in-touxing-fast-git-clone/
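
The bug class — and its fix — can be sketched in a few lines. This is a generic Python illustration on a POSIX shell, using `echo` as a harmless stand-in for the real tool's `git clone` invocation; it is not fast-git-clone's actual code:

```python
import os
import subprocess
import tempfile

def run_tool_vulnerable(arg: str) -> None:
    # The fast-git-clone bug pattern: user input concatenated into a
    # shell string, so ";" ends the command and the rest runs too.
    subprocess.run(f"echo cloning {arg}", shell=True)

def run_tool_safer(arg: str) -> None:
    # Argument vector, no shell: arg stays a single argv entry and
    # shell metacharacters are never interpreted.
    subprocess.run(["echo", "cloning", arg])

marker = os.path.join(tempfile.mkdtemp(), "clonepwn")
run_tool_vulnerable(f"; touch {marker} #")
print(os.path.exists(marker))  # True -- the injected touch ran
```

Passing the argument as a list element (or validating it against a strict URL grammar) keeps the payload inert: the safer variant merely echoes the literal string and creates no file.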

Why Google Is Betting on Post-Quantum Cryptography Instead of Quantum Key Distribution

The article explains that Google sees post-quantum cryptography as the only practical path to a quantum-safe internet. PQC algorithms can be deployed today using existing infrastructure, allowing upgrades to encryption, signatures, and key exchange without special hardware. QKD, in contrast, requires dedicated quantum-communication equipment, is expensive, and cannot scale to global networks. Google emphasizes crypto-agility and phased PQC adoption to protect against future “store now, decrypt later” threats, arguing that broad, internet-wide security must rely on solutions that are both robust and deployable at scale.  https://bughunters.google.com/blog/4625466008862720/google-s-commitment-to-a-quantum-safe-future-why-pqc-is-google-s-path-forward-and-not-qkd

The Psychology Behind Bad Code

The article explains that insecure code usually comes from human factors rather than incompetence. Developers often work under stress, deadlines, and unclear incentives, which makes insecure shortcuts feel reasonable in the moment. Cognitive biases and the pressure to deliver quickly encourage copying unsafe code, skipping tests or documentation, and focusing on features instead of quality. The author argues that improving security requires changing environments and processes so secure behavior becomes the easiest path, shifting the focus from blaming developers to fixing systemic pressures that produce bad code. https://shehackspurple.ca/2025/11/27/the-psychology-of-bad-code/

How to Land Your First Cybersecurity Job

The article explains that beginners should start by choosing a focus area such as application security, cloud security, incident response, DevSecOps, or penetration testing. You do this by exploring roles, reading job descriptions, trying beginner courses, and talking to people in the field. After choosing a path, finding a mentor and joining supportive communities helps you stay motivated and connected. The author emphasizes hands-on learning through labs, courses, workshops, personal projects and volunteering for small security tasks at your current workplace. Building a public portfolio, polishing your professional profile and applying even when you don’t meet every requirement are key steps to breaking into the field.  https://shehackspurple.ca/2025/11/21/how-to-get-your-first-job-in-cybersecurity/

CrowdStrike Reveals Hidden Vulnerabilities in AI-Generated Code

CrowdStrike researchers found that code produced by the DeepSeek-R1 model frequently contains security flaws. With neutral prompts, about one in five outputs were vulnerable. When prompts included politically or culturally sensitive terms, the rate of insecure code rose sharply, reaching more than a quarter of all samples. The issues included hard-coded secrets, unsafe input handling, weak or missing authentication, and even broken code presented as secure. The findings reinforce that AI-generated code requires the same security review and testing as human-written code.  https://www.crowdstrike.com/en-us/blog/crowdstrike-researchers-identify-hidden-vulnerabilities-ai-coded-software/
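Hard-coded secrets, the first category listed, have a well-known remediation: resolve credentials at runtime instead of embedding them in source. A minimal sketch (the variable name API_KEY is illustrative, not from the report):

```python
import os

# Pattern often flagged in generated code: a secret baked into source.
#   API_KEY = "sk-live-..."        # anti-pattern: credential in the repo

def get_api_key() -> str:
    # Safer: read the secret from the environment (or a secrets manager)
    # at runtime, and fail loudly if it is missing.
    key = os.environ.get("API_KEY")
    if not key:
        raise RuntimeError("API_KEY is not set")
    return key
```

The same review habit applies to the report's other categories: treat generated code as untrusted input to your normal security review and testing pipeline.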

Adversarial Poetry Can Jailbreak LLMs — Even in a Single Prompt

The paper shows that rewriting harmful or restricted requests as poetry (“adversarial poetry”) can reliably bypass safety mechanisms in large language models (LLMs). Across 25 state-of-the-art models, hand-crafted poetic prompts achieved an average “attack success rate” of 62%, with some models exceeding 90% success. Even when standard harmful prompts from a broad safety benchmark were converted automatically into verse, the poetic versions produced up to 18× higher success rates than their prose originals — showing this vulnerability isn’t limited to a few handcrafted examples.  The authors argue that poetic style — its metaphors, rhythm, and unconventional structure — alone suffices to evade guardrails across many risk domains (cyber-offense, harmful manipulation, dangerous instructions, etc.), exposing a systemic weakness in current alignment and safety-evaluation practices.  https://arxiv.org/html/2511.15304v1

LLVM Gets Built-in “Constant-Time” Support to Better Secure Cryptographic Code

The article describes how Trail of Bits added new compiler-level support to LLVM to help cryptographic code remain safe from timing attacks. They introduced a new intrinsic, __builtin_ct_select, which forces certain operations (like conditional selection) to compile into “constant-time” machine code — meaning their execution time doesn’t vary with secret data. This avoids situations where compiler optimizations accidentally reintroduce timing vulnerabilities in otherwise careful crypto implementations. Because the intrinsic acts as a barrier to optimizer transformations, code using it preserves constant-time behavior across all compilation stages, with only minimal performance overhead. The change has drawn interest from maintainers of cryptographic libraries in languages such as C and Rust, and in environments such as WebAssembly. In short, this work makes it much safer — and easier — for developers to write portable, secure cryptographic code without resorting to hand-written assembly.  h...
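To make the concept concrete, here is the classic branchless-select bitmask idiom that such an intrinsic is designed to protect, sketched in Python purely for readability (Python itself offers no timing guarantees; this only illustrates the logic):

```python
MASK64 = (1 << 64) - 1  # emulate 64-bit unsigned arithmetic

def ct_select(cond: int, a: int, b: int) -> int:
    # Branchless select: returns a when cond == 1, b when cond == 0,
    # with no data-dependent branch. This is the kind of construct an
    # intrinsic like __builtin_ct_select pins down at the machine-code
    # level; in plain C an optimizer may rewrite it back into a branch,
    # silently reintroducing the timing leak.
    mask = (-cond) & MASK64              # 1 -> all ones, 0 -> all zeros
    return (a & mask) | (b & ~mask & MASK64)
```

Because every operation touches both inputs regardless of cond, the value being selected never influences which instructions run — the property the compiler must be prevented from "optimizing" away.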

What Does “Empirical Security” Really Mean — And Why It Matters

The article argues that security shouldn’t be based on intuition or best-guess frameworks, but instead on real data about how security measures and failures actually play out in the wild. It emphasizes that people who take on the role of “security champions” often carry a heavy psychological burden: they see risks and push for better practices when many others don’t notice or care. The piece calls for combining technical defenses with compassionate, people-centric approaches — studying how security actually works in real organizations and learning from lived experiences. It suggests that empirical research and data-driven experiments can give security practitioners evidence to guide decisions and help reduce the loneliness and burnout of those trying to champion privacy and security.  https://www.fightforthehuman.com/empirical-security/

FDA’s Four Required Security Views for Medical Devices

The article explains that new FDA guidance requires manufacturers to provide four security-architecture views when submitting connected medical devices. The Global System View maps all components and data flows. The Multi-Patient Harm View shows how design prevents a breach from affecting multiple patients. The Updateability & Patchability View details how secure updates are delivered throughout the device’s life. The Security Use Case Views demonstrate how security controls behave in real clinical workflows.  https://bluegoatcyber.com/blog/examining-the-fdas-recommended-security-architecture-views-for-medical-device-security/

Critical React Vulnerability (React Server Components) Added to CISA Exploited-Vulnerabilities List After Active “React2Shell” Attacks

The article reports that the vulnerability known as React2Shell (CVE-2025-55182), a critical remote-code-execution bug in React Server Components with a maximum severity score (CVSS 10.0), has been officially included by CISA in its catalog of exploited vulnerabilities. The flaw — an insecure deserialization issue in how React decodes payloads sent to server endpoints — can be triggered by an unauthenticated attacker sending a specially crafted HTTP request, allowing arbitrary code execution on affected servers. According to the advisory, the vulnerability affects React packages (react-server-dom-webpack, react-server-dom-parcel, react-server-dom-turbopack) in versions 19.0.0 through 19.2.0. The bug also impacts frameworks built on React (notably Next.js), even when applications do not explicitly use server functions but simply support server components. Exploitation was observed in the wild soon after disclosure, including use by China-linked threat actors, prompting urgent calls for ...
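As a quick triage aid, the affected version range stated in the advisory can be checked mechanically. A minimal sketch (naive x.y.z parsing only, no pre-release tags; helper names are invented):

```python
def parse(version: str) -> tuple[int, ...]:
    # Naive parse of a plain "x.y.z" version string into comparable ints.
    return tuple(int(part) for part in version.split("."))

def is_affected(react_version: str) -> bool:
    # Affected range per the advisory: 19.0.0 through 19.2.0 inclusive.
    return parse("19.0.0") <= parse(react_version) <= parse("19.2.0")
```

In practice this check would be run against the resolved versions of react-server-dom-webpack, react-server-dom-parcel, and react-server-dom-turbopack in a project's lockfile.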

What “Vulnerability” Means in Risk Analysis from a FAIR Perspective

The article explains that in the FAIR risk-analysis model, vulnerability is not defined as a system weakness but as the probability that a threat event will result in a loss event. It contrasts this with common language and traditional cybersecurity usage, where vulnerability usually means a flaw or weakness. By treating vulnerability as a conditional probability, FAIR enables clear, quantitative risk calculations by combining it with the frequency of threat events. The article argues that this precise definition avoids ambiguity and supports more rigorous risk assessments.  https://www.fairinstitute.org/blog/what-is-vulnerability
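Under this definition, vulnerability plugs directly into FAIR's loss-event-frequency calculation as a conditional probability. A toy illustration (the numbers are invented, not from the article):

```python
def loss_event_frequency(tef_per_year: float, vulnerability: float) -> float:
    # FAIR: LEF = TEF x Vulnerability, where vulnerability is the
    # probability that a threat event becomes a loss event.
    return tef_per_year * vulnerability

# Hypothetical inputs: 50 threat events per year, 10% succeed.
lef = loss_event_frequency(50, 0.10)
print(lef)  # 5.0 expected loss events per year
```

Treating vulnerability as a probability rather than a flaw is what lets the model multiply it against threat-event frequency to get a quantitative loss estimate.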

ZAP Adds Built-in Detection for React2Shell Vulnerability

The latest release of Zed Attack Proxy (ZAP) includes mechanisms to detect the critical-severity vulnerability known as React2Shell (CVE-2025-55182 / CVE-2025-66478), which allows remote code execution in servers using React Server Components — including apps built with Next.js. The announcement says ZAP now offers two detection methods: a passive scan via the Retire.js add-on, and a new “Active Scan Rules” check specifically for React2Shell. Because the vulnerability is so serious and widespread, the team promoted the detection rule directly to “release” quality, noting it makes only a single request per host while offering a highly reliable check.  https://www.zaproxy.org/blog/2025-12-05-react2shell-detection-with-zap/