
Showing posts from February, 2026

Google on Building More Secure and Efficient Software Supply Chains

Google outlines strategies and tools to improve the security and efficiency of software supply chains, emphasizing trends like reproducible builds, standardized provenance metadata, and automated verification of artifacts. The post highlights initiatives such as in-toto and Sigstore that help ensure integrity from source code to deployment, as well as best practices for dependency hygiene, attestations, and cryptographic signing. It stresses collaboration across the ecosystem to reduce risks from compromised builds, dependency confusion, and injected malicious code.  https://security.googleblog.com/2026/02/cultivating-robust-and-efficient.html

Latin America’s Cybersecurity Maturity Trails Rapidly Evolving Threats

This article highlights that many Latin American nations remain behind global peers in cybersecurity preparedness even as threat activity in the region grows sharply. Governments, critical infrastructure operators and the private sector often lack comprehensive incident response plans, up-to-date defenses, skilled personnel and secure cloud practices. Regulatory efforts are uneven, and adoption of frameworks like Zero Trust is nascent. Experts warn that without greater investment in tools, training and governance, the region will continue to struggle against increasingly sophisticated ransomware, espionage and supply-chain attacks, leaving businesses and citizens at heightened risk. ( darkreading.com ) https://www.darkreading.com/threat-intelligence/latin-americas-cyber-maturity-lags-threat-landscape

Attackers Use New Tool to Scan for React2Shell Exposure

Security researchers report that threat actors are now using a newly identified toolkit called “ILovePoop” to scan tens of millions of IP addresses for servers vulnerable to the critical React2Shell flaw (CVE-2025-55182), a high-severity remote code execution vulnerability found in React Server Components and frameworks like Next.js. Initially exploited in broad, noisy campaigns that dropped cryptominers and botnets, the flaw is now the subject of more deliberate reconnaissance against high-value targets including government, defense, finance and industrial organizations. The ongoing scanning underscores that the vulnerability remains actively pursued worldwide months after disclosure. ( darkreading.com )  https://www.darkreading.com/application-security/attackers-new-tool-scan-react2shell-exposure

ForgeProof: Code Provenance for the AI Era

ForgeProof, presented on a dedicated Flying Cloud Technology landing page, appears to be a code provenance and security offering aimed at the “AI era,” likely focused on tracking and verifying the origin, integrity, and history of code artifacts to improve supply chain trust in environments that rely heavily on AI and automated development workflows (context drawn from related mentions of code provenance tools and data surveillance products from Flying Cloud Technology). Flying Cloud itself provides patented data surveillance and enterprise data security solutions that monitor and defend data usage, lineage, and compliance across environments, and ForgeProof appears positioned to extend that trust into code and AI contexts. ( flyingcloudtech.com )  https://forgeproof.flyingcloudtech.com/ See also  https://www.reddit.com/r/devsecops/comments/1rgugcw/why_were_opensourcing_a_code_provenance_tool_now/

CVE Severity Distribution for the Linux Kernel

The article analyzes Common Vulnerabilities and Exposures (CVE) data for the Linux kernel, showing that in 2024 the kernel accumulated 3,108 CVEs, a 79% increase from 2023, with high-severity flaws making up about 42% and critical issues around 4.8% of all entries. The piece breaks down severity categories using CVSS v3.1 scores, highlights that networking and memory management subsystems generate a large share of vulnerabilities, and compares Linux’s CVE counts to other operating systems, noting that the open-source model’s transparency contributes to larger totals. ( commandlinux.com ) https://commandlinux.com/statistics/common-vulnerabilities-and-exposures-cve-severity-distribution-for-linux/

LLMs Generate Predictable Passwords

In this blog post Bruce Schneier explains that large language models (LLMs), including tools like ChatGPT, often produce weak and predictable password suggestions when prompted to generate credentials. Because their outputs are based on patterns learned from common text, the passwords they suggest tend to resemble each other and lack sufficient randomness and entropy, making them easy targets for guessing or brute-force attacks. Schneier argues that relying on LLM-generated passwords weakens security and that truly random password generators or password managers are safer choices for creating strong credentials.  https://www.schneier.com/blog/archives/2026/02/llms-generate-predictable-passwords.html
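
As a contrast with pattern-prone LLM output, a cryptographically secure generator draws each character independently via Python's `secrets` module. A minimal sketch; the 94-character printable alphabet is just an assumption for the entropy arithmetic:

```python
import math
import secrets
import string

def random_password(length: int = 16) -> str:
    """Generate a password with cryptographically secure randomness."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

def entropy_bits(length: int, alphabet_size: int) -> float:
    """Theoretical entropy of a uniformly random password."""
    return length * math.log2(alphabet_size)

pw = random_password(16)
print(pw, f"~{entropy_bits(16, 94):.0f} bits")  # ~105 bits for 16 chars over 94 symbols
```

A 16-character password over 94 symbols has roughly 105 bits of entropy; an LLM suggestion drawn from common patterns has far less, whatever its length.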

Claude Code Security’s Market Shock

Anthropic’s launch of Claude Code Security, an AI-driven tool that scans codebases for vulnerabilities and proposes fixes, has rattled the cybersecurity industry and investors, pushing down stocks of major vendors like CrowdStrike and Palo Alto Networks. The capability places Anthropic in direct competition with established application security providers by using reasoning-based analysis rather than traditional static scanning, though experts say it currently covers only a small part of broader security needs. Despite hype and volatility, long-term investment in cybersecurity innovation remains steady. ( govinfosecurity.com )  https://www.govinfosecurity.com/blogs/claude-code-security-has-shaken-cybersecurity-market-p-4056

The Invisible Key: Securing the New OAuth Token Attack Vector

This talk explains how modern attackers increasingly “log in” rather than break in by abusing OAuth tokens and delegated authorization flows. It reviews OAuth as an authorization framework, common grant flows, and the role of scopes and third-party applications. The speaker highlights how tokens, often lacking MFA and visibility in logs, become powerful yet opaque credentials that security teams struggle to monitor. The session emphasizes the risks of poor scope management, token misuse, and limited oversight, urging stronger visibility, validation, and control over token-based authentication and machine-to-machine access.  https://fosdem.org/2026/schedule/event/DMVVQ9-securing-new-attack-vector-oauth-tokens/
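
To make the opacity concrete, here is a hedged Python sketch of the kind of inspection a security team might run on an OAuth access token in JWT form. The claim names (`scope`, `sub`, `exp`) follow common convention rather than any specific provider, and real validation must verify the signature, which this demo deliberately skips:

```python
import base64
import json

def decode_jwt_claims(token: str) -> dict:
    """Decode a JWT's payload WITHOUT verifying its signature
    (inspection only; real validation must check the signature)."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def has_scope(claims: dict, required: str) -> bool:
    """OAuth scopes are conventionally a space-delimited string claim."""
    return required in claims.get("scope", "").split()

# Build a demo token (header.payload.signature); signature left empty.
claims = {"sub": "svc-account", "scope": "repo:read mail:send", "exp": 1765000000}
body = base64.urlsafe_b64encode(json.dumps(claims).encode()).rstrip(b"=").decode()
token = "eyJhbGciOiJub25lIn0." + body + "."

print(has_scope(decode_jwt_claims(token), "mail:send"))  # True
```

Auditing which scopes a long-lived token actually carries, as above, is one of the visibility steps the talk argues teams routinely skip.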

Benchmarking CodeThreat’s Contextual AI SAST Engine

The blog benchmarks CodeThreat’s AI-powered static application security testing (SAST) engine against other tools using a custom dataset of real-world projects seeded with vulnerabilities. The evaluation shows CodeThreat detecting a high percentage of both technical and business-logic flaws with no false positives, outperforming several traditional rule-based scanners. It emphasizes the importance of contextual analysis that understands developer intent, data flow, and project structure, and highlights how reducing noise and catching complex, multi-file issues improves practical security outcomes. ( codethreat.com ) https://www.codethreat.com/blogs/benchmarking-codethreat%E2%80%99s-contextual-ai-sast-engine

TMDD: Threat Modeling-Driven Development Tool

TMDD is an open-source Python-based CLI tool for integrating continuous threat modeling into software development workflows. It uses a lightweight, YAML-based framework that lets you define and maintain threat models alongside your code, helping teams identify and document potential security threats early. TMDD supports generating structured threat descriptions, validating models, and producing reports, and can also assist AI coding assistants in writing more secure code by feeding them security-aware prompts based on the threat model. ( github.com )  https://github.com/attasec/tmdd
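
To illustrate the approach, here is a hypothetical example of the kind of YAML threat-model entry described; TMDD's actual schema may differ, so consult the repository for the real format:

```yaml
# Hypothetical threat-model entry; TMDD's actual schema may differ.
component: login-api
threats:
  - id: T-001
    title: Credential stuffing against /login
    stride: Spoofing
    mitigations:
      - Rate-limit authentication attempts per IP and per account
      - Require MFA for privileged accounts
    status: mitigated
```

Keeping entries like this next to the code lets reviews and AI prompts reference concrete, versioned threats rather than ad-hoc security folklore.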

CycloneDX BOM Studio Visual Editor

CycloneDX BOM Studio is an open-source, browser-based visual editor for creating, editing, validating, and exporting CycloneDX Bills of Materials (BOMs) without needing command-line tools or manual JSON editing. It provides structured forms, real-time schema validation, dependency visualization, and support for multiple CycloneDX specification versions, making it easier to build accurate software or supply chain inventory manifests for security and compliance. ( github.com )  https://github.com/CycloneDX/cyclonedx-bom-studio
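
For orientation, a minimal CycloneDX 1.5 JSON document of the kind such an editor produces looks like this (the single component is an arbitrary example):

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "version": 1,
  "components": [
    {
      "type": "library",
      "name": "lodash",
      "version": "4.17.21",
      "purl": "pkg:npm/lodash@4.17.21"
    }
  ]
}
```

Even this tiny example shows why schema validation and structured forms help: fields like `purl` must follow a precise format for downstream vulnerability matching to work.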

Gandalf AI Prompt Injection Game

Gandalf is an interactive AI challenge by Lakera where players try to outsmart a chatbot named Gandalf into revealing a secret password that it has been instructed not to share. The game has multiple levels with increasing defenses, illustrating how prompt injection techniques can trick or fail against evolving AI safeguards. Users must craft clever inputs to bypass rules and extract hidden information, making it a hands-on way to learn about AI security and prompt engineering.  https://gandalf.lakera.ai/do-not-tell P.S. Thanks to  https://www.linkedin.com/in/rgcampos/

Mapping Deception with BloodHound OpenGraph

This SpecterOps blog explains how defenders can design and visualize high-fidelity cyber deception using BloodHound OpenGraph to map realistic attacker paths across Active Directory and third-party systems. It stresses that effective deception should be specific and believable, leveraging attack path visualization to place canary tokens, honeypots, and other decoys where attackers are likely to encounter them. Using OpenGraph to model, reuse or even convert known attack paths into deception opportunities can help funnel attackers into detection and strengthen overall security posture.  https://specterops.io/blog/2025/12/23/mapping-deception-with-bloodhound-opengraph

Sandworm Mode npm Worm Supply Chain Attack

The Socket Research Team disclosed a sophisticated supply-chain malware campaign dubbed SANDWORM_MODE that uses typosquatted npm packages to infect developer environments and CI workflows. This worm-style attack harvests npm/GitHub tokens, environment secrets, and SSH keys, then exfiltrates them and propagates by modifying repositories and injecting malicious GitHub Actions. It also goes further by poisoning AI development toolchains through rogue MCP servers that manipulate AI coding assistants to expose additional credentials, highlighting an evolving threat targeting both traditional CI pipelines and AI-assisted workflows.  https://socket.dev/blog/sandworm-mode-npm-worm-ai-toolchain-poisoning

Samsung CredSweeper Credential Detection Tool

CredSweeper is an open source credential detection tool by Samsung that scans directories and files to identify exposed sensitive information like passwords, API keys, tokens and other credentials before they leak. It analyzes source code, configuration files, documents and compressed/binary formats using pattern matching, filtering and optional machine-learning validation to reduce false positives and provides detailed output showing where and what type of credential was found. ( github.com )  https://github.com/Samsung/CredSweeper https://github.com/Samsung/CredData
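
The pattern-matching core of such a scanner can be sketched in a few lines of Python. These two rules are common illustrative examples, not CredSweeper's actual rule set, which is far larger and adds filtering and optional ML validation:

```python
import re

# Illustrative patterns only; a real scanner ships hundreds of rules.
PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_password": re.compile(
        r"password['\"]?\s*[:=]\s*['\"]([^'\"]{6,})['\"]", re.I
    ),
}

def scan_text(text: str):
    """Return (rule_name, matched_text) pairs for credential-like strings."""
    hits = []
    for name, pattern in PATTERNS.items():
        for m in pattern.finditer(text):
            hits.append((name, m.group(0)))
    return hits

sample = 'config = {"password": "hunter2-prod"}\nkey = AKIAABCDEFGHIJKLMNOP'
for rule, match in scan_text(sample):
    print(rule, "->", match)
```

Raw regexes like these are noisy in practice, which is exactly why CredSweeper layers filtering and ML validation on top to cut false positives.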

Titus Open Source Secret Scanner

Titus is an open source secret scanning tool developed by Praetorian and written in Go. It detects exposed credentials, API keys, tokens, and other sensitive data across source code, binaries, archives, notebooks, and HTTP traffic using more than 450 detection rules. It can operate as a CLI tool, Go library, Burp Suite extension, or Chrome extension, and supports optional validation of discovered secrets through controlled API checks to determine if they are active.  https://www.praetorian.com/blog/titus-open-source-secret-scanner

Mini Python SIEM for SSH Brute-Force Detection

SOC-Mini SIEM Correlation Engine is a Python project that simulates the core logic of a basic Security Information and Event Management (SIEM) correlation engine, mimicking workflows inside a Security Operations Center (SOC). It reads simulated SSH authentication and firewall logs, applies simple correlation rules such as counting repeated SSH failures and checking firewall blocks, and outputs structured JSON alerts with severity and MITRE ATT&CK mappings for brute-force credential attacks. It’s a hands-on learning project demonstrating log analysis, event correlation and alert generation using basic Python and regex.  https://github.com/sejosegomesneto-creator/soc-mini-siem-correlation-engine
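
The brute-force correlation rule described above can be sketched in a few lines of Python. The log format, threshold, and alert fields here are illustrative stand-ins, not the project's actual schema:

```python
import json
import re
from collections import Counter

# sshd-style failure line; format assumed for the demo.
FAILED = re.compile(r"Failed password for (?:invalid user )?(\S+) from (\S+)")
THRESHOLD = 5  # failures from one source IP before alerting

def correlate(log_lines):
    """Count SSH auth failures per source IP and emit alert dicts."""
    failures = Counter()
    for line in log_lines:
        m = FAILED.search(line)
        if m:
            failures[m.group(2)] += 1
    return [
        {
            "rule": "ssh-bruteforce",
            "source_ip": ip,
            "count": n,
            "severity": "high" if n >= 2 * THRESHOLD else "medium",
            "mitre_technique": "T1110",  # Brute Force
        }
        for ip, n in failures.items()
        if n >= THRESHOLD
    ]

logs = [f"sshd: Failed password for root from 203.0.113.9 port 5{i}" for i in range(6)]
print(json.dumps(correlate(logs), indent=2))
```

Stateful counting plus a threshold is the essence of SIEM correlation; production engines add time windows, suppression, and enrichment on top of this skeleton.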

FOSDEM 2026’s SBOMs and Supply Chains Track Focuses on Practical Software Supply Chain Security

The SBOMs and Supply Chains track at the 2026 FOSDEM conference in Brussels is a full-day series of technical talks and presentations centered on Software Bills of Materials (SBOMs) and broader supply chain concerns in open source ecosystems. Sessions cover real-world SBOM generation and management challenges, integrating Vulnerability Exploitability eXchange (VEX) data into development workflows, policy-as-code for active defense, large-scale SBOM collection and use, new standards like SPDX 3.1, semantic modeling of supply chains, and case studies from embedded systems to build-time tooling, giving practitioners insight into both practical tooling and evolving supply chain security practices. https://fosdem.org/2026/schedule/track/sboms-and-supply-chains/

Fortinet Releases Patch for Critical SQL Injection Flaw in FortiClientEMS

Fortinet has issued security updates to fix a critical SQL injection vulnerability (CVE-2026-21643) in FortiClientEMS. The flaw allows an unauthenticated attacker to send specially crafted HTTP requests and potentially execute arbitrary code or system commands on vulnerable servers, and it carries a high severity score. Administrators are urged to upgrade affected 7.4.4 installations to the patched version immediately, while the broader Fortinet ecosystem continues to face multiple recent serious flaws. https://thehackernews.com/2026/02/fortinet-patches-critical-sqli-flaw.html
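
The advisory gives no exploit details, but the bug class itself is easy to demonstrate: a query built by string interpolation lets attacker input rewrite the SQL, while a bound parameter is treated purely as data. A self-contained sketch using Python's sqlite3 (nothing here reflects FortiClientEMS's actual code):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

user_input = "x' OR '1'='1"  # classic injection payload

# Vulnerable: string interpolation lets the payload rewrite the query.
vulnerable = conn.execute(
    f"SELECT name FROM users WHERE name = '{user_input}'"
).fetchall()

# Safe: a bound parameter is treated purely as data.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

print(vulnerable)  # every row leaks
print(safe)        # no rows match
```

The same payload returns the entire table through the interpolated query and nothing through the parameterized one, which is why parameterized queries are the standard fix for this class of flaw.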

NPM Revamps Authentication to Reduce Supply-Chain Risk but Vulnerabilities Persist

The article describes how the npm package ecosystem implemented a significant overhaul of its authentication system in December 2025 following high-profile supply-chain attacks, replacing long-lived, broadly scoped tokens with short-lived session-based credentials and promoting OIDC trusted publishing to limit compromise risk. While these changes improve security by expiring credentials faster and encouraging multi-factor authentication for publishing, MFA remains optional in some flows and phishing-based credential theft persists, so projects are still exposed to malware injection and supply-chain breaches and additional safeguards and best practices remain necessary.  https://thehackernews.com/2026/02/npms-update-to-harden-their-supply.html

AI Discovers Twelve Previously Unknown OpenSSL Vulnerabilities

The blog post reports that the January 27, 2026 OpenSSL security release disclosed twelve vulnerabilities that had not previously been known to the project’s maintainers, all of which an AI system from a security research team was credited with finding and responsibly reporting during 2025. Ten received 2025 CVE identifiers and two received 2026 identifiers, several of them long-standing flaws that had eluded decades of manual auditing and fuzzing. In some cases the AI also proposed patches that were accepted, signaling a major impact of automated discovery on cybersecurity research and defenses. https://www.schneier.com/blog/archives/2026/02/ai-found-twelve-new-vulnerabilities-in-openssl.html

Side-Channel Attacks Threaten the Privacy of Large Language Model Interactions

The essay highlights recent research showing that side-channel attacks can extract sensitive information from large language models by observing indirect signals like response timing, packet sizes, and speculative decoding behavior, even when the communication is encrypted and the content itself is not visible to an attacker. These studies demonstrate that metadata and implementation details can leak user query topics, language, or confidential data, underscoring an urgent need for better defenses as LLMs are deployed in sensitive contexts. https://www.schneier.com/blog/archives/2026/02/side-channel-attacks-against-llms.html
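
A toy model of one such leak: even under a cipher that hides content perfectly, ciphertext length mirrors plaintext length, so an observer can often tell which of several canned responses was sent. Real attacks observe streamed token-chunk sizes; this Python sketch only simulates the length side channel:

```python
import os

def xor_encrypt(plaintext: bytes) -> bytes:
    """One-time-pad encryption for the demo: contents are perfectly
    hidden, but the ciphertext length equals the plaintext length."""
    key = os.urandom(len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, key))

replies = {"yes": b"Yes.", "refuse": b"I cannot help with that request."}
observed = {name: len(xor_encrypt(msg)) for name, msg in replies.items()}
print(observed)  # message sizes alone distinguish the two replies
```

Padding responses to fixed sizes is the standard mitigation, at the cost of bandwidth, which is part of the defensive trade-off the essay points to.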

AI Models Now Uncover Hundreds of Previously Unknown Zero-Day Vulnerabilities

Anthropic’s Frontier Red Team explains how the company’s latest AI model, Claude Opus 4.6, has shown an unprecedented ability to autonomously find high-severity zero-day vulnerabilities in widely used open-source code without specialized instructions or tooling, reading and reasoning about code in ways traditional fuzzers do not. In tests it identified and helped validate over 500 previously unknown security flaws across major codebases, and the post also discusses efforts to report and patch these issues while building safeguards to manage the dual-use risks of such powerful automated discovery capabilities.  https://red.anthropic.com/2026/zero-days/

Top Post-Quantum Cryptography Solutions and Vendors Ranked for Quantum-Safe Security

The article reviews and ranks nine post-quantum cryptography providers whose products use NIST-approved quantum-resistant algorithms to safeguard systems as quantum computing advances, driven by impending federal mandates and increasing enterprise demand. It evaluates vendors on security strength, performance, ease of integration, deployment history, use-case support, and roadmap, highlighting offerings spanning blockchain protection, enterprise crypto-agility, hardware-level security, PKI lifecycle tools, quantum entropy key systems, and quantum key distribution to address the transition from classical cryptography to quantum-safe defenses.  https://aijourn.com/top-9-post-quantum-cryptography-solutions-compared-pqc-providers-ranked/

Vulnerable VS Code Extensions Put Millions of Developers at Risk

Security researchers at OX Security have found serious vulnerabilities in four widely used Visual Studio Code extensions downloaded over 120 million times. The findings show that even “verified” extensions can be manipulated to perform harmful operations at the operating-system level and expose sensitive developer data and credentials, potentially enabling lateral movement across networks and full compromise of development environments. The maintainers have so far not responded to responsible disclosures, prompting calls for mandatory security assessments, automated vulnerability scanning, and enforceable response requirements to protect developers as reliance on IDE extensions and AI coding tools grows.  https://www.techzine.eu/news/devops/138878/vulnerable-vs-code-extensions-affect-tens-of-millions-of-developers/

AI-Generated Code Frequently Repeats Architectural Mistakes with Serious Security Consequences

The article explains that AI coding assistants often introduce subtle but systemic architectural design flaws into software, not just simple bugs that traditional security tools can detect. Because these tools replicate patterns they see in a codebase without real understanding of architectural context, they can propagate insecure structures like missing authentication, improper role assignment, weak cryptography, and lack of auditing. A study cited found most AI completions had at least one such design flaw and many were invisible to static analysis, creating accumulating security debt unless developers explicitly guide AI with architectural intent and use tools that assess design assumptions.  https://www.endorlabs.com/learn/design-flaws-in-ai-generated-code
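
A minimal sketch of the flaw class, with hypothetical handlers not drawn from the cited study: an assistant that completes new endpoints by copying the shape of `export_report` tends to copy its missing authorization check too, and no syntax-level scanner flags the omission because the code is perfectly well-formed:

```python
def export_report(user: dict, data: list) -> dict:
    # DESIGN FLAW: no role check; any authenticated user can export everything.
    # Static analysis sees valid code because the flaw is an absence, not a bug.
    return {"report": data}

def export_report_secure(user: dict, data: list) -> dict:
    # The architectural intent, made explicit: authorize before acting.
    if user.get("role") != "admin":
        raise PermissionError("admin role required")
    return {"report": data}

print(export_report({"role": "user"}, [1, 2, 3]))  # silently succeeds
```

The absence of a check is invisible to pattern-based tools, which is why the article argues for stating architectural intent explicitly and reviewing AI completions against it.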

Preparing Organizations for the Shift to Post-Quantum Cryptography

The article explains why organizations must start migrating from traditional cryptographic algorithms to post-quantum cryptography. Advances in quantum computing threaten to break widely used algorithms such as RSA and ECC, putting long-term data confidentiality at risk. The text emphasizes the need for early planning, including inventorying cryptographic assets, identifying where vulnerable algorithms are used, and designing a phased migration strategy. It highlights crypto-agility as essential, allowing systems to adapt as standards evolve. Migration is presented as a gradual, multi-year effort rather than a one-time change.  https://www.wileyconnect.com/migrating-from-traditional-algorithms-to-post-quantum-cryptography-what-your-organization-needs-to-know
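
The inventory step can start as something as simple as a text scan for quantum-vulnerable algorithm names. This Python sketch is a hypothetical starting point only; a real inventory must also cover certificates, TLS configurations, libraries, and hardware modules:

```python
import re

# Public-key algorithms expected to fall to large quantum computers.
QUANTUM_VULNERABLE = re.compile(r"\b(RSA|ECDSA|ECDH|DSA|DH)\b", re.I)

def inventory(files: dict) -> dict:
    """Map each file name to the vulnerable algorithm names found in it."""
    return {
        name: sorted({m.group(1).upper() for m in QUANTUM_VULNERABLE.finditer(text)})
        for name, text in files.items()
        if QUANTUM_VULNERABLE.search(text)
    }

found = inventory({
    "tls.conf": "KeyExchangeAlgorithms ECDH RSA",
    "sign.py":  "helper to generate an ECDSA signing key",
    "readme":   "no crypto here",
})
print(found)
```

Even a rough pass like this surfaces where RSA and elliptic-curve usage concentrates, which is the input the article's phased migration planning and crypto-agility work needs.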

MaliciousCorgi AI Extensions Steal Code from Over 1.5 Million Developers

A security research team has uncovered a malicious campaign dubbed “MaliciousCorgi” involving two Visual Studio Code extensions with a combined 1.5 million installs that pose as helpful AI coding assistants but secretly harvest and exfiltrate developers’ code and activity data without consent. The extensions, still live on the official VS Code Marketplace, not only read and transmit entire files opened in the editor but also include hidden profiling and server-controlled harvesting mechanisms that can collect batches of files and metadata, exposing sensitive credentials, source code, and workspace information to remote servers in China.  https://www.koi.ai/blog/maliciouscorgi-the-cute-looking-ai-extensions-leaking-code-from-1-5-million-developers

Critical Remote Code Execution Bug in n8n Workflow Automation Platform

A severe security flaw tracked as CVE-2026-25049 has been disclosed in the n8n open-source workflow automation platform that allows authenticated users with permission to create or modify workflows to execute arbitrary system commands on the underlying host, potentially compromising the entire server and sensitive data and credentials stored there. The vulnerability arises from inadequate sanitization in the expression evaluation mechanism and impacts versions of n8n prior to 1.123.17 and 2.5.2, with a CVSS severity score of 9.4. Users are urged to update to the patched releases immediately to mitigate the risk.  https://thehackernews.com/2026/02/critical-n8n-flaw-cve-2026-25049.html