Posts

Showing posts with the label news

FOSDEM 2026’s SBOMs and Supply Chains Track Focuses on Practical Software Supply Chain Security

The SBOMs and Supply Chains track at the 2026 FOSDEM conference in Brussels is a full-day series of technical talks and presentations centered on Software Bills of Materials (SBOMs) and broader supply chain concerns in open source ecosystems. Sessions cover real-world SBOM generation and management challenges, integrating vulnerability eXchange (VEX) into development workflows, policy-as-code for active defense, large-scale SBOM collection and use, new standards like SPDX 3.1, semantic modeling of supply chains, and case studies from embedded systems to build-time tooling, giving practitioners insight into both practical tooling and evolving supply chain security practices. https://fosdem.org/2026/schedule/track/sboms-and-supply-chains/

Fortinet Releases Patch for Critical SQL Injection Flaw in FortiClientEMS

Fortinet has issued security updates to fix a critical SQL injection vulnerability (CVE-2026-21643) in FortiClientEMS that allows an unauthenticated attacker to send specially crafted HTTP requests and potentially execute arbitrary code or system commands on vulnerable servers, carrying a high severity score. Administrators are urged to immediately upgrade affected 7.4.4 installations to the patched version to prevent compromise, while the broader Fortinet ecosystem continues to face multiple recent serious flaws. https://thehackernews.com/2026/02/fortinet-patches-critical-sqli-flaw.html

NPM Revamps Authentication to Reduce Supply-Chain Risk but Vulnerabilities Persist

The article describes how the npm package ecosystem implemented a significant overhaul of its authentication system in December 2025 following high-profile supply-chain attacks, replacing long-lived, broadly scoped tokens with short-lived session-based credentials and promoting OIDC trusted publishing to limit compromise risk. While these changes improve security by expiring credentials faster and encouraging multifactor authentication for publishing, optional MFA bypass and phishing-based credential theft still leave projects vulnerable to malware injection and supply-chain breaches, meaning additional safeguards and best practices are still needed.  https://thehackernews.com/2026/02/npms-update-to-harden-their-supply.html

AI Discovers Twelve Previously Unknown OpenSSL Vulnerabilities

The blog post reports that in the January 27, 2026 OpenSSL security release, twelve new zero-day vulnerabilities were disclosed that had not previously been known to the project’s maintainers, and an AI system from a security research team was credited with originally finding and responsibly reporting all of them during 2025. Ten received 2025 CVE identifiers and two received 2026 identifiers, with several long-standing flaws that had eluded decades of manual auditing and fuzzing, and in some cases the AI also proposed accepted patches, signaling a major impact of automated discovery on cybersecurity research and defenses. https://www.schneier.com/blog/archives/2026/02/ai-found-twelve-new-vulnerabilities-in-openssl.html

Side-Channel Attacks Threaten the Privacy of Large Language Model Interactions

The essay highlights recent research showing that side-channel attacks can extract sensitive information from large language models by observing indirect signals like response timing, packet sizes, and speculative decoding behavior, even when the communication is encrypted, and the content itself is not visible to an attacker. These studies demonstrate that metadata and implementation details can leak user query topics, language, or confidential data, underscoring an urgent need for better defenses as LLMs are deployed in sensitive contexts. https://www.schneier.com/blog/archives/2026/02/side-channel-attacks-against-llms.html

AI Models Now Uncover Hundreds of Previously Unknown Zero-Day Vulnerabilities

Anthropic’s Frontier Red Team explains how the company’s latest AI model, Claude Opus 4.6, has shown an unprecedented ability to autonomously find high-severity zero-day vulnerabilities in widely used open-source code without specialized instructions or tooling, reading and reasoning about code in ways traditional fuzzers do not. In tests it identified and helped validate over 500 previously unknown security flaws across major codebases, and the post also discusses efforts to report and patch these issues while building safeguards to manage the dual-use risks of such powerful automated discovery capabilities.  https://red.anthropic.com/2026/zero-days/

Top Post-Quantum Cryptography Solutions and Vendors Ranked for Quantum-Safe Security

The article reviews and ranks nine post-quantum cryptography providers whose products use NIST-approved quantum-resistant algorithms to safeguard systems as quantum computing advances, driven by impending federal mandates and increasing enterprise demand. It evaluates vendors on security strength, performance, ease of integration, deployment history, use-case support, and roadmap, highlighting offerings spanning blockchain protection, enterprise crypto-agility, hardware-level security, PKI lifecycle tools, quantum entropy key systems, and quantum key distribution to address the transition from classical cryptography to quantum-safe defenses.  https://aijourn.com/top-9-post-quantum-cryptography-solutions-compared-pqc-providers-ranked/

Vulnerable VS Code extensions put millions of developers at risk

Security researchers at OX Security have found serious vulnerabilities in four widely used Visual Studio Code extensions downloaded over 120 million times, revealing that even “verified” extensions can be manipulated to perform harmful operations at the operating-system level and expose sensitive developer data and credentials, potentially enabling lateral movement across networks and full compromise of development environments. The maintainers have so far not responded to responsible disclosures, prompting calls for mandatory security assessments, automated vulnerability scanning, and enforceable response requirements to protect developers as reliance on IDE extensions and AI coding tools grows.  https://www.techzine.eu/news/devops/138878/vulnerable-vs-code-extensions-affect-tens-of-millions-of-developers/

AI-Generated Code Frequently Repeats Architectural Mistakes with Serious Security Consequences

The article explains that AI coding assistants often introduce subtle but systemic architectural design flaws into software, not just simple bugs that traditional security tools can detect. Because these tools replicate patterns they see in a codebase without real understanding of architectural context, they can propagate insecure structures like missing authentication, improper role assignment, weak cryptography, and lack of auditing. A study cited found most AI completions had at least one such design flaw and many were invisible to static analysis, creating accumulating security debt unless developers explicitly guide AI with architectural intent and use tools that assess design assumptions.  https://www.endorlabs.com/learn/design-flaws-in-ai-generated-code

Preparing Organizations for the Shift to Post-Quantum Cryptography

The article explains why organizations must start migrating from traditional cryptographic algorithms to post-quantum cryptography. Advances in quantum computing threaten to break widely used algorithms such as RSA and ECC, putting long-term data confidentiality at risk. The text emphasizes the need for early planning, including inventorying cryptographic assets, identifying where vulnerable algorithms are used, and designing a phased migration strategy. It highlights crypto-agility as essential, allowing systems to adapt as standards evolve. Migration is presented as a gradual, multi-year effort rather than a one-time change.  https://www.wileyconnect.com/migrating-from-traditional-algorithms-to-post-quantum-cryptography-what-your-organization-needs-to-know

MaliciousCorgi AI Extensions Steal Code from Over 1.5 Million Developers

A security research team has uncovered a malicious campaign dubbed “MaliciousCorgi” involving two Visual Studio Code extensions with a combined 1.5 million installs that pose as helpful AI coding assistants but secretly harvest and exfiltrate developers’ code and activity data without consent. The extensions, still live on the official VS Code Marketplace, not only read and transmit entire files opened in the editor but also include hidden profiling and server-controlled harvesting mechanisms that can collect batches of files and metadata, exposing sensitive credentials, source code, and workspace information to remote servers in China  https://www.koi.ai/blog/maliciouscorgi-the-cute-looking-ai-extensions-leaking-code-from-1-5-million-developers

Critical Remote Code Execution Bug in n8n Workflow Automation Platform

A severe security flaw tracked as CVE-2026-25049 has been disclosed in the n8n open-source workflow automation platform that allows authenticated users with permission to create or modify workflows to execute arbitrary system commands on the underlying host, potentially compromising the entire server and sensitive data and credentials stored there. The vulnerability arises from inadequate sanitization in the expression evaluation mechanism and impacts versions of n8n prior to 1.123.17 and 2.5.2, with a CVSS severity score of 9.4. Users are urged to update to the patched releases immediately to mitigate the risk.  https://thehackernews.com/2026/02/critical-n8n-flaw-cve-2026-25049.html

Prompt Injection Is Not SQL Injection

The blog explains that while prompt injection and SQL injection both involve untrusted input influencing system behavior, they are fundamentally different. SQL injection exploits how structured queries are interpreted by a database engine, whereas prompt injection manipulates how an AI model interprets or continues a natural language instruction. Because AI models don’t enforce boundaries or a defined grammar the way a database does, traditional defenses like parameterization don’t directly apply. The post warns against treating prompt injection like a conventional code injection flaw and suggests designing AI-involved systems with explicit context isolation, careful prompt construction, and runtime constraints so untrusted content can’t alter intended instructions.  https://www.ncsc.gov.uk/blog-post/prompt-injection-is-not-sql-injection

Hacking Clawdbot and Eating Lobster Souls

The post describes how the author examined real-world deployments of Clawdbot , an open-source AI agent that connects large language models to messaging platforms and can execute tools for users. He found hundreds of publicly exposed control interfaces that give attackers easy access to credentials, conversation histories, and command execution on behalf of the owner. Because many deployments were misconfigured or left with development defaults, they exposed API keys, bot tokens, OAuth secrets, and even root access. The article uses this “butler gone rogue” metaphor to highlight the security trade-offs of autonomous agents and stresses the need for better defaults, hardened configurations, and careful consideration of the risks posed by pervasive, autonomous AI infrastructure.  https://www.linkedin.com/pulse/hacking-clawdbot-eating-lobster-souls-jamieson-o-reilly-whhlc/

Automated React2Shell Vulnerability Patching Now Available

Vercel announced that it has added automatic patching for the React2Shell vulnerability across its platform. This means Vercel will now detect projects affected by this security issue and apply patches without requiring manual steps from developers. The update improves security by reducing the window of exposure and lowering the operational burden on teams who might otherwise have to identify vulnerable dependencies and fix them manually. This automated capability helps ensure that applications deployed on Vercel remain protected against the specific React2Shell risk with minimal intervention from developers. https://vercel.com/changelog/automated-react2shell-vulnerability-patching-is-now-available

Public Container Registry Security Risks and Malicious Images

The article explains that public container registries pose significant security risks because anyone can publish images there, including potentially malicious actors. Threats include images with embedded malware, cryptojacking tools, backdoors, or names mimicking legitimate images to trick users. The piece highlights how attackers can exploit weak naming conventions, typosquatting, and unattended or abandoned images to get users to pull harmful content. It discusses credential leakage when images are built with secrets, lack of image provenance and trust metadata, and insufficient scanning for known vulnerabilities. The article recommends mitigating these risks by using signed and provenance-verified images, enforcing registry access controls, scanning images for malware and vulnerabilities before deployment, establishing internal trusted registries or mirrors, and implementing supply chain security practices so that only vetted and traceable images are used in production.  https:/...

Signing Your Artifacts for Security, Quality, and Compliance

The article explains why signing software artifacts matters for trust, security, and regulatory requirements. It shows how cryptographic signatures prove who built a release and ensure that its contents haven’t been tampered with, making supply chain attacks and unauthorized modifications easier to detect. It discusses common signing technologies like GPG and X.509 certificates, how they integrate with build systems and package ecosystems, and why reproducible builds are important to validate signatures. The article also covers practical best practices such as managing signing keys securely, automating signing in CI/CD pipelines, and validating signatures when consuming artifacts to improve quality assurance and meet compliance obligations.  https://www.endorlabs.com/learn/signing-your-artifacts-for-security-quality-and-compliance

GitHub Actions Can Be Dependencies Too

The article explains that workflows and actions used in GitHub Actions aren’t just configuration files but can introduce real dependencies and risks because they execute code from potentially external sources. It shows how actions from the marketplace, public repositories, or even referenced by git URLs and tags can change and pull in updated code, making them difficult to control. The piece walks through examples of how an attacker could compromise an action or influence workflow behavior and recommends treating actions like code dependencies: use pinned versions, review code before using it, host trusted actions internally, and monitor for changes. It stresses that without careful management, Actions can create supply-chain security problems just like libraries or packages.  https://www.endorlabs.com/learn/surprise-your-github-actions-are-dependencies-too

What to Look for in AI Compliance Tools

The article argues that AI compliance cannot be handled with spreadsheets or traditional GRC tools because AI systems generate high-volume, dynamic interactions through APIs and prompts. Effective AI compliance tools must monitor AI usage in real time, especially at the API layer, capture prompt and response context, and automatically map activity to recognized frameworks like OWASP LLM Top 10 and MITRE ATLAS. The focus shifts from documenting intent and policies to observing actual AI behavior, producing continuous evidence, detecting violations early, and supporting audits through automated, operational visibility.  https://securityboulevard.com/2026/01/ai-compliance-tools-what-to-look-for-firetail-blog/

How California Polytechnic State University Centralized IT With AppsAnywhere (special post)

This time I will do something different here. I don't have 1h to watch this https://www.youtube.com/watch?v=H_MYZmGT3UY Do I've took the transcription and I've asked chatgpt some questions what they did They centralized campus IT into a service-oriented organization, moved core systems to the cloud, and created a single “software hub” where students, faculty, and staff can access all approved software. They standardized software versions across labs, personal devices, and virtual labs, enabled self-service downloads and remote access, integrated support, knowledge base, and service catalog into one front door, and used data and analytics to manage usage, licensing, support demand, and continuous improvement. how they did They did it by first reorganizing IT from siloed technical teams into a plan–build–run, service-focused model, with clear ownership, documentation, and operational gates before services went live. They moved infrastructure to the cloud, adopted a single ser...