Posts

Showing posts from January, 2026

RedBench: A Universal Dataset for Comprehensive Red Teaming of Large Language Models

The paper introduces RedBench , a unified dataset designed to improve how large language models (LLMs) are evaluated for safety and robustness. Existing red-teaming datasets are inconsistent in how they categorize risks and cover different types of attacks, which makes it hard to systematically test models. RedBench aggregates and standardizes 37 existing datasets into a consistent taxonomy of risk categories and domains, with tens of thousands of samples of both adversarial and refusal prompts. The authors analyze gaps in current datasets, provide baseline evaluations for modern LLMs, and open-source the dataset and evaluation code to support better, more comprehensive LLM safety research.  https://huggingface.co/papers/2601.03699

Ralph Wiggum Loop and the Need for a Principal Skinner Harness

 The article discusses a pattern for autonomous AI agents called the Ralph Wiggum loop , in which a model repeatedly runs in a stateless loop, feeding instructions into itself until a completion condition is met. This approach avoids context rot by resetting the model’s memory each iteration and relying on file systems or version control instead. While persistent iteration can make an agent tireless and effective on long tasks, it also creates governance risks because the agent may continue indefinitely or take harmful actions without supervision. To address this, the author argues that builders need a Principal Skinner harness , a structural control layer that enforces rules, monitors agent behavior, and prevents destructive actions. This harness intercepts and evaluates each tool call, implements deterministic safety controls, and distinguishes agent activity so that organizations can govern long-running autonomous agents safely. https://securetrajectories.substack.com/p/ralph-wi...

SBOMs in 2026: Some Love, Some Hate, Much Ambivalence

Cybersecurity experts remain divided about the value of software bills of materials (SBOMs) in 2026. In theory, SBOMs are praised for improving transparency and helping defenders understand what components make up software, which could aid vulnerability management. In practice, however, they are often messy, inconsistent, hard to generate accurately, and difficult to use at scale. The rapid evolution of software ecosystems and challenges in creating end-to-end verified component records have led to skepticism among some practitioners, while others still see potential if tooling and standards improve. Overall, the debate reflects mixed sentiments about how useful SBOMs actually are for improving security  https://www.darkreading.com/application-security/sboms-in-2026-some-love-some-hate-much-ambivalence

Latin American Organisations Lack Confidence in Cyber Defences

A report from the World Economic Forum shows that organisations in Latin America and the Caribbean have the lowest confidence in their country’s ability to defend critical infrastructure against cyberattacks , with only about 13% expressing confidence while nearly half lack faith in preparedness. This lack of trust reflects broader challenges including a shortage of cybersecurity skills , limited resources, and gaps in governance and infrastructure as digital ecosystems expand rapidly. The shortage of trained professionals is seen as a major factor weakening regional cyber resilience, and efforts to build talent and capability are needed to improve defences as threats grow.  https://www.darkreading.com/cyber-risk/latin-american-confidence-cyber-defenses-skills

CVE-2025-68428 Critical Path Traversal in jsPDF

The article explains a high-severity vulnerability tracked as CVE-2025-68428 in the popular jsPDF JavaScript library used to generate PDF files in web applications. The flaw is a path traversal issue that could allow attackers to craft malicious input enabling access to files outside of intended directories when jsPDF is used in certain server-side or file-serving contexts. If exploited, this can lead to unauthorized file access, potential data leakage, or the ability to include unintended local resources in generated PDFs. The article stresses the importance of updating to patched versions of jsPDF, reviewing use of the library in applications, and applying secure coding and input validation practices to mitigate such critical vulnerabilities before they can be abused in the wild.  https://www.endorlabs.com/learn/cve-2025-68428-critical-path-traversal-in-jspdf

Astronomer Modernizes AppSec with Endor Labs

The article describes how Astronomer , a data engineering company, improved its application security by adopting Endor Labs’ security platform . Astronomer faced challenges securing complex code pipelines, dependencies, and distributed environments using traditional tools. By integrating Endor Labs into its development processes, Astronomer gained automated detection of vulnerabilities , better visibility into risky software components, and real-time feedback for developers. The solution helped the team catch security issues earlier, reduce manual effort, and streamline secure deployment practices. The article highlights how proactive, integrated security tooling can help modern engineering teams protect software without slowing down development.  https://www.endorlabs.com/learn/astronomer-modernizes-appsec-with-endor-labs

AI-Aware Code Review Prevents Breaches

The article explains that traditional code review processes often miss subtle security vulnerabilities, especially as modern applications integrate complex dependencies and AI-generated code. By using AI-aware code review tools that understand security patterns, data flows, and attack techniques, development teams can catch issues earlier and reduce the risk of breaches. These tools analyze code in context, identify risky constructs, and provide guidance that goes beyond simple syntax checks. Integrating AI-driven security analysis into the development lifecycle helps teams improve overall code quality, prevent common coding mistakes that lead to vulnerabilities, and strengthen defenses before software is deployed. Continuous review, training, and automation are highlighted as best practices to make code reviews more effective and reduce the likelihood of security incidents.  https://www.endorlabs.com/learn/ai-aware-code-review-breaches

Ethereum Foundation Launches Post-Quantum Security Team

The Ethereum Foundation has elevated post-quantum cryptographic security to a top strategic priority by forming a dedicated team focused on preparing the blockchain for future quantum computing threats. Quantum computers could one day break the cryptographic algorithms that secure digital wallets and transactions, so the new group will research and develop quantum-resistant solutions to safeguard the network’s integrity and the value it protects. The initiative includes funding, developer sessions, community engagement, and collaboration on test networks aimed at building and implementing post-quantum cryptography well before such threats materialize, reflecting a proactive effort to future-proof Ethereum  https://cryptorank.io/news/feed/68046-ethereum-quantum-security-post-quantum-team

Do You Still Need Antivirus Software on Windows

Experts say that antivirus software remains important, but for many everyday users running Windows 11 the built-in Microsoft Defender security suite is often enough on its own. Defender provides real-time protection and frequent updates that compete with paid products, and when combined with sensible browsing habits it can protect most personal computers without extra software. However, for businesses or individuals handling sensitive data or facing more advanced threats like ransomware and phishing, additional specialized antivirus protection beyond Windows Security is still recommended.  https://www.bgr.com/2083446/windows-antivirus-necessary-according-experts/

ThreatModeler Acquires Competing Threat Modeling Startup IriusRisk

The article reports that ThreatModeler, a provider of threat modeling software, has acquired IriusRisk, a competing startup in the same space. This consolidation brings together two established tools used by organizations to identify, manage, and mitigate security threats in software and systems. The combined company aims to leverage the strengths of both platforms to offer broader capabilities and better support for enterprise customers, reducing manual effort and improving security workflows. The acquisition reflects ongoing demand for integrated security solutions as organizations seek to build secure software more efficiently.  https://siliconangle.com/2026/01/08/threatmodeler-acquires-competing-threat-modelling-startup-iriusrisk/

Software Supply Chain Security Is More Than Open Source

The webinar explains that focusing only on open source vulnerabilities is not enough to secure a software supply chain. While open source components are a critical part of modern development, there are other blind spots that also need attention. These include ensuring the integrity of build artifacts, securing development and deployment pipelines, protecting container images, and addressing emerging risks from components such as AI models. Effective software supply chain security requires a broader approach that goes beyond identifying open source flaws and includes securing all parts of the software delivery process, from code through deployment https://www.govinfosecurity.com/webinars/webinar-software-supply-chain-security-more-than-open-source-w-6759

Audio Accessory Flaw Turns Headphones into Spy Tools

A security flaw dubbed WhisperPair affects how many Bluetooth audio accessories implement Google’s Fast Pair protocol, allowing an attacker to force a wireless accessory such as headphones or earbuds to pair with a malicious device even when not in pairing mode. Once paired, an attacker could activate the microphone to eavesdrop on conversations, play sounds through the headphones, or track the victim’s location using features like device geolocation tracking. The vulnerability works at realistic distances without physical access and is present in products from multiple manufacturers. Fixing it requires firmware patches from the accessory makers, and updating the phone’s operating system alone may not protect users  https://www.govinfosecurity.com/audio-accessory-flaw-converts-headphones-into-spy-tool-a-30595

Why AI Keeps Falling for Prompt Injection Attacks

The article explains that prompt injection attacks remain a persistent vulnerability in AI systems because the foundational design of large language models lacks true understanding or control over how instructions are interpreted. Prompt injection works by embedding malicious directives into user input that the model then executes, often unintentionally. These attacks exploit the fact that AI models treat all text in a prompt as guidance, making it difficult to distinguish between legitimate instructions and harmful ones. Defensive measures like input sanitization, context filtering, and strict output controls help to some extent, but don’t fully solve the problem because models are built to follow the user’s words. The article argues that prompt injections are not bugs but a structural weakness of current AI architectures , and that meaningful mitigation will require rethinking how AI systems interpret and enforce boundaries between safe and unsafe instructions.  https://www.schn...

Congratulations to ThreatModeler and IriusRisk

The blog post celebrates the merger of two enterprise-grade threat modeling software companies, ThreatModeler and IriusRisk, into a single organization. The author welcomes this as a positive start to 2026 and explains that enterprise tools differ from simpler options by enabling issue tracking, change management, and better visibility into security work, reducing manual effort and freeing analysts to focus on higher-value tasks. He expresses enthusiasm for the combined team and the product development potential of uniting the strengths of both companies.  https://shostack.org/blog/congratulations-to-threatmodeler/

The ROI Problem in Attack Surface Management

The article discusses how many organizations struggle to show a clear return on investment for attack surface management (ASM) programs despite increasing risk exposure. As digital environments grow in complexity, security teams are expected to continuously discover, monitor, and reduce exposures across assets, cloud resources, credentials, APIs, and internet-facing services. However, ASM often generates large volumes of findings that are hard to prioritize, with business leaders questioning the value because it is difficult to link surface reduction directly to risk reduction or financial impact. The piece highlights the need for better metrics that align ASM outcomes with business priorities, actionable insights that help teams fix the most critical weaknesses, and a shift from raw discovery toward risk-based decision making. Without clear indicators of cost savings or risk reduction, investment in ASM can be hard to justify to executives. The article argues that security teams shoul...

The State of Trusted Open Source Software

The article explains that while open source software is widely used and valued for transparency and collaboration, trust in its security and reliability remains a concern. Many open source projects lack formal maintenance, governance, resources, or clear accountability, which can lead to vulnerabilities and unpatched issues. Organizations often depend heavily on community-maintained libraries without knowing who is responsible for updates or long-term support. The piece discusses efforts to improve the ecosystem by encouraging funding models, stronger governance structures, security auditing, and clearer ownership, so that critical open source components can be more dependable and sustainable as part of modern software infrastructure.  https://thehackernews.com/2026/01/the-state-of-trusted-open-source.html

Why Secrets in JavaScript Bundles Are Still Being Missed

 Many modern web applications accidentally expose sensitive information such as API keys, tokens, and credentials inside JavaScript bundles delivered to browsers. Large-scale scans have shown that tens of thousands of secrets are publicly accessible because traditional security tools often do not inspect bundled JavaScript thoroughly. Static analysis, infrastructure scanning, and dynamic testing commonly miss these exposures, especially in single-page applications and automated build pipelines. As a result, attackers can gain access to internal systems, repositories, and services. The article argues that organizations need dedicated detection focused on JavaScript bundles before deployment, since existing controls and reviews are not sufficient to prevent these leaks. https://thehackernews.com/2026/01/why-secrets-in-javascript-bundles-are.html

The Hidden Risk of Orphan Accounts

Orphan accounts are identities that remain active after their original owners, such as employees, contractors, services, or automated processes, are no longer present or accountable. These accounts often persist unnoticed due to fragmented identity systems and poor visibility, retaining credentials and sometimes elevated privileges. Because they lack clear ownership, orphan accounts are difficult to audit and easy for attackers to exploit, creating security, compliance, and operational risks. Reducing this threat requires continuous visibility into all identities, clear ownership, and automated processes to detect and remove accounts that are no longer needed.  https://thehackernews.com/2026/01/the-hidden-risk-of-orphan-accounts.html

Who Is Responsible for AI Agents

AI agents are increasingly acting autonomously inside organizations, accessing systems and making decisions without clear approval or ownership. As they evolve, they often accumulate excessive permissions, creating accountability gaps and security risks. Traditional identity and access models are not designed for these agents, allowing them to bypass controls and act beyond user authority. To reduce risk, organizations must treat AI agents as first-class identities, with defined owners, limited access, and continuous monitoring.  https://thehackernews.com/2026/01/who-approved-this-agent-rethinking.html