Posts

Showing posts from January, 2026

Prompt Injection Is Not SQL Injection

The blog explains that while prompt injection and SQL injection both involve untrusted input influencing system behavior, they are fundamentally different. SQL injection exploits how structured queries are interpreted by a database engine, whereas prompt injection manipulates how an AI model interprets or continues a natural language instruction. Because AI models don’t enforce boundaries or a defined grammar the way a database does, traditional defenses like parameterization don’t directly apply. The post warns against treating prompt injection like a conventional code injection flaw and suggests designing AI-involved systems with explicit context isolation, careful prompt construction, and runtime constraints so untrusted content can’t alter intended instructions.  https://www.ncsc.gov.uk/blog-post/prompt-injection-is-not-sql-injection

Hacking Clawdbot and Eating Lobster Souls

The post describes how the author examined real-world deployments of Clawdbot , an open-source AI agent that connects large language models to messaging platforms and can execute tools for users. He found hundreds of publicly exposed control interfaces that give attackers easy access to credentials, conversation histories, and command execution on behalf of the owner. Because many deployments were misconfigured or left with development defaults, they exposed API keys, bot tokens, OAuth secrets, and even root access. The article uses this “butler gone rogue” metaphor to highlight the security trade-offs of autonomous agents and stresses the need for better defaults, hardened configurations, and careful consideration of the risks posed by pervasive, autonomous AI infrastructure.  https://www.linkedin.com/pulse/hacking-clawdbot-eating-lobster-souls-jamieson-o-reilly-whhlc/

Automated React2Shell Vulnerability Patching Now Available

Vercel announced that it has added automatic patching for the React2Shell vulnerability across its platform. This means Vercel will now detect projects affected by this security issue and apply patches without requiring manual steps from developers. The update improves security by reducing the window of exposure and lowering the operational burden on teams who might otherwise have to identify vulnerable dependencies and fix them manually. This automated capability helps ensure that applications deployed on Vercel remain protected against the specific React2Shell risk with minimal intervention from developers. https://vercel.com/changelog/automated-react2shell-vulnerability-patching-is-now-available

Public Container Registry Security Risks and Malicious Images

The article explains that public container registries pose significant security risks because anyone can publish images there, including potentially malicious actors. Threats include images with embedded malware, cryptojacking tools, backdoors, or names mimicking legitimate images to trick users. The piece highlights how attackers can exploit weak naming conventions, typosquatting, and unattended or abandoned images to get users to pull harmful content. It discusses credential leakage when images are built with secrets, lack of image provenance and trust metadata, and insufficient scanning for known vulnerabilities. The article recommends mitigating these risks by using signed and provenance-verified images, enforcing registry access controls, scanning images for malware and vulnerabilities before deployment, establishing internal trusted registries or mirrors, and implementing supply chain security practices so that only vetted and traceable images are used in production.  https:/...

Signing Your Artifacts for Security, Quality, and Compliance

The article explains why signing software artifacts matters for trust, security, and regulatory requirements. It shows how cryptographic signatures prove who built a release and ensure that its contents haven’t been tampered with, making supply chain attacks and unauthorized modifications easier to detect. It discusses common signing technologies like GPG and X.509 certificates, how they integrate with build systems and package ecosystems, and why reproducible builds are important to validate signatures. The article also covers practical best practices such as managing signing keys securely, automating signing in CI/CD pipelines, and validating signatures when consuming artifacts to improve quality assurance and meet compliance obligations.  https://www.endorlabs.com/learn/signing-your-artifacts-for-security-quality-and-compliance

GitHub Actions Can Be Dependencies Too

The article explains that workflows and actions used in GitHub Actions aren’t just configuration files but can introduce real dependencies and risks because they execute code from potentially external sources. It shows how actions from the marketplace, public repositories, or even referenced by git URLs and tags can change and pull in updated code, making them difficult to control. The piece walks through examples of how an attacker could compromise an action or influence workflow behavior and recommends treating actions like code dependencies: use pinned versions, review code before using it, host trusted actions internally, and monitor for changes. It stresses that without careful management, Actions can create supply-chain security problems just like libraries or packages.  https://www.endorlabs.com/learn/surprise-your-github-actions-are-dependencies-too

What to Look for in AI Compliance Tools

The article argues that AI compliance cannot be handled with spreadsheets or traditional GRC tools because AI systems generate high-volume, dynamic interactions through APIs and prompts. Effective AI compliance tools must monitor AI usage in real time, especially at the API layer, capture prompt and response context, and automatically map activity to recognized frameworks like OWASP LLM Top 10 and MITRE ATLAS. The focus shifts from documenting intent and policies to observing actual AI behavior, producing continuous evidence, detecting violations early, and supporting audits through automated, operational visibility.  https://securityboulevard.com/2026/01/ai-compliance-tools-what-to-look-for-firetail-blog/

How California Polytechnic State University Centralized IT With AppsAnywhere (special post)

This time I will do something different here. I don't have 1h to watch this https://www.youtube.com/watch?v=H_MYZmGT3UY Do I've took the transcription and I've asked chatgpt some questions what they did They centralized campus IT into a service-oriented organization, moved core systems to the cloud, and created a single “software hub” where students, faculty, and staff can access all approved software. They standardized software versions across labs, personal devices, and virtual labs, enabled self-service downloads and remote access, integrated support, knowledge base, and service catalog into one front door, and used data and analytics to manage usage, licensing, support demand, and continuous improvement. how they did They did it by first reorganizing IT from siloed technical teams into a plan–build–run, service-focused model, with clear ownership, documentation, and operational gates before services went live. They moved infrastructure to the cloud, adopted a single ser...

How Centralized Application Management Simplifies Campus IT Operations

Centralized application management allows campus IT teams to support remote access while maintaining compliance and reducing operational strain. By managing applications from a single platform, IT can streamline software distribution across devices without outdated imaging or complex virtualization tools. This approach makes it easier to ensure consistent configurations, reduce helpdesk load, improve user experience for students and staff, and maintain regulatory or licensing compliance, ultimately simplifying overall IT operations on campus.  https://www.timeshighereducation.com/campus/how-centralized-application-management-simplifies-campus-it-operations

Shostack on the NIST SSDF v1.2 Draft

 Adam Shostack wrote that NIST has released a public draft of version 1.2 of NIST 800-218, the Secure Software Development Framework, and invited comments by January 30, 2026. He noted that if that doesn’t matter to you, you can ignore it. He mentioned a news story discussing the draft’s view of application security as a journey and expressed a wish that the document frame its focus on software security issues rather than just software vulnerabilities https://shostack.org/blog/nist-800-218-revision/

RedBench: A Universal Dataset for Comprehensive Red Teaming of Large Language Models

The paper introduces RedBench , a unified dataset designed to improve how large language models (LLMs) are evaluated for safety and robustness. Existing red-teaming datasets are inconsistent in how they categorize risks and cover different types of attacks, which makes it hard to systematically test models. RedBench aggregates and standardizes 37 existing datasets into a consistent taxonomy of risk categories and domains, with tens of thousands of samples of both adversarial and refusal prompts. The authors analyze gaps in current datasets, provide baseline evaluations for modern LLMs, and open-source the dataset and evaluation code to support better, more comprehensive LLM safety research.  https://huggingface.co/papers/2601.03699

Ralph Wiggum Loop and the Need for a Principal Skinner Harness

 The article discusses a pattern for autonomous AI agents called the Ralph Wiggum loop , in which a model repeatedly runs in a stateless loop, feeding instructions into itself until a completion condition is met. This approach avoids context rot by resetting the model’s memory each iteration and relying on file systems or version control instead. While persistent iteration can make an agent tireless and effective on long tasks, it also creates governance risks because the agent may continue indefinitely or take harmful actions without supervision. To address this, the author argues that builders need a Principal Skinner harness , a structural control layer that enforces rules, monitors agent behavior, and prevents destructive actions. This harness intercepts and evaluates each tool call, implements deterministic safety controls, and distinguishes agent activity so that organizations can govern long-running autonomous agents safely. https://securetrajectories.substack.com/p/ralph-wi...

SBOMs in 2026: Some Love, Some Hate, Much Ambivalence

Cybersecurity experts remain divided about the value of software bills of materials (SBOMs) in 2026. In theory, SBOMs are praised for improving transparency and helping defenders understand what components make up software, which could aid vulnerability management. In practice, however, they are often messy, inconsistent, hard to generate accurately, and difficult to use at scale. The rapid evolution of software ecosystems and challenges in creating end-to-end verified component records have led to skepticism among some practitioners, while others still see potential if tooling and standards improve. Overall, the debate reflects mixed sentiments about how useful SBOMs actually are for improving security  https://www.darkreading.com/application-security/sboms-in-2026-some-love-some-hate-much-ambivalence

Latin American Organisations Lack Confidence in Cyber Defences

A report from the World Economic Forum shows that organisations in Latin America and the Caribbean have the lowest confidence in their country’s ability to defend critical infrastructure against cyberattacks , with only about 13% expressing confidence while nearly half lack faith in preparedness. This lack of trust reflects broader challenges including a shortage of cybersecurity skills , limited resources, and gaps in governance and infrastructure as digital ecosystems expand rapidly. The shortage of trained professionals is seen as a major factor weakening regional cyber resilience, and efforts to build talent and capability are needed to improve defences as threats grow.  https://www.darkreading.com/cyber-risk/latin-american-confidence-cyber-defenses-skills

CVE-2025-68428 Critical Path Traversal in jsPDF

The article explains a high-severity vulnerability tracked as CVE-2025-68428 in the popular jsPDF JavaScript library used to generate PDF files in web applications. The flaw is a path traversal issue that could allow attackers to craft malicious input enabling access to files outside of intended directories when jsPDF is used in certain server-side or file-serving contexts. If exploited, this can lead to unauthorized file access, potential data leakage, or the ability to include unintended local resources in generated PDFs. The article stresses the importance of updating to patched versions of jsPDF, reviewing use of the library in applications, and applying secure coding and input validation practices to mitigate such critical vulnerabilities before they can be abused in the wild.  https://www.endorlabs.com/learn/cve-2025-68428-critical-path-traversal-in-jspdf

Astronomer Modernizes AppSec with Endor Labs

The article describes how Astronomer , a data engineering company, improved its application security by adopting Endor Labs’ security platform . Astronomer faced challenges securing complex code pipelines, dependencies, and distributed environments using traditional tools. By integrating Endor Labs into its development processes, Astronomer gained automated detection of vulnerabilities , better visibility into risky software components, and real-time feedback for developers. The solution helped the team catch security issues earlier, reduce manual effort, and streamline secure deployment practices. The article highlights how proactive, integrated security tooling can help modern engineering teams protect software without slowing down development.  https://www.endorlabs.com/learn/astronomer-modernizes-appsec-with-endor-labs

AI-Aware Code Review Prevents Breaches

The article explains that traditional code review processes often miss subtle security vulnerabilities, especially as modern applications integrate complex dependencies and AI-generated code. By using AI-aware code review tools that understand security patterns, data flows, and attack techniques, development teams can catch issues earlier and reduce the risk of breaches. These tools analyze code in context, identify risky constructs, and provide guidance that goes beyond simple syntax checks. Integrating AI-driven security analysis into the development lifecycle helps teams improve overall code quality, prevent common coding mistakes that lead to vulnerabilities, and strengthen defenses before software is deployed. Continuous review, training, and automation are highlighted as best practices to make code reviews more effective and reduce the likelihood of security incidents.  https://www.endorlabs.com/learn/ai-aware-code-review-breaches

Ethereum Foundation Launches Post-Quantum Security Team

The Ethereum Foundation has elevated post-quantum cryptographic security to a top strategic priority by forming a dedicated team focused on preparing the blockchain for future quantum computing threats. Quantum computers could one day break the cryptographic algorithms that secure digital wallets and transactions, so the new group will research and develop quantum-resistant solutions to safeguard the network’s integrity and the value it protects. The initiative includes funding, developer sessions, community engagement, and collaboration on test networks aimed at building and implementing post-quantum cryptography well before such threats materialize, reflecting a proactive effort to future-proof Ethereum  https://cryptorank.io/news/feed/68046-ethereum-quantum-security-post-quantum-team

Do You Still Need Antivirus Software on Windows

Experts say that antivirus software remains important, but for many everyday users running Windows 11 the built-in Microsoft Defender security suite is often enough on its own. Defender provides real-time protection and frequent updates that compete with paid products, and when combined with sensible browsing habits it can protect most personal computers without extra software. However, for businesses or individuals handling sensitive data or facing more advanced threats like ransomware and phishing, additional specialized antivirus protection beyond Windows Security is still recommended.  https://www.bgr.com/2083446/windows-antivirus-necessary-according-experts/

ThreatModeler Acquires Competing Threat Modeling Startup IriusRisk

The article reports that ThreatModeler, a provider of threat modeling software, has acquired IriusRisk, a competing startup in the same space. This consolidation brings together two established tools used by organizations to identify, manage, and mitigate security threats in software and systems. The combined company aims to leverage the strengths of both platforms to offer broader capabilities and better support for enterprise customers, reducing manual effort and improving security workflows. The acquisition reflects ongoing demand for integrated security solutions as organizations seek to build secure software more efficiently.  https://siliconangle.com/2026/01/08/threatmodeler-acquires-competing-threat-modelling-startup-iriusrisk/

Software Supply Chain Security Is More Than Open Source

The webinar explains that focusing only on open source vulnerabilities is not enough to secure a software supply chain. While open source components are a critical part of modern development, there are other blind spots that also need attention. These include ensuring the integrity of build artifacts, securing development and deployment pipelines, protecting container images, and addressing emerging risks from components such as AI models. Effective software supply chain security requires a broader approach that goes beyond identifying open source flaws and includes securing all parts of the software delivery process, from code through deployment https://www.govinfosecurity.com/webinars/webinar-software-supply-chain-security-more-than-open-source-w-6759

Audio Accessory Flaw Turns Headphones into Spy Tools

A security flaw dubbed WhisperPair affects how many Bluetooth audio accessories implement Google’s Fast Pair protocol, allowing an attacker to force a wireless accessory such as headphones or earbuds to pair with a malicious device even when not in pairing mode. Once paired, an attacker could activate the microphone to eavesdrop on conversations, play sounds through the headphones, or track the victim’s location using features like device geolocation tracking. The vulnerability works at realistic distances without physical access and is present in products from multiple manufacturers. Fixing it requires firmware patches from the accessory makers, and updating the phone’s operating system alone may not protect users  https://www.govinfosecurity.com/audio-accessory-flaw-converts-headphones-into-spy-tool-a-30595

Why AI Keeps Falling for Prompt Injection Attacks

The article explains that prompt injection attacks remain a persistent vulnerability in AI systems because the foundational design of large language models lacks true understanding or control over how instructions are interpreted. Prompt injection works by embedding malicious directives into user input that the model then executes, often unintentionally. These attacks exploit the fact that AI models treat all text in a prompt as guidance, making it difficult to distinguish between legitimate instructions and harmful ones. Defensive measures like input sanitization, context filtering, and strict output controls help to some extent, but don’t fully solve the problem because models are built to follow the user’s words. The article argues that prompt injections are not bugs but a structural weakness of current AI architectures , and that meaningful mitigation will require rethinking how AI systems interpret and enforce boundaries between safe and unsafe instructions.  https://www.schn...

Congratulations to ThreatModeler and IriusRisk

The blog post celebrates the merger of two enterprise-grade threat modeling software companies, ThreatModeler and IriusRisk, into a single organization. The author welcomes this as a positive start to 2026 and explains that enterprise tools differ from simpler options by enabling issue tracking, change management, and better visibility into security work, reducing manual effort and freeing analysts to focus on higher-value tasks. He expresses enthusiasm for the combined team and the product development potential of uniting the strengths of both companies.  https://shostack.org/blog/congratulations-to-threatmodeler/

The ROI Problem in Attack Surface Management

The article discusses how many organizations struggle to show a clear return on investment for attack surface management (ASM) programs despite increasing risk exposure. As digital environments grow in complexity, security teams are expected to continuously discover, monitor, and reduce exposures across assets, cloud resources, credentials, APIs, and internet-facing services. However, ASM often generates large volumes of findings that are hard to prioritize, with business leaders questioning the value because it is difficult to link surface reduction directly to risk reduction or financial impact. The piece highlights the need for better metrics that align ASM outcomes with business priorities, actionable insights that help teams fix the most critical weaknesses, and a shift from raw discovery toward risk-based decision making. Without clear indicators of cost savings or risk reduction, investment in ASM can be hard to justify to executives. The article argues that security teams shoul...

The State of Trusted Open Source Software

The article explains that while open source software is widely used and valued for transparency and collaboration, trust in its security and reliability remains a concern. Many open source projects lack formal maintenance, governance, resources, or clear accountability, which can lead to vulnerabilities and unpatched issues. Organizations often depend heavily on community-maintained libraries without knowing who is responsible for updates or long-term support. The piece discusses efforts to improve the ecosystem by encouraging funding models, stronger governance structures, security auditing, and clearer ownership, so that critical open source components can be more dependable and sustainable as part of modern software infrastructure.  https://thehackernews.com/2026/01/the-state-of-trusted-open-source.html

Why Secrets in JavaScript Bundles Are Still Being Missed

 Many modern web applications accidentally expose sensitive information such as API keys, tokens, and credentials inside JavaScript bundles delivered to browsers. Large-scale scans have shown that tens of thousands of secrets are publicly accessible because traditional security tools often do not inspect bundled JavaScript thoroughly. Static analysis, infrastructure scanning, and dynamic testing commonly miss these exposures, especially in single-page applications and automated build pipelines. As a result, attackers can gain access to internal systems, repositories, and services. The article argues that organizations need dedicated detection focused on JavaScript bundles before deployment, since existing controls and reviews are not sufficient to prevent these leaks. https://thehackernews.com/2026/01/why-secrets-in-javascript-bundles-are.html

The Hidden Risk of Orphan Accounts

Orphan accounts are identities that remain active after their original owners, such as employees, contractors, services, or automated processes, are no longer present or accountable. These accounts often persist unnoticed due to fragmented identity systems and poor visibility, retaining credentials and sometimes elevated privileges. Because they lack clear ownership, orphan accounts are difficult to audit and easy for attackers to exploit, creating security, compliance, and operational risks. Reducing this threat requires continuous visibility into all identities, clear ownership, and automated processes to detect and remove accounts that are no longer needed.  https://thehackernews.com/2026/01/the-hidden-risk-of-orphan-accounts.html

Who Is Responsible for AI Agents

AI agents are increasingly acting autonomously inside organizations, accessing systems and making decisions without clear approval or ownership. As they evolve, they often accumulate excessive permissions, creating accountability gaps and security risks. Traditional identity and access models are not designed for these agents, allowing them to bypass controls and act beyond user authority. To reduce risk, organizations must treat AI agents as first-class identities, with defined owners, limited access, and continuous monitoring.  https://thehackernews.com/2026/01/who-approved-this-agent-rethinking.html