Posts from June 2025

Major AI & Robotics Moves: IBM Opens NYC Hub and Honor’s $10B Humanoid Robot Plan

IBM has launched watsonx AI Labs, a developer-focused innovation hub in Manhattan that connects startups with IBM’s researchers, engineers, and ventures. This center supports the development of “agentic AI” systems tailored for sectors like customer service, supply chain, cybersecurity, and responsible AI. The initiative also brings on board technology from Seek AI, acquired to power enterprise data agents within the lab, and offers mentorship and access to a $500 million Enterprise AI Venture Fund over the next five years. Concurrently, Chinese smartphone maker Honor has unveiled an ambitious $10 billion AI strategy to evolve from smartphones into a comprehensive AI-device ecosystem. As part of this plan, Honor intends to build its own humanoid robots and collaborate with partners in the robotics space. Its AI-powered system has already helped Unitree Robotics set new running-speed records for humanoids, showcasing how Honor's investment is fueling innovation in embodied AI.  htt...

CAI: Comprehensive Open-Source Framework for AI Safety Testing in Robotics

CAI is an open-source toolkit developed by Alias Robotics for analyzing and testing the safety of robotic systems powered by artificial intelligence. It offers a modular architecture to simulate and evaluate AI behaviors in robotics environments, emphasizing risk detection and automated verification. With customizable scenarios, runtime monitors, and integration plugins, CAI enables developers to assess robot decision-making under diverse, potentially hazardous conditions. The framework supports both offline simulations and real-time operation, facilitating proactive identification of unsafe states, control anomalies, or unintended actions. By equipping robotics teams with automated testing and assessment capabilities, CAI promotes stronger safety assurance practices within the AI robotics development lifecycle.  https://github.com/aliasrobotics/cai
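
As a rough sketch of the runtime-monitor pattern the summary describes (the classes and rule names below are invented for illustration, not CAI's actual API), a monitor can sit between the AI policy and the actuators and veto any proposed action that violates declarative safety rules:

```python
# Hypothetical runtime safety monitor: every action proposed by the AI policy
# is checked against simple declarative rules before it reaches the robot.
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Action:
    name: str
    velocity: float        # m/s requested by the policy
    human_distance: float  # metres to the nearest detected person

# A rule returns a violation message, or None when the action is safe.
SafetyRule = Callable[[Action], Optional[str]]

def speed_limit(max_v: float) -> SafetyRule:
    return lambda a: f"velocity {a.velocity} exceeds {max_v} m/s" if a.velocity > max_v else None

def proximity_slowdown(min_dist: float, slow_v: float) -> SafetyRule:
    def rule(a: Action) -> Optional[str]:
        if a.human_distance < min_dist and a.velocity > slow_v:
            return f"{a.velocity} m/s within {a.human_distance} m of a person"
        return None
    return rule

class RuntimeMonitor:
    def __init__(self, rules: List[SafetyRule]):
        self.rules = rules

    def check(self, action: Action) -> List[str]:
        return [msg for rule in self.rules if (msg := rule(action)) is not None]

monitor = RuntimeMonitor([speed_limit(1.5), proximity_slowdown(2.0, 0.3)])
print(monitor.check(Action("move_to_dock", velocity=1.2, human_distance=0.8)))
# -> ['1.2 m/s within 0.8 m of a person']
```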

GitHub Elevates Code Provenance to Defend Against Supply Chain Attacks

In a recent discussion at Gartner’s Security & Risk Management Summit, GitHub’s Jennifer Schelkopf highlighted the growing hazard of software supply chain attacks—an issue forecasted to impact nearly half of all organizations by year’s end—as threat actors increasingly target popular open‑source components. She explained that inspecting the origin of code artifacts can significantly disrupt such attacks by eliminating implicit trust in builds. Schelkopf emphasized the use of the Supply-chain Levels for Software Artifacts (SLSA) framework, which provides structured integrity controls through artifact attestation—detailing where, how, and by whom code was built. She pointed to Sigstore and Kubernetes’ OPA Gatekeeper as key tools that automate signing and verification within CI/CD pipelines, ensuring any tampering is caught before deployment. Provenance and attestation shift software development from a trust-based model to a trust-verified one. According to Schelkopf, rigged builds—...
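
At its core, provenance verification reduces to comparing the digest of the artifact in hand against the digest recorded in a build-time attestation. A minimal sketch with the attestation layout simplified (real SLSA provenance is a signed in-toto statement, typically verified with Sigstore tooling rather than by hand):

```python
# Simplified sketch of the core check behind artifact attestation: does the
# file we are about to deploy match a digest recorded at build time? The JSON
# layout here is reduced to the essentials.
import hashlib
import json

def sha256_of(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(artifact_path: str, attestation_path: str) -> None:
    with open(attestation_path) as f:
        statement = json.load(f)
    # A provenance statement lists the artifacts ("subjects") it vouches for.
    expected = {s["name"]: s["digest"]["sha256"] for s in statement["subject"]}
    name = artifact_path.rsplit("/", 1)[-1]
    actual = sha256_of(artifact_path)
    if expected.get(name) != actual:
        raise RuntimeError(f"{name}: digest {actual} not in attestation -- refusing to deploy")
    print(f"{name}: build provenance verified")
```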

Critical SQL Injection in LlamaIndex (CVE-2025-1793): Exposing LLM‑Driven Backdoor Risks

LlamaIndex, a popular framework for connecting large language models to vector stores, was found to contain a critical SQL injection vulnerability, CVE-2025-1793. The flaw stemmed from unsanitized inputs flowing from LLM-generated prompts into database queries via methods like vector_store.delete(). In a typical scenario, a user’s natural-language request could be transformed by the LLM into a malicious SQL command—such as "project:X' OR 1=1 --"—leading to unauthorized data deletion, exposure, or manipulation. The vulnerability affects multiple vector store integrations (ClickHouse, Couchbase, DeepLake, Jaguar, Lantern, Nile, OracleDB, SingleStoreDB) and has been addressed in LlamaIndex version 0.12.28. Patches include input sanitization, though rigor varies across database types. The advisory highlights a broader risk: when LLMs encode backend operations without proper sanitization, they can create hidden attack vectors. Developers are urged to apply the patch and implement their own input sanitization as defense in depth.
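
The bug class is ordinary SQL injection with the LLM as the delivery mechanism. A self-contained illustration (generic SQLite, not LlamaIndex's actual code) shows why parameter binding is the fix:

```python
import sqlite3

def seed(conn):
    conn.execute("DROP TABLE IF EXISTS embeddings")
    conn.execute("CREATE TABLE embeddings (id INTEGER, ref_doc_id TEXT)")
    conn.executemany("INSERT INTO embeddings VALUES (?, ?)",
                     [(1, "project:X"), (2, "project:Y"), (3, "project:Z")])

conn = sqlite3.connect(":memory:")
llm_output = "project:X' OR 1=1 --"  # attacker-steered model output

# VULNERABLE: f-string interpolation makes the filter part of the SQL text,
# so the injected OR 1=1 deletes every row, not just project X's.
seed(conn)
conn.execute(f"DELETE FROM embeddings WHERE ref_doc_id = '{llm_output}'")
print(conn.execute("SELECT COUNT(*) FROM embeddings").fetchone()[0])  # 0

# FIXED: a bound parameter is treated purely as data; the malicious string
# simply matches no ref_doc_id and nothing is deleted.
seed(conn)
conn.execute("DELETE FROM embeddings WHERE ref_doc_id = ?", (llm_output,))
print(conn.execute("SELECT COUNT(*) FROM embeddings").fetchone()[0])  # 3
```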

CSA Playbook Empowers Continuous Red‑Teaming of Agentic AI Systems

The Cloud Security Alliance has released a comprehensive guide designed to help security professionals and AI engineers rigorously test autonomous AI agents deployed in sensitive environments. Unlike traditional generative models, agentic AI systems autonomously plan, decide, and act in real-world or virtual contexts, creating fresh attack surfaces in areas such as orchestration logic, persistent memory, and control flows. The guide identifies twelve specific threat categories—including permission hijacking, oversight bypass, goal manipulation, memory poisoning, multi-agent collusion, and source obfuscation—and offers structured test scenarios, red‑team objectives, evaluation metrics, and mitigation approaches for each. It builds on frameworks like CSA’s MAESTRO and OWASP’s AI Exchange, and recommends both open‑source and commercial tools, emphasizing that red‑teaming must be an ongoing, integrated practice throughout the AI development lifecycle.  https://campustechnology.com/arti...

Public Sector Software Vulnerabilities Persist, Widening Security Gap

Applications developed by public sector organizations suffer from significantly more long-standing security flaws than those in the private sector, with 59 percent of public-sector apps carrying vulnerabilities older than a year compared to 42 percent industry-wide. These enduring flaws, caused by neglected patching and configuration weaknesses, accumulate as "security debt" over decades. With such persistence, public services remain highly exposed to threats, underscoring the urgent need for targeted investment, prioritization of secure-by-default practices, and policy support to bring public-sector software up to the security standards commonly found in the private sector.  https://www.helpnetsecurity.com/2025/06/13/public-sector-software-vulnerabilities/

Azul Enhances Java Security with Precision Runtime Vulnerability Detection

Azul’s Intelligence Cloud now includes a runtime vulnerability detection feature that analyzes class-level execution data to identify actual usage of vulnerable code within Java applications. This method significantly reduces false positives—by up to 99%—compared to traditional tools that flag entire components based solely on SBOM or file presence. Leveraging AI-updated knowledge of CVEs mapped to specific Java classes, the system continuously monitors both current and historical runtime behavior, allowing DevOps teams to efficiently triage and prioritize real security risks with no performance impact. The update empowers organizations to reclaim valuable development time, focus on true threats, and enhance their overall security posture.  https://securitybrief.co.nz/story/azul-boosts-java-security-with-improved-runtime-vulnerability-detection

Unit 42 Develops Agentic AI Attack Framework

Unit 42's research introduces a framework illustrating how Agentic AI—autonomous systems capable of independent decision-making—can be weaponized to enhance the speed, scale, and sophistication of cyberattacks. By automating tasks such as reconnaissance, exploitation, and data exfiltration, these AI agents can execute attacks with minimal human intervention. The study highlights a significant reduction in the mean time to exfiltrate (MTTE) data, dropping from nine days in 2021 to two days in 2024, with some incidents occurring in under an hour. Real-world examples include the use of deepfake technology for social engineering, AI-assisted ransomware negotiations, and AI-powered productivity assistants identifying sensitive credentials. The research emphasizes the need for organizations to adapt their cybersecurity strategies to defend against these rapidly evolving threats.  https://www.paloaltonetworks.com/blog/2025/05/unit-42-develops-agentic-ai-attack-framework

Comprehensive Guide to JWT Vulnerabilities and Exploits

PentesterLab's in-depth guide on JSON Web Token (JWT) vulnerabilities offers a thorough examination of common implementation flaws that can lead to serious security breaches. The guide covers a range of issues, including the failure to verify signatures, algorithm confusion attacks, and improper handling of key identifiers. It provides detailed examples of how these vulnerabilities can be exploited, such as crafting malicious tokens to bypass authentication or authorization mechanisms. Additionally, the guide emphasizes the importance of secure coding practices and offers practical advice on mitigating these risks. For hands-on learning, it links to exercises that allow readers to practice exploiting and defending against these vulnerabilities in a controlled environment.  https://pentesterlab.com/blog/jwt-vulnerabilities-attacks-guide
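
As a small illustration of two of the flaws covered, the sketch below uses the PyJWT library to contrast unverified decoding with verification that pins the expected algorithm (the explicit algorithm list is what blocks algorithm-confusion attacks):

```python
# Sketch using the PyJWT library: two of the most common JWT mistakes,
# decoding without verification and leaving the algorithm unpinned.
import jwt  # pip install pyjwt

SECRET = "server-side-hmac-key"
token = jwt.encode({"sub": "alice", "admin": False}, SECRET, algorithm="HS256")

# BROKEN: verification disabled -- any forged token is accepted as-is.
claims = jwt.decode(token, options={"verify_signature": False})

# CORRECT: verify the signature AND pin the expected algorithm. Pinning is
# what prevents alg-confusion attacks, where an attacker re-signs a token
# under a different algorithm (e.g. an RS256 public key reused as an HS256
# secret) to get it accepted.
claims = jwt.decode(token, SECRET, algorithms=["HS256"])
print(claims)  # {'sub': 'alice', 'admin': False}
```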

Enhancing Vulnerability Prioritization: NIST's Proposed Metric for Likely Exploited Vulnerabilities

NIST's Cybersecurity White Paper (CSWP) 41 proposes "Likely Exploited Vulnerabilities" (LEV), a new metric for assessing the likelihood that a vulnerability has been actively exploited. This initiative addresses the limitations of existing tools like the Exploit Prediction Scoring System (EPSS), which has known inaccuracies, and Known Exploited Vulnerability (KEV) lists, which may lack comprehensiveness. By incorporating community-provided probabilities, the proposed metric seeks to provide a more accurate and comprehensive approach to vulnerability remediation efforts. The paper emphasizes the need for collaboration with industry partners to validate and refine the metric, ensuring its effectiveness in real-world applications.   https://csrc.nist.gov/pubs/cswp/41/likely-exploited-vulnerabilities-a-proposed-metric/final
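
For intuition only (the exact LEV definition is in CSWP 41), composing per-window exploitation probabilities such as EPSS scores into a cumulative likelihood looks roughly like this:

```python
# Minimal sketch of the idea behind a "likely exploited" metric: treat each
# observation window's exploitation probability (e.g. an EPSS score, which
# predicts exploitation within the next 30 days) as independent, and compute
# the probability that exploitation happened in at least one window. This is
# the general composition idea only -- see CSWP 41 for NIST's actual formula.
import math

def likely_exploited(window_probs: list[float]) -> float:
    """P(exploited in >= 1 window) = 1 - prod(1 - p_i)."""
    return 1.0 - math.prod(1.0 - p for p in window_probs)

# EPSS scores for one CVE sampled every 30 days over five months.
epss_history = [0.02, 0.05, 0.31, 0.28, 0.12]
print(f"{likely_exploited(epss_history):.2f}")  # ~0.59
```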

Unveiling Malicious npm Packages Through Dynamic Analysis Signals

SafeDep's article explores how dynamic analysis can reveal complex attack chains in open-source packages, complementing static analysis methods. By monitoring runtime behaviors such as network connections and binary executions during package installation, the study identifies abnormal activities indicative of potential threats. A case study on the eslint-config-airbnb-compat package demonstrates how dynamic analysis uncovered a multi-stage remote code execution attack that static analysis had missed. The findings underscore the importance of integrating dynamic analysis into security practices to enhance the detection of sophisticated malicious activities in the software supply chain.   https://safedep.io/digging-into-dynamic-malware-analysis-signals
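
A bare-bones version of the technique (Linux-only, and not SafeDep's actual tooling) traces a package install and flags process executions and network connections, neither of which a legitimate install usually needs:

```python
# Rough illustration: run `npm install` under strace and surface
# install-time execve()/connect() calls from the trace log.
import re
import subprocess
import sys
import tempfile

def trace_install(package: str) -> list[str]:
    with tempfile.NamedTemporaryFile(suffix=".log") as log:
        subprocess.run(
            ["strace", "-f", "-e", "trace=execve,connect", "-o", log.name,
             "npm", "install", package],
            check=False, capture_output=True,
        )
        trace = open(log.name).read()
    # Anything executed beyond node/npm itself, or any outbound connect(),
    # is suspicious during an install step. Crude filter for illustration.
    return [line for line in trace.splitlines()
            if re.search(r"\b(execve|connect)\(", line)
            and "node" not in line and "npm" not in line]

for finding in trace_install(sys.argv[1]):
    print("suspicious:", finding)
```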

Evaluating the Accuracy of Metadata-Based SBOM Generation

  The paper "On the Correctness of Metadata-Based SBOM Generation" presents a large-scale analysis of four widely used Software Bill of Materials (SBOM) generators—Trivy, Syft, Microsoft SBOM Tool, and GitHub Dependency Graph. The study examines 7,876 open-source projects across various programming languages, revealing that all four tools produce inconsistent SBOMs with missing dependencies, leading to incomplete and potentially inaccurate software inventories. Additionally, the authors introduce a novel attack vector termed "parser confusion," which exploits these inconsistencies to conceal malicious or vulnerable packages within the software supply chain. To address these issues, the paper proposes best practices for SBOM generation and introduces a benchmark to guide the development of more accurate and reliable SBOM tools https://www.cs.ucr.edu/~heng/pubs/sbom-dsn24.pdf

Addressing the Cybersecurity Poverty Line: A Call for Inclusive Solutions

The article "Lifting the World Out of the Cybersecurity Poverty" delves into the concept of the "cybersecurity poverty line," highlighting the disparity between large enterprises with robust security measures and smaller organizations lacking adequate protection. It emphasizes that many small and medium-sized businesses (SMBs), educational institutions, and local governments are often excluded from the cybersecurity market due to cost barriers and a lack of tailored solutions. The authors argue that the prevalent "trickle-down" approach to cybersecurity, which focuses on securing large corporations, is insufficient. They advocate for a paradigm shift where technology vendors prioritize building secure-by-default products accessible to organizations with limited resources. Additionally, they stress the importance of creating a more inclusive cybersecurity ecosystem that extends beyond the top-tier enterprises to encompass the broader spectrum of organiz...

Curated Resource Hub for Cybersecurity in Agentic AI Systems

The "Awesome Cybersecurity Agentic AI" repository is a curated collection of resources aimed at enhancing the security of autonomous AI systems. It encompasses a diverse range of materials, including research papers, tools, frameworks, datasets, and community discussions, all focused on the intersection of cybersecurity and agentic AI. This repository serves as a valuable reference for professionals and researchers seeking to understand and address the unique security challenges posed by AI systems capable of autonomous decision-making and actions.   https://github.com/raphabot/awesome-cybersecurity-agentic-ai

Privado: Tool for Mapping Personal Data Flows in Code

Privado is an open-source static analysis tool that helps developers detect and track the flow of personal data within software. It identifies over a hundred types of personal data as they move through code to external services, databases, logs, and APIs. Supporting Java and Python, with plans for JavaScript, Privado provides visual dashboards to aid in privacy audits and compliance efforts. It also facilitates the generation of data safety reports, helping organizations better manage data privacy and security.  https://github.com/Privado-Inc/privado

OWASP AIVSS: Framework for AI Vulnerability Scoring and Security Assessment

The OWASP Artificial Intelligence Vulnerability Scoring System (AIVSS) is an emerging framework designed to assess and quantify security risks in AI systems, including agentic AI, large language models, and generative AI applications. Aimed at developers, security professionals, and organizations, the AIVSS provides a structured methodology to identify, evaluate, and mitigate vulnerabilities specific to AI technologies. The initiative includes an interactive scoring calculator, standardized assessment templates, and comprehensive documentation to guide users through the vulnerability assessment process. By offering a quantifiable approach to AI security, the AIVSS aims to enhance the resilience of AI systems against evolving threats.  https://aivss.owasp.org/

A Leaner Path to Secrets Detection in Code

Wiz has introduced an efficient approach to detecting secrets in source code by fine-tuning a small language model based on LLaMA-3.2-1B. This lightweight model achieves high accuracy—86% precision and 82% recall—while avoiding the drawbacks of traditional regex methods and the resource demands of large language models. By using a smart training pipeline that leverages larger models to label high-quality datasets, combined with LoRA fine-tuning and quantization, Wiz created a compact model that runs effectively on standard CPUs. This innovation enables faster, more scalable, and privacy-conscious secret detection that integrates easily into development workflows, helping organizations reduce false positives and improve code security at scale.  https://www.wiz.io/blog/small-language-model-for-secrets-detection-in-code
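
A generic sketch of the recipe as described (not Wiz's actual code; the named base model is gated, and any small decoder would do): attach low-rank LoRA adapters to a 1B-parameter model and fine-tune it as a binary secret classifier:

```python
# Generic sketch: small causal LM + LoRA adapters, fine-tuned as a binary
# "secret / not a secret" classifier over labeled code snippets.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import LoraConfig, get_peft_model

BASE = "meta-llama/Llama-3.2-1B"  # gated; any small decoder works here

tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForSequenceClassification.from_pretrained(
    BASE, num_labels=2  # 0 = benign string, 1 = real secret
)

# Low-rank adapters on the attention projections: only a tiny fraction of
# weights train, which keeps the fine-tune cheap and the artifact small.
lora = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="SEQ_CLS",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()
# ...then train with a standard transformers.Trainer on snippets labeled by a
# larger model, and quantize the merged model for CPU inference.
```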

The Rise of the AppSec Exploitation Era

Recent findings from leading cybersecurity reports indicate a major shift in how attackers operate: active exploitation of software vulnerabilities has now surpassed phishing and stolen credentials as the primary method of initial compromise. This trend reveals a critical challenge for organizations—security teams are overwhelmed by an ever-growing backlog of known vulnerabilities and often lack the context needed to prioritize and remediate effectively. With new CVEs increasing by approximately 30% annually and legacy issues remaining unresolved, defenders are struggling to keep up. This new "Exploitation Era" demands a transformation in application security, emphasizing smarter vulnerability management, automation, and proactive remediation strategies to meet the pace and precision of modern threats.  https://www.endorlabs.com/learn/appsecs-exploitation-era-what-verizon-mandiant-and-datadog-are-telling-us

AI Takes Center Stage in the Enterprise Software Landscape

The 2025 SVB report on enterprise software reveals a sharp 43% rise in venture capital investment in the U.S. sector during 2024, driven primarily by artificial intelligence. AI and machine learning are now central to startup formation and investor enthusiasm, featuring in one out of every six enterprise software VC deals—double the pre-pandemic rate. The number of VC-backed enterprise software unicorns has grown to 307, accounting for 40% of all U.S. unicorns, up from 31% five years ago. These companies are reaching billion-dollar valuations in just over three years, significantly faster than past cohorts, largely due to soaring valuations and investor urgency around AI. At the same time, nearly a quarter of all exits now occur at the seed stage, reflecting a saturated early-stage market and mounting difficulty in securing Series A rounds. The report concludes that while AI continues to dominate investment narratives, the long-term value of enterprise software innovation will depend o...

JPMorgan Urges Suppliers to Prioritize Security and Modernization

In an open letter to third-party suppliers, JPMorgan emphasizes the critical need for software providers to prioritize security over rapid feature deployment. The firm calls for a modernization of security architectures to optimize protection against evolving cyber threats. This initiative underscores JPMorgan's commitment to enhancing the resilience of its supply chain by fostering stronger security practices among its partners.  https://www.jpmorgan.com/technology/technology-blog/open-letter-to-our-suppliers

2025 Cybersecurity Pulse Report: Strategic Insights from RSAC

The 2025 Cybersecurity Pulse Report, published by ISMG, offers a comprehensive analysis of the evolving cybersecurity landscape, drawing from over 150 expert interviews and sessions from the RSAC 2025 Conference. The report highlights emerging threats such as AI-driven malware, synthetic identity fraud, and challenges facing Security Operations Centers (SOCs). It also delves into critical areas including agentic AI, cloud security, quantum computing, and identity and access management. Serving as a strategic guide, the report provides security leaders with data-backed insights to benchmark their strategies, validate roadmaps, and enhance resilience in the face of escalating digital threats.  https://www.govinfosecurity.com/2025-cybersecurity-pulse-report-a-28529

Addressing the Deepfake Challenge Posed by Google's Veo 3

The article discusses the implications of Google's AI tool, Veo 3, which can generate highly realistic videos from text prompts. While the technology is praised for its capabilities, it also raises concerns about the potential misuse in creating convincing deepfakes. The author emphasizes the need for proactive measures to address the threats to truth and authenticity posed by such advancements in AI-generated media.  https://www.govinfosecurity.com/blogs/how-we-solve-insane-deepfake-video-problem-p-3877

VulnCheck Launches THREATCON1 to Advance Cyber Threat Intelligence

VulnCheck has announced the inaugural THREATCON1 security conference, aiming to unite researchers, analysts, and cybersecurity professionals to showcase cutting-edge research and real-world threat response strategies. The event is designed to increase collaboration and knowledge sharing within the cybersecurity community, focusing on the latest developments in exploit intelligence and threat mitigation.   https://finance.yahoo.com/news/vulncheck-announces-inaugural-threatcon1-security-171800634.html

Implementing Secure by Design Principles for AI

The article emphasizes the necessity of integrating security measures throughout the AI development lifecycle, rather than applying them post-deployment. It highlights that traditional security tools are inadequate for AI systems due to their dynamic and probabilistic nature, which introduces unique vulnerabilities like data poisoning and prompt injection. To address these challenges, the article advocates for a Secure by Design approach, as recommended by the Cybersecurity and Infrastructure Security Agency (CISA), ensuring that security is embedded at every stage of AI system development. This proactive strategy aims to build trust and resilience in AI technologies by anticipating and mitigating potential threats from the outset.   https://www.darkreading.com/vulnerabilities-threats/secure-design-principles-ai

SonarQube Advanced Security: Unified Developer-First Protection for All Code

SonarSource has announced the general availability of SonarQube Advanced Security, an integrated solution designed to enhance both code quality and security within the developer workflow. This release extends SonarQube's capabilities to include comprehensive analysis of first-party, AI-generated, and third-party open-source code. Key features encompass advanced Static Application Security Testing (SAST), Software Composition Analysis (SCA), secrets detection, and Infrastructure as Code (IaC) scanning. By consolidating these tools, SonarQube aims to reduce alert fatigue and streamline the identification and remediation of vulnerabilities, ensuring robust protection across the entire software supply chain.  https://securityboulevard.com/2025/05/sonarqube-advanced-security-now-available-developer-first-security-for-all-code/

OWASP ASVS 5.0 Released - Key Updates and What You Need to Know

The OWASP Foundation has released version 5.0 of the Application Security Verification Standard (ASVS), a major update to their security framework for web applications. This new version features restructured security requirements for better clarity, expanded guidance for cloud and API security, improved DevSecOps integration for CI/CD pipelines, updated threat modeling support, and enhanced compliance mapping with standards like NIST and PCI DSS. ASVS serves as a critical benchmark for developers building secure applications, penetration testers conducting security assessments, and auditors performing compliance reviews.  The standard is available for download from the OWASP ASVS project page, with organizations encouraged to integrate it into their software development lifecycles through code reviews and security testing tools. As a vendor-neutral, community-driven project, OWASP continues to welcome contributions to further develop the standard. This release represents an importa...

Benchmarking OpenGrep: Performance Improvements Explained

This article from *Endor Labs* evaluates the performance enhancements in OpenGrep, the open-source static code analysis engine forked from Semgrep. It compares the tool's speed and efficiency before and after recent optimizations, demonstrating measurable improvements in scans of large codebases. The benchmark results highlight how algorithmic refinements and engineering efforts have reduced scan times while maintaining accuracy. The piece serves as a technical case study for developers interested in static analysis performance and showcases OpenGrep's growing capabilities.  https://www.endorlabs.com/learn/benchmarking-opengrep-performance-improvements

Meet Burp Suite DAST: Your Questions Answered

This article from *PortSwigger* introduces Burp Suite DAST (Dynamic Application Security Testing), a new tool designed to help security professionals identify vulnerabilities in web applications. The post answers common questions about its features, use cases, and how it complements existing Burp Suite offerings. It highlights the tool's ability to automate security testing, integrate with development workflows, and provide actionable insights to improve application security. The article serves as a guide for users looking to understand and adopt Burp Suite DAST in their security practices.  https://portswigger.net/blog/meet-burp-suite-dast-your-questions-answered

In a Polarising World, Cyber Security Faces ‘The Great Sorting’

This article from *ITWeb* discusses how increasing global polarization is impacting cybersecurity, leading to what experts call "The Great Sorting"—a fragmentation of the digital landscape into competing blocs with differing standards and regulations. The piece explores the challenges this poses for businesses, governments, and security professionals, including issues like data sovereignty, supply chain risks, and geopolitical tensions shaping cyber threats. Published recently, the article highlights the need for adaptive strategies to navigate this evolving and divided cybersecurity environment.  https://www.itweb.co.za/article/in-polarising-world-cyber-security-faces-great-sorting/LPp6VMrBJm6MDKQz

Cryptanalyzing LLMs with Nicholas Carlini

A blog post from Security, Cryptography & Whatever discusses Nicholas Carlini's research into cryptanalysis techniques applied to large language models (LLMs). The article explores how vulnerabilities in LLMs can be exploited, including potential attacks that manipulate model outputs or extract sensitive training data. Carlini's work highlights security risks in modern AI systems and underscores the need for robust defenses in machine learning architectures. Published on January 28, 2025, the piece serves as an accessible overview of cutting-edge AI security challenges for researchers and practitioners.  https://securitycryptographywhatever.com/2025/01/28/cryptanalyzing-llms-with-nicholas-carlini/

Post-Quantum Cryptography Migration Roadmap

The Post-Quantum Cryptography Consortium (PQCC) provides a roadmap for transitioning to quantum-resistant cryptographic systems, addressing the threat posed by quantum computing to current encryption standards. The guide outlines steps for organizations to assess risks, prioritize systems, and implement post-quantum algorithms, emphasizing early preparation due to the long migration timeline. The resource aims to help industry, government, and academia adopt secure cryptographic practices before quantum computers become capable of breaking traditional encryption. The roadmap underscores collaboration and standardization efforts led by NIST to ensure a smooth and secure transition.  https://pqcc.org/post-quantum-cryptography-migration-roadmap/
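
Most roadmaps start with a cryptographic inventory. A small sketch using the `cryptography` library to flag certificates whose public keys rely on algorithms Shor's algorithm would break (the categories here are illustrative):

```python
# Inventory step: classify a PEM certificate by whether its public-key
# algorithm is quantum-vulnerable and therefore a migration candidate.
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import rsa, ec, ed25519

QUANTUM_VULNERABLE = (rsa.RSAPublicKey, ec.EllipticCurvePublicKey,
                      ed25519.Ed25519PublicKey)  # all broken by Shor's algorithm

def classify(pem_path: str) -> str:
    with open(pem_path, "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())
    key = cert.public_key()
    if isinstance(key, QUANTUM_VULNERABLE):
        return f"{cert.subject.rfc4514_string()}: MIGRATE ({type(key).__name__})"
    return f"{cert.subject.rfc4514_string()}: review manually"

print(classify("server.pem"))
```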

Vulnerabilities in CISA KEV Are Not Equally Critical, Report Finds

A report reveals that not all vulnerabilities listed in CISA's Known Exploited Vulnerabilities (KEV) catalog are equally critical, despite being flagged as actively exploited. The analysis highlights inconsistencies in severity ratings, with some entries posing minimal risk while others demand urgent attention. The findings suggest the need for more precise prioritization to help organizations allocate resources effectively. Published by SecurityWeek, the article underscores the challenges in vulnerability management and the importance of refining threat intelligence frameworks.  https://www.securityweek.com/vulnerabilities-in-cisa-kev-are-not-equally-critical-report/
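
The report's implication translates naturally into prioritization logic that treats KEV membership as one signal among several rather than an automatic top ranking; the weights below are purely illustrative:

```python
# Rank remediation work by blending severity (CVSS), exploitation likelihood
# (EPSS), and KEV membership, instead of treating every KEV entry as urgent.
from dataclasses import dataclass

@dataclass
class Vuln:
    cve: str
    cvss: float   # 0-10 severity
    epss: float   # 0-1 probability of exploitation in the next 30 days
    in_kev: bool  # listed in CISA KEV

def priority(v: Vuln) -> float:
    # Normalize CVSS to 0-1, blend with EPSS, and boost (not max out) KEV hits.
    return 0.4 * (v.cvss / 10) + 0.4 * v.epss + (0.2 if v.in_kev else 0.0)

backlog = [
    Vuln("CVE-A", cvss=9.8, epss=0.92, in_kev=True),   # urgent on every axis
    Vuln("CVE-B", cvss=5.3, epss=0.01, in_kev=True),   # KEV, but low real risk
    Vuln("CVE-C", cvss=8.1, epss=0.74, in_kev=False),  # not in KEV, still hot
]
for v in sorted(backlog, key=priority, reverse=True):
    print(f"{v.cve}: {priority(v):.2f}")
```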

What is DevSecOps and How AI is Transforming Careers in IT

The article explains DevSecOps, an approach that integrates security into DevOps practices to ensure secure software development from the start. It highlights how AI is revolutionizing DevSecOps by automating security checks, detecting vulnerabilities faster, and improving efficiency. The piece also discusses the growing demand for professionals skilled in AI-powered DevSecOps, emphasizing career opportunities in IT due to this technological shift. Published on May 27, 2025, by India Today, it serves as an educational resource for those interested in modern software development and cybersecurity trends.  https://www.indiatoday.in/education-today/story/what-is-devsecops-and-how-ai-is-transforming-careers-in-it-2730206-2025-05-27

Cyber Canon - Essential Reading for Cybersecurity Professionals

The Cyber Canon is a curated list of essential books and resources for cybersecurity professionals, compiled by experts to provide foundational and advanced knowledge in the field. It includes recommendations for must-read books, articles, and other materials that cover topics such as hacking, cyber warfare, privacy, and risk management. The goal is to guide readers toward high-quality, influential works that shape understanding and practice in cybersecurity. The list is maintained by the Cybersecurity Canon Project, which aims to identify works that are timeless, impactful, and valuable for both newcomers and seasoned professionals.   https://cybercanon.org/

What Most Security Teams Miss: An Engineering Manager’s Take on AppSec with Desmond Lamptey

The interview features Desmond Lamptey, a seasoned software engineering manager, discussing his journey to becoming a "security champion"—a developer who actively advocates for and contributes to secure coding practices. He explains that traditional severity labels like "medium" or "low" often confuse developers about the true urgency of a vulnerability, leading to delays or negligence in remediation. Desmond emphasizes that fostering a culture of security within development teams requires more than mandates; it requires making security enjoyable, relatable, and integrated into everyday workflows. His team succeeded in doing this by gamifying security education, using tools like Secure Code Warrior and rewarding engagement through badges and progression levels. He highlights that success came not from eliminating all vulnerabilities, which is unrealistic, but from increasing awareness and the quality of mistakes, showing developers were thinking differently. He reflects on how...

PLOT4AI: Practical Library Of Threats for Artificial Intelligence

PLOT4AI is a comprehensive threat modeling methodology designed to help developers and organizations build trustworthy AI systems by identifying and mitigating risks across the AI lifecycle. The library encompasses 138 AI-related threats categorized into eight domains: Data & Data Governance, Privacy & Data Protection, Bias, Fairness & Discrimination, Safety & Environmental Impact, Cybersecurity, Ethics & Fundamental Rights, Transparency & Accessibility, and Accountability & Oversight. Developed by privacy engineer and AI advisor Isabel Barberá, PLOT4AI draws inspiration from LINDDUN GO and has evolved since its initial release in 2022 to address the rapidly changing landscape of AI risks. The resource offers a digital card deck, assessment tools, and downloadable materials to facilitate threat modeling sessions, aiming to bridge the gap between technical design and ethical oversight in AI development.  https://plot4.ai/

Security Is Just Engineering Tech Debt (And That's a Good Thing)

The article argues that security vulnerabilities should be viewed as a form of technical debt, akin to software quality issues, rather than as separate, specialized concerns. It emphasizes that many security flaws stem from common engineering shortcomings like poor input validation, inadequate error handling, and misconfigurations. By integrating security considerations into standard engineering practices and treating them as part of the software development lifecycle, organizations can address vulnerabilities more effectively. The author advocates for a shift in mindset where security is seen as an integral aspect of software quality, enabling more proactive and efficient risk management.  https://srajangupta.substack.com/p/security-is-just-engineering-tech

Hacking LLM Applications: A Meticulous Hacker’s Two Cents

The author, Ads Dawson, shares insights into exploiting Large Language Model (LLM) applications by manipulating prompts to bypass filters, extract sensitive data, and induce unintended behaviors. He emphasizes the importance of understanding the underlying models and their training data to identify vulnerabilities. The article advocates for a meticulous approach to testing LLMs, highlighting the need for continuous evaluation and adaptation of security measures as these models evolve.   https://www.bugcrowd.com/blog/hacking-llm-applications-a-meticulous-hackers-two-cents

Security Slows Down Change

The article discusses how security processes, particularly change management, can impede organizational agility by introducing bureaucratic hurdles. It emphasizes the importance of integrating security into the development lifecycle to streamline changes without compromising safety. The author advocates for a shift from gatekeeping to enabling secure innovation, suggesting that security teams should focus on facilitating change rather than obstructing it. By adopting a more collaborative approach, security can support faster and safer deployments.   https://boringappsec.substack.com/p/edition-29-security-slows-down-change

Quantifying AI’s Impact on Data Risk

Varonis' 2025 State of Data Security Report, analyzing over 1,000 real-world IT environments, reveals that 90% of organizations have sensitive data exposed to AI systems, 88% maintain dormant "ghost" user accounts with active access, and 98% utilize unsanctioned AI applications without security oversight. The report highlights a significant gap between rapid AI adoption and lagging security measures, emphasizing the need for integrated security practices to mitigate risks associated with data exposure, unauthorized access, and the proliferation of shadow AI tools.   https://www.resilientcyber.io/p/quantifying-ais-impact-on-data-risk

Threat Modeling with LLMs: Two Years In – Hype, Hope, and a Look at Gemini 2.5 Pro

After two years of exploring AI for threat modeling, the author evaluates Gemini 2.5 Pro using a deliberately vague architecture for a fictional project called “AI Nutrition Pro.” Employing the open-source AI Security Analyzer tool with a STRIDE-based prompt, the model generates a comprehensive threat model. Gemini 2.5 Pro demonstrates strong reasoning capabilities, accurately identifying assets, data flows, and specific threats like prompt injection and API key misuse. However, it shows limitations in defining trust boundaries. The author notes potential bias, since earlier threat models for the project are publicly available, and plans to use unpublished datasets in future evaluations. Overall, the post highlights the advancements and remaining challenges in integrating LLMs into the threat modeling process.  https://xvnpw.github.io/posts/threat-modeling-with-llms-two-years-in-hype-hope-and-a-look-at-gemini-2.5-pro
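
The shape of such a prompt is simple to reproduce. A standalone sketch (the post itself drives this through the AI Security Analyzer tool) that builds a STRIDE request suitable for any chat-completion API:

```python
# Build a STRIDE-based threat modeling prompt from an architecture description.
STRIDE = ["Spoofing", "Tampering", "Repudiation", "Information Disclosure",
          "Denial of Service", "Elevation of Privilege"]

def stride_prompt(architecture_description: str) -> str:
    categories = "\n".join(f"- {c}" for c in STRIDE)
    return (
        "You are a security architect performing threat modeling.\n"
        f"Architecture under review:\n{architecture_description}\n\n"
        "For each STRIDE category below, list concrete threats, the assets "
        "and data flows they target, and a mitigation. Mark any trust "
        "boundary you had to assume rather than infer.\n"
        f"{categories}"
    )

print(stride_prompt("An LLM-backed API that ingests dietitian content "
                    "and answers nutrition questions for client apps."))
```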

Why Prompts Are the New IOCs You Didn't See Coming

The article discusses how prompts used in Large Language Models (LLMs) are becoming critical indicators of compromise (IOCs) in cybersecurity. It highlights cases where threat actors exploited LLMs like Claude for malicious activities such as influence operations, credential stuffing, recruitment fraud, and malware development. The author notes the lack of detailed indicators in reports, emphasizing the importance of analyzing adversarial prompts. To address this, the NOVA framework is introduced as a tool for detecting and hunting malicious prompts, enabling security teams to proactively monitor and mitigate AI-related threats.   https://blog.securitybreak.io/why-prompts-are-the-new-iocs-you-didnt-see-coming-46ecaacafe0a
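
NOVA defines its own YARA-like rule syntax; the sketch below is only a generic illustration of the underlying idea, treating prompts as telemetry that detection rules can hunt over:

```python
# Illustrative prompt-hunting rules: match adversarial intent patterns in
# captured LLM prompts, the way IOC rules match network or file telemetry.
import re
from dataclasses import dataclass

@dataclass
class PromptRule:
    name: str
    pattern: re.Pattern

RULES = [
    PromptRule("credential_stuffing", re.compile(
        r"(combo\s*list|credentials?\s+(dump|list)|try\s+these\s+logins)", re.I)),
    PromptRule("malware_dev", re.compile(
        r"(bypass\s+(edr|antivirus|defender)|obfuscate\s+(payload|shellcode))", re.I)),
    PromptRule("influence_op", re.compile(
        r"(generate\s+\d+\s+(personas|sock\s*puppets)|astroturf)", re.I)),
]

def hunt(prompt: str) -> list[str]:
    return [r.name for r in RULES if r.pattern.search(prompt)]

print(hunt("Write a loader that can bypass EDR and obfuscate the payload"))
# -> ['malware_dev']
```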

Hardening GitHub Actions: Lessons from Recent Attacks

This guide examines recent supply chain attacks exploiting GitHub Actions, such as the Ultralytics cryptominer incident and the tj-actions compromise, highlighting vulnerabilities like excessive permissions and unverified third-party actions. It recommends security measures including setting default workflow permissions to read-only, restricting actions to verified sources, enforcing branch protection rules, and managing secrets with least privilege. The guide emphasizes avoiding unsafe practices like using high-privilege triggers (pull_request_target), exposing all secrets via toJson(secrets), and persisting credentials unnecessarily. It also advises against using self-hosted runners with public repositories due to security risks. Tools like zizmor, gato, and allstar are suggested for auditing and enforcing security policies.   https://www.wiz.io/blog/github-actions-security-guide
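
In the spirit of auditors like zizmor, a minimal script can grep workflows for the specific anti-patterns the guide lists (the checks and regexes here are illustrative, not exhaustive):

```python
# Flag risky patterns in .github/workflows/*.yml files.
import pathlib
import re

CHECKS = {
    "high-privilege trigger": re.compile(r"^\s*pull_request_target\s*:", re.M),
    "dumps all secrets": re.compile(r"toJson\(\s*secrets\s*\)"),
    "credentials persisted": re.compile(r"persist-credentials\s*:\s*true"),
    "action not pinned to a SHA": re.compile(r"uses:\s*\S+@(main|master|v\d+)\b"),
}

for wf in pathlib.Path(".github/workflows").glob("*.y*ml"):
    text = wf.read_text()
    for finding, pattern in CHECKS.items():
        if pattern.search(text):
            print(f"{wf}: {finding}")
```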

Building Uber’s Multi-Cloud Secrets Management Platform

Uber developed a centralized, automated, and scalable secrets management platform to address the challenges of managing over 150,000 secrets across 5,000 microservices, databases, and numerous third-party integrations. To prevent secrets sprawl and enhance security, they implemented preventive measures like a Git pre-commit hook CLI tool to block secret leaks at the source and remediation strategies including real-time and scheduled scanning of code repositories, Slack conversations, and build logs. The platform enforces a Secret Management Standard requiring all secrets to be stored in approved vaults managed by a dedicated Secrets team. Collaborations with internal stakeholders led to the development of systems like the Secure Secret eXchange (SSX), and the platform continues to evolve with features like IDE-integrated security copilots to proactively assist developers.  https://www.uber.com/en-AU/blog/building-ubers-multi-cloud-secrets-management-platform
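
A pre-commit hook of the kind described is small to sketch (patterns illustrative; this is not Uber's internal CLI): scan the staged diff and block the commit when secret-shaped strings appear:

```python
# Save as .git/hooks/pre-commit (executable): fail the commit if the staged
# diff contains anything that looks like a credential.
import re
import subprocess
import sys

SECRET_PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private key block": re.compile(r"-----BEGIN (RSA|EC|OPENSSH) PRIVATE KEY-----"),
    "generic api key": re.compile(
        r"(api|secret)[_-]?key\s*[:=]\s*['\"][A-Za-z0-9/+]{20,}", re.I),
}

staged = subprocess.run(["git", "diff", "--cached", "--unified=0"],
                        capture_output=True, text=True).stdout

hits = [name for line in staged.splitlines() if line.startswith("+")
        for name, pat in SECRET_PATTERNS.items() if pat.search(line)]
if hits:
    print(f"commit blocked, possible secrets: {sorted(set(hits))}", file=sys.stderr)
    sys.exit(1)
```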