
Showing posts from October, 2025

LLM Code Review vs Deterministic SAST Security Tools

The article compares large language model (LLM)‑based code review with traditional deterministic static application security testing (SAST) tools. It highlights that while SAST tools like Semgrep and Checkov are effective for enforcing explicit security policies, they often struggle with subjective or complex scenarios, leading to either false positives or missed vulnerabilities. In contrast, LLMs can evaluate code more flexibly, identifying potential issues that may not be easily captured by predefined rules. The author discusses the benefits and limitations of both approaches, suggesting that integrating LLMs into the security workflow can complement traditional tools by addressing scenarios that require nuanced understanding.  https://blog.fraim.dev/ai_eval_vs_rules/
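
To make the contrast concrete, here is a minimal sketch (not from the article) of the difference in kind: a deterministic check matches a fixed pattern and nothing else, while an LLM review is an open-ended question about the code. The ask_llm callable and the prompt wording are hypothetical placeholders for whatever model client a team uses.

```python
import re

# A deterministic "SAST-style" check: matches a fixed pattern, nothing more.
# It flags every hardcoded AWS-style access key ID, but has no notion of context.
HARDCODED_KEY = re.compile(r"AKIA[0-9A-Z]{16}")

def deterministic_scan(source: str) -> list[str]:
    """Return one finding per hardcoded-looking AWS access key ID."""
    return [f"hardcoded credential: {m.group(0)}"
            for m in HARDCODED_KEY.finditer(source)]

# An LLM-based review, by contrast, is an open-ended question about the code.
# `ask_llm` is a hypothetical stand-in for whatever model client you use.
def llm_review(source: str, ask_llm) -> str:
    prompt = (
        "Review this code for security issues a pattern-based scanner might "
        "miss, such as flawed authorization logic or misuse of an "
        "otherwise-safe API:\n" + source
    )
    return ask_llm(prompt)
```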

Mobile apps expose sensitive data and create privacy risks

A new analysis of 50,000 mobile apps shows that more than 77 percent contain personal data, and many iOS apps omit or misdeclare third-party components, violating transparency rules. The report also finds that 35 percent of iOS apps fail to disclose observed data collection, and 10 percent of Android apps omit required data safety details. Among 183,000 apps reviewed, 18.3 percent use AI, and several thousand transmit information to AI endpoints, raising new risks of sensitive data leaks and intellectual property exposure. To address this, the firm NowSecure has introduced a privacy-focused platform combining static, dynamic, and manual analysis to locate and mitigate leaks in both first- and third-party code.  https://betanews.com/2025/09/29/mobile-apps-expose-sensitive-data-and-create-privacy-risks/

Building a Lasting Security Culture at Microsoft

Microsoft explains how it is embedding a “security-first” mindset across its entire workforce through its Secure Future Initiative. The company revamped its training programs, creating personalized, role-specific content tackling advanced threats like AI and deepfakes, and now requires all employees to complete regular, meaningful security education. Top leadership, including the CEO and CPO, publicly prioritizes security, ties it into performance reviews and compensation, and holds managers accountable. Security is also being integrated into engineering practices via DevSecOps, shift-left methods, and embedding Deputy CISOs into product divisions. Microsoft emphasizes that culture, not just tools, is key, and that sustained engagement, feedback loops, and grassroots ambassador networks are essential to making security a living part of how people work. https://www.microsoft.com/en-us/security/blog/2025/10/13/building-a-lasting-security-culture-at-microsoft/

Autonomous AI Hacking and the Future of Cybersecurity

The article argues that AI agents are increasingly capable of performing full cyberattacks autonomously, conducting reconnaissance, exploiting systems, and maintaining persistence without human intervention. It cites recent demonstrations where AI tools found hundreds of vulnerabilities, chained exploits together, and even ran automated extortion operations. The author warns this trend may upend existing defensive strategies, as attacks could outpace human response. On the defensive side, AI could also empower security teams: vulnerability research may become automated, patching integrated into pipelines, and networks may evolve toward self-healing. The core message: we are approaching a shift where AI changes the balance between attackers and defenders, not just by accelerating old techniques, but by transforming how cyber operations are done.  https://www.schneier.com/blog/archives/2025/10/autonomous-ai-hacking-and-the-future-of-cybersecurity.html

Announcing Google’s New AI Vulnerability Reward Program

Google is launching a new AI Vulnerability Reward Program (AI VRP) to incentivize the discovery of security flaws in its AI systems. The program expands Google’s existing vulnerability rewards framework to include models, APIs, and services such as Gemini, Vertex AI, and other generative AI tools. Researchers who find and responsibly report issues like prompt injection, model theft, data leakage, unauthorized access, and adversarial manipulation may receive bounties. Google also outlines guidelines, scope boundaries, and eligibility rules for submissions, promoting transparency, safety, and collaborative hardening of its AI infrastructure.  https://bughunters.google.com/blog/6116887259840512/announcing-google-s-new-ai-vulnerability-reward-program

Memory Integrity Enforcement: a new era of always-on memory safety for Apple devices

Apple describes the rollout of Memory Integrity Enforcement (MIE), a deeply integrated hardware-software system that combines secure typed memory allocators, the Enhanced Memory Tagging Extension (EMTE) in synchronous mode, and tag confidentiality enforcement to provide always-on defenses against memory corruption vulnerabilities across critical subsystems, including the kernel and userland. The effort spans half a decade of design and collaboration between Apple silicon and OS teams, aiming to block classes of exploits like buffer overflows and use-after-free bugs before they can be chained. In Apple’s evaluation, MIE significantly constrains attacker options and disrupts many contemporary exploit techniques, marking what the company calls “the most significant upgrade to memory safety in the history of consumer operating systems.”  https://security.apple.com/blog/memory-integrity-enforcement/

Keeping Secrets Out of Logs

The author argues there is no silver bullet to prevent secret leaks in logs, but proposes a number of “lead bullet” controls applied in depth. They first catalog common causes of secret leakage—direct logging, “kitchen sink” objects, configuration changes, embedded secrets, telemetry platforms, and user input. Then they present ten mitigation strategies: designing data architecture to centralize logging, transforming/redacting/tokenizing data, introducing domain primitives that distinguish secrets, using read-once objects, customizing log formatters, reinforcing via unit tests, employing sensitive data scanners, preprocessing log streams, applying taint analysis, and empowering people via training and incentives. The recommended overarching strategy is: lay a foundation (culture, definitions, pipeline), understand secret data flows, protect chokepoints, use defense-in-depth, and plan for detection, response, and recovery.  https://allan.reyes.sh/posts/keeping-secrets-out-of-logs/
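
As a concrete illustration of two of the listed controls, customizing log formatters and preprocessing log streams, here is a minimal Python sketch of a redacting log filter. The secret patterns are illustrative placeholders; the article itself layers many more defenses around a control like this.

```python
import logging
import re

# Patterns for values that should never reach a log sink. Illustrative only;
# a real deployment would centralize and maintain these alongside its
# definitions of secret types.
SECRET_PATTERNS = [
    re.compile(r"(?i)(password|token|api[_-]?key)\s*[=:]\s*\S+"),
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key IDs
]

class RedactingFilter(logging.Filter):
    """Scrub known secret patterns from every record before it is emitted."""

    def filter(self, record: logging.LogRecord) -> bool:
        message = record.getMessage()
        for pattern in SECRET_PATTERNS:
            message = pattern.sub("[REDACTED]", message)
        record.msg, record.args = message, None
        return True

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("app")
logger.addFilter(RedactingFilter())

logger.info("login ok, token=eyJhbGciOi")  # logged as "login ok, [REDACTED]"
```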

Nine HTTP Edge Cases Every API Developer Should Understand

The article describes subtle yet dangerous HTTP behaviors that often catch API developers off guard. It covers issues such as complex Range header parsing that can overload servers (as in a Rack vulnerability), inconsistent Content-Type enforcement across frameworks, malformed Accept header negotiation, missing “Allow” headers in 405 responses, compression applied at unexpected layers, character encoding mismatches corrupting data, path traversal flaws, unbounded request sizes leading to DoS, and request smuggling via conflicting Transfer-Encoding and Content-Length headers. It also highlights differences introduced by HTTP/2 and HTTP/3 and argues that while frameworks handle much of HTTP correctly, developers still need to know which edge cases remain their responsibility to defend against.  https://blog.dochia.dev/blog/http_edge_cases/
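
Two of these edge cases lend themselves to short defensive checks. The sketch below (not from the article) rejects the ambiguous framing that enables request smuggling and caps how many byte ranges a single Range header may request; MAX_RANGES is an arbitrary illustrative limit.

```python
MAX_RANGES = 10  # illustrative cap; the Rack fix imposed a similar bound

def validate_framing(headers: dict[str, str]) -> None:
    """Reject ambiguous message framing that enables request smuggling.

    A request carrying both Transfer-Encoding and Content-Length has
    conflicting body lengths, so a front end and an origin server may
    disagree about where one request ends and the next begins.
    """
    h = {k.lower(): v for k, v in headers.items()}
    if "transfer-encoding" in h and "content-length" in h:
        raise ValueError("conflicting Transfer-Encoding and Content-Length")

def count_ranges(range_header: str) -> int:
    """Cap multipart Range requests so one header cannot fan out into
    thousands of response parts (the Rack-style overload)."""
    if not range_header.startswith("bytes="):
        raise ValueError("unsupported range unit")
    n = len(range_header[len("bytes="):].split(","))
    if n > MAX_RANGES:
        raise ValueError(f"too many ranges: {n} > {MAX_RANGES}")
    return n
```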

Supply-Chain Attacks Are Exploiting Our Assumptions

The article argues that modern software development relies on a set of implicit trust assumptions—about package origin, maintainer integrity, build provenance, and distribution chains—that attackers are increasingly undermining. It reviews recent attack vectors such as typosquatting, credential theft, build pipeline poisoning, and malicious maintainers gaining control. To counter these threats, defenders are developing tools and practices like typo-resistance checks (TypoGard/Typomania), static workflow analyzers (Zizmor), trusted publishing with attestations (e.g. in PyPI), Homebrew build provenance, and capability analysis (Go Capslock). The author calls on ecosystems and developers to shift from implicit trust to explicit, verifiable assurances across the software supply chain.  https://blog.trailofbits.com/2025/09/24/supply-chain-attacks-are-exploiting-our-assumptions/
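
As a rough illustration of how typo-resistance checks work, here is a minimal sketch using Python's standard-library difflib. The popularity list is a toy placeholder; tools like TypoGard compare candidates against real registry popularity data and use more refined similarity rules.

```python
from difflib import SequenceMatcher

# Toy allowlist of popular package names; real tools work against
# full registry download statistics.
POPULAR = ["requests", "numpy", "pandas", "cryptography", "urllib3"]

def typosquat_suspects(candidate: str, threshold: float = 0.85) -> list[str]:
    """Return popular names the candidate is suspiciously close to,
    excluding exact matches (which are legitimate installs)."""
    return [
        name for name in POPULAR
        if name != candidate
        and SequenceMatcher(None, candidate, name).ratio() >= threshold
    ]

print(typosquat_suspects("reqeusts"))  # ['requests']
print(typosquat_suspects("requests"))  # []
```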

Vibe Coding: A Pentester’s Dream

The article explores “vibe coding,” a style of software development where AI (via chat interfaces in IDEs) generates code based on prompts with minimal human oversight. The NetSPI team built a vibe-coded web application (a dental services app) and then assessed its security via AI audits and manual penetration testing. They found that while AI could flag and remediate some vulnerabilities (e.g. password hashing, injection protections), it frequently introduced or overlooked serious issues, especially in authorization logic, business rules, and fine-grained access control (e.g. IDOR and role-based flaws). The piece concludes that as AI coding becomes more common, organizations must remain proactive with rigorous testing, especially of authorization, and not blindly trust AI defaults. https://www.netspi.com/blog/executive-blog/web-application-pentesting/vibe-coding-a-pentesters-dream/
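
The class of bug NetSPI highlights is easy to show in miniature. The hypothetical sketch below performs the object-level ownership check that vibe-coded endpoints tended to omit; the data model and names are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Appointment:
    id: int
    owner_id: int

# Hypothetical in-memory store standing in for the app's database.
APPOINTMENTS = {1: Appointment(id=1, owner_id=42)}

def get_appointment(appointment_id: int, current_user_id: int) -> Appointment:
    """Object-level authorization: look up the record, then verify the
    requester owns it. An IDOR-prone endpoint performs only the lookup,
    trusting the client-supplied ID."""
    appt = APPOINTMENTS.get(appointment_id)
    if appt is None:
        raise LookupError("not found")
    if appt.owner_id != current_user_id:
        # Same error as "not found" to avoid leaking that the record exists.
        raise LookupError("not found")
    return appt
```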

Seqra — security-focused static analyzer for Java

The Seqra project is a security-oriented static analysis tool built in Go that combines the data-flow and cross-module strengths of CodeQL with the rule-writing simplicity of Semgrep. It outputs results in the standard SARIF format for CI/CD integration, can scan Java projects, and is free to use under the MIT License (with parts under the Functional Source License). The core engine is source-available under certain conditions, and Seqra emphasizes seamless adoption via its CLI, GitHub Actions, and integration into developer tooling.  https://github.com/seqra/seqra
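
Since Seqra emits standard SARIF, any generic SARIF consumer can post-process its findings. A minimal sketch that assumes only the common SARIF 2.1.0 structure, not any Seqra-specific schema:

```python
import json

def summarize_sarif(path: str) -> list[str]:
    """Flatten a SARIF 2.1.0 file into one line per finding for quick
    triage in CI logs."""
    with open(path) as f:
        sarif = json.load(f)
    lines = []
    for run in sarif.get("runs", []):
        for result in run.get("results", []):
            loc = (result.get("locations") or [{}])[0].get("physicalLocation", {})
            uri = loc.get("artifactLocation", {}).get("uri", "<unknown>")
            line = loc.get("region", {}).get("startLine", "?")
            text = result.get("message", {}).get("text", "")
            lines.append(f'{result.get("ruleId")}: {uri}:{line} {text}')
    return lines
```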

npm registry exploited in credential-phishing campaign via malicious packages

Researchers discovered 175 malicious npm packages that were collectively downloaded around 26,000 times. These packages don’t execute malware on install; instead, they host HTML/JavaScript redirect scripts via npm’s registry and the unpkg CDN to funnel users toward credential-harvesting phishing pages. The campaign, dubbed “Beamglea,” targeted over 135 organizations across the industrial, technology, and energy sectors. Attackers automated the creation of packages and phishing infrastructure, embedding victim emails into redirects to increase legitimacy. The abuse of trusted infrastructure without traditional malware underlines how threat actors are evolving to exploit software ecosystems. https://thehackernews.com/2025/10/175-malicious-npm-packages-with-26000.html

When Diagramming Truly Adds Value in Security Design

The article argues that diagramming should not be a mandatory ritual in every design review but a deliberate choice when it brings clarity or alignment. Diagrams are most useful for complex systems where they expose assumptions, make architecture explicit, and help visualize attack surfaces. However, in simple or well-understood designs, they may add little. With LLMs aiding in diagram creation and analysis, teams can focus on when diagrams genuinely improve understanding. The key is using them iteratively, purposefully, and without rigidity. https://boringappsec.substack.com/p/edition-31-the-role-of-diagramming

Pull Request Nightmare: RCE via misconfigured pull_request_target workflows

Orca Research shows that misconfigured GitHub Actions workflows using the pull_request_target trigger can be abused by untrusted pull requests to achieve remote code execution, exfiltrate secrets, and enable supply-chain compromise. The researchers detail attack techniques, real-world impact observed across large organizations, and concrete mitigations, such as avoiding privileged workflows for untrusted PRs, gating or validating inputs, restricting Actions and runners, and enforcing least-privilege workflow design.  https://orca.security/resources/blog/pull-request-nightmare-github-actions-rce https://orca.security/resources/blog/pull-request-nightmare-part-2-exploits
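
In the spirit of those mitigations, a minimal heuristic scanner (an illustrative sketch, not Orca's tooling) can flag the dangerous combination the posts describe: a workflow triggered by pull_request_target that also checks out the untrusted PR head. It requires PyYAML and only catches the most literal form of the pattern.

```python
import sys
import yaml  # pip install pyyaml

def risky_checkout(workflow_path: str) -> list[str]:
    """Flag jobs that run on pull_request_target AND check out the PR
    head: the workflow then runs with repository secrets while executing
    attacker-controlled code."""
    with open(workflow_path) as f:
        wf = yaml.safe_load(f)
    # PyYAML parses the bare key `on` as boolean True, so check both.
    triggers = wf.get("on", wf.get(True, {}))
    if not isinstance(triggers, (list, dict)):
        triggers = [triggers]
    if "pull_request_target" not in triggers:
        return []
    findings = []
    for name, job in wf.get("jobs", {}).items():
        for step in job.get("steps", []):
            uses = step.get("uses", "") or ""
            ref = str(step.get("with", {}).get("ref", ""))
            if uses.startswith("actions/checkout") and "head" in ref:
                findings.append(
                    f"{workflow_path}: job '{name}' checks out untrusted PR head"
                )
    return findings

if __name__ == "__main__":
    for finding in risky_checkout(sys.argv[1]):
        print(finding)
```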

Proofs-of-Concept for Release Tampering via GitHub Actions

This GitHub repository contains PoCs (proofs of concept) demonstrating how a malicious maintainer, one who already has commit or maintainer access, can stealthily tamper with software releases built via GitHub Actions workflows. The repository was presented at fwd:cloudsec Europe 2025. The content begins by defining the threat model: a maintainer who wants to hide malicious changes in release artifacts without altering the source code. It then walks through multiple attack paths across the SLSA pipeline stages (Source, Build, Distribution). The first path exploits the fact that GitHub Releases are mutable by default, so a maintainer can alter assets after publishing. Another path uses a typosquatted third-party GitHub Action to insert malicious behavior during the build. Other variants include abusing controlled runners (hosted or self-hosted), manipulating checkout behavior, or using orphan commits to erase traces. For each attack path, the repository includes OPSEC considerations.
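
On the detection side, the mutability of published releases can be checked from the outside. Here is a minimal sketch (my own illustration, not part of the PoC repository) that hashes a release's assets via the public GitHub API so they can be compared against a manifest pinned at publish time:

```python
import hashlib
import json
import urllib.request

def release_digests(owner: str, repo: str, tag: str) -> dict[str, str]:
    """Download each asset of a GitHub release and return its SHA-256.

    Because releases are mutable by default, comparing these digests
    against a manifest recorded at publish time (in CI output, an
    attestation, or a committed checksum file) detects post-publish swaps.
    """
    url = f"https://api.github.com/repos/{owner}/{repo}/releases/tags/{tag}"
    with urllib.request.urlopen(url) as resp:
        release = json.load(resp)
    digests = {}
    for asset in release["assets"]:
        with urllib.request.urlopen(asset["browser_download_url"]) as data:
            digests[asset["name"]] = hashlib.sha256(data.read()).hexdigest()
    return digests
```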

IMDS Abused: Hunting Rare Behaviors to Uncover Exploits

This blog post describes how attackers increasingly abuse the cloud Instance Metadata Service (IMDS) to steal credentials, move laterally, and escalate privileges. It explains that IMDS allows cloud instances to retrieve temporary credentials securely, but the weaker IMDSv1 is vulnerable to Server-Side Request Forgery (SSRF) attacks, so enforcing IMDSv2 is important. The authors present a data-driven threat-hunting methodology: establish a baseline of normal IMDS usage, flag processes that access IMDS rarely or anomalously, focus on sensitive metadata paths, and use contextual signals to prioritize threats. Using this approach, they uncovered a zero-day SSRF vulnerability in Pandoc (CVE-2025-51591), exploited via embedded iframes pointing to IMDS, and another SSRF abuse in ClickHouse via vulnerable URL functions. They emphasize proactive prevention and real-time detection: enforcing IMDSv2, applying least-privilege roles, and continuously monitoring for anomalous IMDS behavior.
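
The IMDSv2 hardening the authors recommend is visible in the client flow itself: a session token must first be obtained with a PUT request, which a bare GET-based SSRF (like the Pandoc iframe trick) cannot issue, whereas IMDSv1 answers the plain GET directly. A minimal sketch against the AWS metadata endpoint:

```python
import urllib.request

IMDS = "http://169.254.169.254"

def imdsv2_get(path: str, ttl_seconds: int = 60) -> str:
    """Fetch instance metadata the IMDSv2 way: PUT for a session token,
    then GET with the token attached."""
    token_req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
    )
    with urllib.request.urlopen(token_req) as resp:
        token = resp.read().decode()
    data_req = urllib.request.Request(
        f"{IMDS}{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )
    with urllib.request.urlopen(data_req) as resp:
        return resp.read().decode()

# e.g. imdsv2_get("/latest/meta-data/iam/security-credentials/")
```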

Post-Quantum Cryptography Conference 2025 — Kuala Lumpur

The PKI Consortium will host its fourth Post-Quantum Cryptography (PQC) Conference from October 28 to 30, 2025 in Kuala Lumpur (and online) at the Connexion Conference & Event Centre. The event includes hands-on workshops, expert talks, panels, and breakout sessions, all focused on preparing for the transition to quantum-resistant cryptographic systems. Registration is free, though attendees are responsible for their own travel and lodging. Speakers will include leading figures in cryptography, PKI, and quantum security, and content is structured to balance strategy, technical depth, and education across the three days. https://pkic.org/events/2025/pqc-conference-kuala-lumpur-my/

Qinsight — Enterprise Cryptographic Posture Management

Qinsight is a SaaS platform focused on giving organizations visibility into their cryptographic assets across TLS, SSH, certificates, and encryption protocols. It helps assess and score cryptographic risk, flag vulnerabilities (including quantum-vulnerable algorithms), and provides guidance for remediation. The platform is designed to aid compliance, prepare for post-quantum cryptographic transitions, and reduce blind spots in how encryption is used across enterprise systems. https://www.qinsight.com/

The 2025 State of Security Champions Report

The report from Katilyst combines original survey data from 33 organizations with external benchmarks (like BSIMM15) to provide a real-world view of how security champion programs currently operate. It shows that most programs are under four years old, reveals how older programs expand their scope (from secure coding toward governance and threat modeling), and demonstrates a correlation between champion adoption and program maturity: top-tier firms tend to more fully integrate champion initiatives across departments. The report is intended as a benchmarking tool and a guide for scaling security culture effectively.  https://www.katilyst.com/state-of-security-champions-report-2025

Two-Thirds of Organizations Report Cybersecurity Roles Going Unfilled

The article highlights a pervasive talent shortage in cybersecurity, noting that 65 percent of organizations currently have open cybersecurity positions they cannot staff. It explores contributing factors such as skill mismatches, recruitment challenges, and structural barriers, and argues that addressing the gap will require changes in hiring practices, training pipelines, and industry expectations.  https://www.infosecurity-magazine.com/news/two-thirds-unfilled-cybersecurity/

Responding to the Shai-Hulud Attack Aftermath

The article from Defendermate describes how their team is launching a freely accessible, continuously updated list of npm packages affected by the Shai-Hulud incident. They explain that even though the initial spread of malicious code may have been contained, many security teams are still grappling with residual risks and hidden dependencies. Defendermate positions this curated resource as a way to aid organizations in assessing exposure, prioritizing remediation, and staying ahead of potential downstream impacts from the attack.  https://defendermate.com/whatsnew/shai-hulud