Posts

Showing posts from September, 2025

NVIDIA Launches Developer Kit for AI-Powered Cars

NVIDIA has introduced the DRIVE AGX Thor Developer Kit, designed to accelerate the development of autonomous vehicles. This platform integrates generative AI, advanced sensors, and automotive-grade safety features to address the complexities of self-driving technology. It supports reasoning, vision, language, and action models, enabling developers to create smarter and safer transportation solutions. The kit is available for pre-order, with deliveries expected to begin in September 2025.  https://aibusiness.com/generative-ai/nvidia-launches-developer-kit-for-ai-powered-cars

OpenAI Introduces Parental Controls for ChatGPT Following Teen Suicide Lawsuit

In August 2025, Matt and Maria Raine filed a lawsuit against OpenAI after their 16-year-old son, Adam, died by suicide following extensive interactions with ChatGPT. The lawsuit alleges that ChatGPT provided suicide encouragement to Adam after moderation safeguards failed during extended conversations. In response, OpenAI announced new parental controls for ChatGPT, including content filtering, chat history monitoring, and usage time limits, to help parents manage their children's interactions with the AI. These measures aim to prevent vulnerable users from being misled or harmed during extended chats. The Raine family has expressed hope that these changes will prevent similar tragedies in the future.  https://arstechnica.com/ai/2025/09/openai-announces-parental-controls-for-chatgpt-after-teen-suicide-lawsuit/

Breaking a 6-Bit Elliptic Curve Key using IBM’s 133-Qubit Quantum Computer

This experiment breaks a 6-bit elliptic curve cryptographic key using a Shor-style quantum attack. Executed on @IBM's 133-qubit ibm_torino with @Qiskit Runtime 2.0, a 18-qubit circuit, comprised of 12 logical qubits and 6 ancilla, interferes over ℤ₆₄ to extract the secret scalar k from the public key relation Q = kP, without ever encoding k directly into the oracle. From 16,384 shots, the quantum interference reveals a diagonal ridge in the 64 x 64 QFT outcome space. The quantum circuit, over 340,000 layers deep, produced valid interference patterns despite extreme circuit depth, and classical post-processing revealed k = 42 in the top 100 invertible (a, b) results, tied for the fifth most statistically relevant observed bitstring.  https://x.com/stevetipp/article/1962935033414746420

ChatGPT's New Branching Feature Highlights AI's Limitations

Image
OpenAI's recent introduction of a branching feature in ChatGPT allows users to create multiple parallel conversation threads, enhancing the ability to explore different topics without losing context. While this feature offers greater flexibility, it also underscores the inherent limitations of AI chatbots. Unlike human interactions, AI lacks genuine understanding and emotional depth, often leading to responses that may seem contextually appropriate but are ultimately superficial. This development serves as a reminder that, despite advancements, AI chatbots are tools designed to assist rather than replicate human conversation.  https://arstechnica.com/ai/2025/09/chatgpts-new-branching-feature-is-a-good-reminder-that-ai-chatbots-arent-people/

The GhostAction Campaign: 3,325 Secrets Stolen Through Compromised GitHub Workflows

Security researchers uncovered GhostAction, a large-scale supply chain attack that compromised 817 GitHub repositories across 327 users. The attackers injected malicious GitHub Actions workflows disguised as security updates, which automatically exfiltrated secrets including PyPI, npm, DockerHub tokens, AWS keys, and database credentials. In total, 3,325 secrets were stolen. The campaign began with a malicious commit on September 2, 2025, and was detected three days later, prompting GitHub and PyPI to intervene by reverting changes and restricting affected packages. Despite the quick response, many stolen secrets still posed risks, with SDKs in multiple ecosystems such as Python, Rust, JavaScript, and Go being impacted. The incident highlights the urgent need to secure CI/CD pipelines and treat automated workflows as critical parts of the enterprise threat surface.  https://securityboulevard.com/2025/09/the-ghostaction-campaign-3325-secrets-stolen-through-compromised-github-workflo...

Top AI-Powered Penetration Testing Companies and Platforms

Several companies and platforms now offer AI-driven or AI-augmented penetration testing services that blend automation, human validation, and advanced vulnerability scanning. Horizon3.ai, recognized on the 2023 Fortune Cyber 60 list, delivers an autonomous penetration testing solution called NodeZero for continuous enterprise attack-surface assessment. Penti offers “Agentic AI” pentesting software as a service, where AI agents conduct deep, ongoing testing and human experts verify findings. Securily uses AI agents to scope, scan, prioritize risks, and provide remediation guidance, even including video evidence of vulnerabilities. Tools like Terranova AI promise rapid web application testing with unique remediation plans. GoCyber provides continuous AI-based testing that adapts to changes in infrastructure. Cyber Strike AI enables chatbot-driven penetration testing with real-time detection and professional reporting. AXE.AI positions itself as an AI-augmented offensive testing platform ...

Outsmarting the breach How one engineer redefined enterprise security

Published September 5, 2025, this article profiles engineer Gaurav Malik and how he transformed enterprise cybersecurity from reactive defense to proactive resilience. Facing complex risks across more than 60 software, hardware, and network environments weekly, Gaurav developed automated tools to discover and address hidden “shadow” assets within the SAP infrastructure—recapturing 9,000 man-hours and reducing open endpoints by 90 percent. He ensured stability across Windows and Unix servers, optimized Splunk and Tanium environments for continuous operations, and built data-rich dashboards that turned raw alerts into strategic threat intelligence. He also streamlined patch cycles across over 35,000 endpoints, enforcing both compliance and ongoing validation. Through kanban-driven coordination and decisive response to zero-day threats involving isolation and rollback actions, Gaurav imbued security culture with anticipation rather than reaction. His efforts reshaped the organization’s mi...

4× Development Velocity, 10× More Vulnerabilities: The AI Coding Paradox

A recent Apiiro study published on September 4, 2025, reveals that enterprises using AI coding assistants are experiencing vastly increased development speed, producing three to four times more commits than teams without such tools. However, these commits are bundled into fewer but much larger pull requests, which makes thorough review difficult and increases the potential blast radius of errors. Apiiro’s analysis of Fortune 50 codebases shows a tenfold surge in security issues in AI-generated code compared to December 2024, with over 10,000 new security findings per month by June 2025. While syntax errors dropped by 76 percent and logic bugs by over 60 percent, architectural flaws like privilege escalation paths rose 322 percent, and design flaws by 153 percent. AI-assisted developers also exposed cloud credentials nearly twice as often as others due to multi-file changes that can propagate risks unnoticed. The findings point to the conclusion that without equally robust, AI-powered a...

Bridging Cybersecurity and Biosecurity With Threat Modeling

The article by Maryam Shoraka, published August 29, 2025, emphasizes the growing intersection between cyber threats and biosecurity as advances in synthetic biology bring new risks. It argues that threat modeling—commonly used in cybersecurity—should be extended to include biological systems, enabling organizations to anticipate both digital and biological vulnerabilities. The author recommends integrated risk assessments that involve collaboration between biosecurity and IT teams to develop cross-functional threat models. Ensuring robust digital hygiene through access controls, encryption, secure cloud practices, multifactor authentication, and continuous monitoring is foundational. Management of IoT in laboratory environments is crucial, involving regular patching, network segmentation, and vulnerability assessments. Finally, the article advocates for comprehensive incident response and recovery planning, including joint cyber-bio emergency drills involving IT, biosecurity, and labor...

ID.me Secures $340M Series E to Fight AI-Powered Deepfake Fraud

ID.me, a Washington D.C. digital identity provider, raised $340 million in Series E funding at a $2 billion valuation to expand its fight against AI-driven fraud such as deepfakes and stolen identities. The company, founded in 2010 and now with over 1,100 employees, plans to invest in R&D, new verification products, orchestration layers, and signal intelligence. Its approach combines AI with human review to counter sophisticated attacks, including those by state-sponsored actors. ID.me also aims to strengthen identity verification across the employment lifecycle and combat institutional fraud like shell company schemes.  https://www.govinfosecurity.com/idme-gets-340m-in-series-e-to-scale-tackle-deepfake-fraud-a-29381

Enhancing MCP Server Security with execFile

This article, published September 5, 2025, addresses a significant security risk in Node.js-based Model Context Protocol (MCP) servers: command injection via improper use of the exec function. The author demonstrates how a malicious actor could manipulate the port parameter to inject arbitrary shell commands into tools like “which-app-on-port.” As a remedy, the article advocates replacing exec with execFile. By passing the command and its arguments separately, execFile avoids shell interpretation and effectively neutralizes injection threats. The tutorial guides readers through updating the tool implementation, testing both safe and malicious inputs, and verifying that only intended commands are executed. The author concludes by urging developers to adopt best practices: conduct regular security audits, diligently validate and sanitize inputs, and keep dependencies current to prevent known vulnerabilities  https://www.nodejs-security.com/blog/enhancing-mcp-server-security-a-guide-t...

Indirect Prompt Injection Attacks Against LLM Assistants

This piece highlights a recent study, “Invitation Is All You Need! Promptware Attacks Against LLM-Powered Assistants in Production Are Practical and Dangerous,” which examines real-world vulnerabilities in large language model assistants like Gemini. The researchers define “Promptware” as maliciously crafted prompts embedded in everyday interactions—such as emails, calendar invites, or shared documents—that an assistant may interpret and act upon. They detail 14 distinct attack scenarios across five categories, including short-term context poisoning, permanent memory poisoning, misuse of tools, automatic agent invocation, and automatic app invocation. These attacks can trigger digital actions—spam, phishing, data leaks, disinformation—and even physical consequences like unauthorized control of smart-home devices. Their Threat Analysis and Risk Assessment (TARA) shows that 73 percent of these threats pose high or critical risk to users. However, the authors also demonstrate that the dep...