Posts

Major AI & Robotics Moves: IBM Opens NYC Hub and Honor’s $10B Humanoid Robot Plan

IBM has launched watsonx AI Labs , a developer-focused innovation hub in Manhattan that connects startups with IBM’s researchers, engineers, and ventures. This center supports the development of “agentic AI” systems tailored for sectors like customer service, supply chain, cybersecurity, and responsible AI. The initiative also brings onboard technology from Seek AI, acquired to power enterprise data agents within the lab, and offers mentorship and access to a $500 million Enterprise AI Venture Fund over the next five years. Concurrently, Chinese smartphone maker Honor has unveiled an ambitious $10 billion AI strategy to evolve from smartphones into a comprehensive AI-device ecosystem. As part of this plan, Honor intends to build its own humanoid robots and collaborate with partners in the robotics space. Its AI-powered system has already helped Unitree Robotics set new running-speed records for humanoids, showcasing how Honor's investment is fueling innovation in embodied AI.  htt...

CAI: Comprehensive Open-Source Framework for AI Safety Testing in Robotics

CAI is an open-source toolkit developed by Alias Robotics for analyzing and testing the safety of robotic systems powered by artificial intelligence. It offers a modular architecture to simulate and evaluate AI behaviors in robotics environments, emphasizing risk detection and automated verification. With customizable scenarios, runtime monitors, and integration plugins, CAI enables developers to assess robot decision-making under diverse, potentially hazardous conditions. The framework supports both offline simulations and real-time operation, facilitating proactive identification of unsafe states, control anomalies, or unintended actions. By equipping robotics teams with automated testing and assessment capabilities, CAI promotes stronger safety assurance practices within the AI robotics development lifecycle.  https://github.com/aliasrobotics/cai

GitHub Elevates Code Provenance to Defend Against Supply Chain Attacks

In a recent discussion at Gartner’s Security & Risk Management Summit, GitHub’s Jennifer Schelkopf highlighted the growing hazard of software supply chain attacks—an issue forecasted to impact nearly half of all organizations by year’s end—as threat actors increasingly target popular open‑source components. She explained that inspecting the origin of code artifacts can significantly disrupt such attacks by eliminating implicit trust in builds. Schelkopf emphasized the use of the Supply-chain Levels for Software Artifacts (SLSA) framework, which provides structured integrity controls through artifact attestation—detailing where, how, and by whom code was built. She pointed to Sigstore and Kubernetes’ OPA Gatekeeper as key tools that automate signing and verification within CI/CD pipelines, ensuring any tampering is caught before deployment. Provenance and attestation shift software development from a trust-based model to a trust-verified one. According to Schelkopf, rigged builds—...

Critical SQL Injection in LlamaIndex (CVE-2025-1793): Exposing LLM‑Driven Backdoor Risks

LlamaIndex, a popular framework for connecting large language models to vector stores, was found to contain a critical SQL injection vulnerability, CVE-2025-1793. This flaw stemmed from unsanitized inputs flowing from LLM-generated prompts into database queries via methods like vector_store.delete() . In a typical scenario, a user’s natural language request could be transformed by the LLM into a malicious SQL command—such as "project:X' OR 1=1 --" —leading to unauthorized data deletion, exposure, or manipulation. The vulnerability affects multiple vector store integrations (ClickHouse, Couchbase, DeepLake, Jaguar, Lantern, Nile, OracleDB, SingleStoreDB) and has been addressed in LlamaIndex version 0.12.28. Patches include input sanitization, though rigor varies across database types. The advisory highlights a broader risk: when LLMs encode backend operations without proper sanitization, they can create hidden attack vectors. Developers are urged to apply the patch and imp...

CSA Playbook Empowers Continuous Red‑Teaming of Agentic AI Systems

The Cloud Security Alliance has released a comprehensive guide designed to help security professionals and AI engineers rigorously test autonomous AI agents deployed in sensitive environments. Unlike traditional generative models, agentic AI systems autonomously plan, decide, and act in real-world or virtual contexts, creating fresh attack surfaces in areas such as orchestration logic, persistent memory, and control flows. The guide identifies twelve specific threat categories—including permission hijacking, oversight bypass, goal manipulation, memory poisoning, multi-agent collusion, and source obfuscation—and offers structured test scenarios, red‑team objectives, evaluation metrics, and mitigation approaches for each. It builds on frameworks like CSA’s MAESTRO and OWASP’s AI Exchange, and recommends both open‑source and commercial tools, emphasizing that red‑teaming must be an ongoing, integrated practice throughout the AI development lifecycle.  https://campustechnology.com/arti...

Public Sector Software Vulnerabilities Persist, Widening Security Gap

Applications developed by public sector organizations suffer from significantly more long-standing security flaws than those in the private sector, with 59 percent of public-sector apps carrying vulnerabilities older than a year compared to 42 percent industry-wide. These enduring flaws, caused by neglected patching and configuration weaknesses, accumulate as "security debt" over decades. With such persistence, public services remain highly exposed to threats, underscoring the urgent need for targeted investment, prioritization of secure-by-default practices, and policy support to bring public-sector software up to the security standards commonly found in the private sector.  https://www.helpnetsecurity.com/2025/06/13/public-sector-software-vulnerabilities/

Azul Enhances Java Security with Precision Runtime Vulnerability Detection

Azul’s Intelligence Cloud now includes a runtime vulnerability detection feature that analyzes class-level execution data to identify actual usage of vulnerable code within Java applications. This method significantly reduces false positives—by up to 99%—compared to traditional tools that flag entire components based solely on SBOM or file presence. Leveraging AI-updated knowledge of CVEs mapped to specific Java classes, the system continuously monitors both current and historical runtime behavior, allowing DevOps teams to efficiently triage and prioritize real security risks with no performance impact. The update empowers organizations to reclaim valuable development time, focus on true threats, and enhance their overall security posture.  https://securitybrief.co.nz/story/azul-boosts-java-security-with-improved-runtime-vulnerability-detection