Betting Against the Models: Rethinking AI Security Strategies
In his recent article, Shrivu Shankar critiques the emerging cybersecurity market focused on "Security for AI" startups, arguing that many are built on a flawed premise: betting against the rapid evolution of foundational AI models. He identifies two main predictions that he believes are misguided.
The first is the notion that companies can build durable businesses by patching the current, transient weaknesses of foundational models. Shankar points out that defense is highly centralized around a few foundational model providers, and third-party tools will face an unwinnable battle against a constantly moving baseline, leading to a rising tide of false positives. He suggests that the market for patching model flaws is a short-term arbitrage opportunity, not a long-term investment.
The second prediction is that AI agents can be governed with the same restrictive principles used for traditional software. Shankar argues that an agent's utility is directly proportional to the context it is given; a heavily restricted agent is a useless agent. He believes that attempting to manually define granular, policy-based guardrails for every possible context is an unwinnable battle against complexity.
Shankar concludes that the startups that will thrive are those that stop betting against the models and start building solutions for the durable, contextual challenges of our rapidly approaching agentic future.