Vera's platform screens model inputs for risk, blocking or transforming requests that might contain sensitive information, and constrains what models can "say" in response, giving companies greater control over their models' behavior. The article cautions, however, that content moderation models carry biases of their own, which raises questions about how reliable Vera's approach will prove in practice. Despite those potential shortcomings, and competition from larger tech companies, Vera claims to address a broad range of generative AI threats and already has a handful of customers.
Key takeaways:
- Liz O’Sullivan, a member of the National AI Advisory Committee, co-founded Vera, a startup that is developing a toolkit for companies to establish and enforce acceptable use policies for generative AI models.
- Vera recently closed a $2.7 million funding round, bringing its total raised to $3.3 million. The funds will be used for team expansion, R&D, and scaling enterprise deployments.
- Vera's platform identifies risks in model inputs and can block, redact, or transform requests that might contain sensitive information. It also places constraints on what models can "say" in response to prompts, giving companies greater control over their models' behavior (a minimal sketch of this input/output gating pattern follows this list).
- Despite potential shortcomings and competition from companies like Nvidia, Salesforce, and Microsoft, Vera's comprehensive approach to tackling a range of generative AI threats could make it attractive to companies seeking a one-stop solution for content moderation and defense against attacks on AI models.
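
Vera has not published its implementation, but the input/output gating described in the takeaways maps onto a familiar guardrail pattern: a proxy that scans prompts before they reach the model and filters responses before they reach the user. The sketch below illustrates that pattern only; every name in it (`SENSITIVE_PATTERNS`, `guarded_call`, the regexes, and the policy rules) is a hypothetical stand-in, not Vera's API.

```python
import re
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical patterns for sensitive data. A production policy engine would
# use far richer detection (e.g., ML-based PII and secret classifiers).
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

@dataclass
class PolicyDecision:
    action: str                      # "allow", "redact", or "block"
    text: str                        # possibly transformed prompt or response
    triggered: list = field(default_factory=list)

def check_input(prompt: str) -> PolicyDecision:
    """Scan a prompt before it reaches the model; redact sensitive spans."""
    triggered, redacted = [], prompt
    for name, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(redacted):
            triggered.append(name)
            redacted = pattern.sub(f"[REDACTED:{name}]", redacted)
    return PolicyDecision("redact" if triggered else "allow", redacted, triggered)

def check_output(response: str, banned_topics: list[str]) -> PolicyDecision:
    """Constrain what the model may 'say': withhold responses on banned topics."""
    hits = [t for t in banned_topics if t.lower() in response.lower()]
    if hits:
        return PolicyDecision("block", "[response withheld by policy]", hits)
    return PolicyDecision("allow", response)

def guarded_call(prompt: str, model: Callable[[str], str],
                 banned_topics: list[str]) -> str:
    """Wrap a model call with input redaction and output filtering."""
    inbound = check_input(prompt)                 # redact before forwarding
    outbound = check_output(model(inbound.text), banned_topics)
    return outbound.text

if __name__ == "__main__":
    fake_model = lambda p: f"Echo: {p}"           # stand-in for a real LLM call
    print(guarded_call("My SSN is 123-45-6789", fake_model, banned_topics=["weapons"]))
    # -> Echo: My SSN is [REDACTED:ssn]
```

In this sketch, inputs are only redacted and outputs only blocked; a real acceptable-use toolkit would presumably support configurable actions per policy, which is what the "block, redact, or transform" framing above suggests.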