Vera wants to use AI to cull generative models' worst behaviors | TechCrunch

Oct 05, 2023 - news.bensbites.co
Liz O'Sullivan, a member of the National AI Advisory Committee, has co-founded a startup called Vera that aims to make generative AI safer. Vera has developed a toolkit that allows companies to establish "acceptable use policies" for generative AI models and enforce these policies across open source and custom models. The company recently raised $2.7 million in a funding round, which will be used to expand its team and scale enterprise deployments.

Vera's platform identifies risks in model inputs and can block or transform requests that might contain sensitive information. It also places constraints on what models can "say" in response to prompts, giving companies greater control over their models' behavior. However, the article notes that content moderation models are prone to bias and questions how reliably Vera's approach will work in practice. Despite these potential shortcomings and competition from larger tech companies, Vera claims to address a broad range of generative AI threats and already has a handful of customers.
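The article does not detail how Vera's toolkit works internally, but the block/redact/transform pattern it describes is a common guardrail design. The sketch below is a minimal, hypothetical illustration of that general idea using simple pattern rules; the policy names, patterns, and function here are invented for the example and are not Vera's API.

```python
import re

# Hypothetical "acceptable use policy" filter for prompts.
# Patterns and policy names are invented for illustration; they are not Vera's rules.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

BLOCKED_TOPICS = ("password dump", "credential list")  # invented examples


def screen_prompt(prompt: str) -> tuple[str, str]:
    """Return (action, text): block the request, redact sensitive spans, or allow it."""
    lowered = prompt.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return "block", ""
    redacted = prompt
    for label, pattern in SENSITIVE_PATTERNS.items():
        redacted = pattern.sub(f"[{label} removed]", redacted)
    action = "redact" if redacted != prompt else "allow"
    return action, redacted


if __name__ == "__main__":
    action, text = screen_prompt("Summarize the email from jane.doe@example.com")
    print(action, text)  # -> redact Summarize the email from [email removed]
```

A production guardrail would go further than regex matching (for example, classifier models for risky intent and constraints on model outputs), but the block/redact/allow decision shown here captures the basic control point the article describes.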

Key takeaways:

  • Liz O’Sullivan, a member of the National AI Advisory Committee, co-founded Vera, a startup that is developing a toolkit for companies to establish and enforce acceptable use policies for generative AI models.
  • Vera recently closed a $2.7 million funding round, bringing its total raised to $3.3 million. The funds will be used for team expansion, R&D, and scaling enterprise deployments.
  • Vera's platform identifies risks in model inputs and can block, redact, or transform requests that might contain sensitive information. It also places constraints on what models can "say" in response to prompts, giving companies greater control over their models' behavior.
  • Despite potential shortcomings and competition from companies like Nvidia, Salesforce, and Microsoft, Vera's comprehensive approach to tackling a range of generative AI threats could make it attractive to companies seeking a one-stop solution for content moderation and defense against attacks on AI models.