The article also recommends setting internal safety benchmarks in the absence of official regulations. This could include creating a roadmap with milestones that demonstrate trustworthiness, such as establishing a quality assurance framework, adopting a defined standard of encryption, or completing a target number of safety tests. Publishing progress toward these benchmarks builds credibility and trust with potential customers.
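As an illustration of what publishing such a roadmap might look like in practice, the minimal Python sketch below models a handful of hypothetical safety milestones and renders a progress summary that could be posted on a public trust page. The milestone names, dates, and statuses are assumptions made for this example, not recommendations from the article.

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class Milestone:
    """One publicly trackable safety benchmark."""
    name: str
    target_date: date
    completed: bool = False


# Hypothetical roadmap entries -- names and dates are illustrative only.
roadmap = [
    Milestone("Establish QA framework for model outputs", date(2024, 3, 1), completed=True),
    Milestone("Encrypt customer data at rest (AES-256)", date(2024, 6, 1), completed=True),
    Milestone("Complete 500 adversarial red-team tests", date(2024, 9, 1)),
    Milestone("Obtain ISO/IEC 27001 certification", date(2025, 1, 1)),
]


def progress_report(milestones: list[Milestone]) -> str:
    """Render a plain-text summary suitable for publishing on a trust page."""
    done = sum(m.completed for m in milestones)
    lines = [f"Safety roadmap: {done}/{len(milestones)} milestones complete"]
    for m in milestones:
        status = "done" if m.completed else f"target {m.target_date:%Y-%m-%d}"
        lines.append(f"  [{'x' if m.completed else ' '}] {m.name} ({status})")
    return "\n".join(lines)


print(progress_report(roadmap))
```

The point of the sketch is simply that the roadmap is structured and dated, so each published update shows concrete, verifiable movement toward the stated benchmarks.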
Key takeaways:
- Large language models (LLMs) are transforming industries, but adopting them requires a high level of trust because of risks such as fabricated information (hallucinations) and data security concerns.
- While there are no certifications specific to data security in generative AI, obtaining adjacent credentials such as SOC 2 compliance, ISO/IEC 27001 certification, and GDPR compliance can boost credibility.
- AI organizations can contribute to the formation of regulations by collaborating with local politicians and committee members, demonstrating a commitment to safety and ethical practices.
- In the absence of official regulations, AI organizations should set their own safety benchmarks and publicly share their progress to build trust with potential customers.