China has a new plan for judging the safety of generative AI—and it’s packed with details

Oct 18, 2023 - technologyreview.com
China's National Information Security Standardization Technical Committee (TC260) has released a draft document outlining proposed rules for determining whether a generative AI model is problematic. The document provides detailed criteria for when a data source should be banned from training generative AI and specifies how many keywords and sample questions should be prepared to test a model. The standards are not laws, and there are no penalties for non-compliance, but they often feed into future laws or work alongside them. The standards also receive input from experts hired by tech companies, which means they reflect how Chinese tech companies want their products to be regulated.

The proposed standards include rules on training, the scale of moderation, prohibited content, and more sophisticated and subtle censorship. For instance, companies should diversify their training materials and assess their quality. If over 5% of the data from one source is considered "illegal and negative information," it should be blacklisted for future training. Companies should also hire moderators to improve the quality of generated content based on national policies and third-party complaints. The document also asks that AI models not make their moderation or censorship too obvious.

Key takeaways:

  • The TC260 draft sets out detailed, quantitative criteria for evaluating generative AI models, including when a data source should be banned from training and how many keywords and sample questions should be prepared for testing.
  • The document also clarifies what companies should consider a "safety risk" in AI models, addressing both universal concerns like algorithmic biases and content that's sensitive in the Chinese context. It provides specific rules for training, moderation, prohibited content, and subtle censorship.
  • Although the TC260 standards are not laws and there are no penalties for non-compliance, they often feed into future laws or work alongside them. The standards are shaped by experts hired by tech companies, with companies like Huawei, Alibaba, and Tencent having been heavily influential in past TC260 standards.
  • The Chinese AI safety standards could have a significant impact on the global AI industry, potentially providing technical details for general content moderation or signaling the beginning of new censorship regimes. The standards are open for feedback until October 25.
