Automated Reasoning checks are particularly useful for applications where factual accuracy and explainability matter. Users can create Automated Reasoning policies that encode their organization’s rules, procedures, and guidelines in a structured, mathematical format. These policies can then be used to verify that the content generated by LLM-powered applications is consistent with those guidelines. The new Automated Reasoning checks safeguard is currently available in preview in Amazon Bedrock Guardrails in the US West (Oregon) AWS Region.
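At runtime, a guardrail configured with an Automated Reasoning policy is invoked like any other Bedrock guardrail. As a minimal sketch, the snippet below uses the existing ApplyGuardrail API via boto3 to validate a piece of model output on its own; the guardrail ID, version, and sample text are hypothetical placeholders, and the Automated Reasoning policy is assumed to have been created and attached to the guardrail beforehand (in the preview, through the Amazon Bedrock console):

```python
import boto3

# Bedrock runtime client in the Region where the preview is available
client = boto3.client("bedrock-runtime", region_name="us-west-2")

# Hypothetical guardrail ID and version; in practice these would point
# at a guardrail configured with an Automated Reasoning policy.
response = client.apply_guardrail(
    guardrailIdentifier="gr-example123",
    guardrailVersion="1",
    source="OUTPUT",  # validate model output rather than user input
    content=[
        {"text": {"text": "Employees may carry over up to 5 vacation days."}}
    ],
)

# 'action' reports whether the guardrail intervened on the content;
# 'assessments' carries the per-policy findings.
print(response["action"])
```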
Key takeaways:
- Amazon is adding Automated Reasoning checks to Amazon Bedrock Guardrails, its safeguard for generative AI applications, to help validate the accuracy of responses generated by large language models and prevent factual errors.
- Automated Reasoning checks use mathematical, logic-based verification to check the information generated by a model, ensuring outputs align with known facts and aren’t based on fabricated or inconsistent data.
- Users can create Automated Reasoning policies that encode their organization’s rules, procedures, and guidelines in a structured, mathematical format, which can then be used to verify that the content generated by their applications is consistent with those guidelines (see the conceptual sketch after this list).
- The new Automated Reasoning checks safeguard is currently available in preview in Amazon Bedrock Guardrails in the US West (Oregon) AWS Region.
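Amazon has not published the internals of these checks, but the core idea of logic-based validation can be illustrated with a small, self-contained sketch. The example below uses the open-source Z3 solver (`pip install z3-solver`) to encode a hypothetical policy rule as a logical formula and test whether a claim taken from a model response is consistent with it; the rule, the claim, and the variable names are illustrative assumptions, not Amazon’s implementation:

```python
from z3 import Bool, Implies, Not, Solver, sat

# Hypothetical policy rule, encoded as logic:
# "Full-time employees are eligible for remote work."
full_time = Bool("full_time")
remote_eligible = Bool("remote_eligible")
policy_rule = Implies(full_time, remote_eligible)

# Claim extracted from a model response:
# "A full-time employee is not eligible for remote work."
claim = [full_time, Not(remote_eligible)]

solver = Solver()
solver.add(policy_rule)
solver.add(claim)

# If the rule plus the claim are unsatisfiable, the response
# contradicts the policy and would be flagged.
if solver.check() == sat:
    print("Response is consistent with the policy")
else:
    print("Response contradicts the policy")
```

Because the verdict comes from a satisfiability check rather than another model’s judgment, a failing result can point to the specific rule that was violated, which is the property that makes this style of validation explainable.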