The article also presents preliminary data showing that Deterministic Quoting (DQ) does not degrade the overall quality of answers provided by LLMs. It suggests that DQ can be particularly beneficial in fields where hallucinations are problematic, such as healthcare. However, the authors acknowledge that while DQ significantly improves the reliability of LLMs, work remains before these systems can be considered safe for widespread use in healthcare.
Key takeaways:
- Invetech is developing a technique called "Deterministic Quoting" to ensure that quotations from source material used by Large Language Models (LLMs) are verbatim and not hallucinated, thus increasing their reliability in fields like healthcare.
- Deterministic Quoting works by ensuring that the text displayed on a blue background has never passed through an LLM or any other non-deterministic AI model, guaranteeing that it is free of hallucinations.
- Even with a basic implementation, Deterministic Quoting shows significant improvement over the current state-of-the-art, and future versions can provide further improvements to the quality of answers and flexibility when parsing a wide range of input documentation.
- While Deterministic Quoting is beneficial in healthcare, it can also be applied in other domains, such as systems that work with legislation, financial regulation, or works of literature.
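The mechanism behind the blue-background guarantee can be illustrated with a small sketch: instead of quoting source text itself, the LLM emits placeholder tokens referencing chunks of the source document, and a deterministic display layer substitutes the verbatim text. This is a minimal illustration of the idea, not the article's implementation; the placeholder syntax, chunk IDs, and function names below are assumptions.

```python
import re

# Verbatim source chunks, stored outside the LLM pipeline.
# Chunk IDs and contents here are illustrative, not from the article.
SOURCE_CHUNKS = {
    "doc1:c17": "The patient reported chest pain radiating to the left arm.",
    "doc1:c18": "Aspirin 300 mg was administered at 14:05.",
}

# Hypothetical placeholder syntax the LLM is prompted to emit, e.g. {quote:doc1:c17}
QUOTE_PATTERN = re.compile(r"\{quote:([^}]+)\}")

def render_answer(llm_output: str) -> str:
    """Replace quote placeholders with verbatim source text.

    The substituted text never passed through the LLM, so it cannot be
    hallucinated. An unknown chunk ID is surfaced rather than guessed.
    """
    def substitute(match: re.Match) -> str:
        text = SOURCE_CHUNKS.get(match.group(1))
        if text is None:
            return "[quote unavailable]"  # fail closed, never fabricate
        return f'"{text}"'  # in the article's UI this text gets the blue background

    return QUOTE_PATTERN.sub(substitute, llm_output)

# The LLM answers using placeholders instead of quoting directly:
llm_output = "The record notes {quote:doc1:c17}; treatment followed: {quote:doc1:c18}"
print(render_answer(llm_output))
```

Because the substitution step is ordinary string lookup with no model in the loop, the quoted spans are deterministic by construction, which is the property the article's blue-background convention communicates to the reader.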