The study found that identity confusion primarily results from hallucinations rather than model reuse or plagiarism. A structured survey conducted via Credamo showed that identity confusion undermines user trust more severely than logical errors or inconsistencies, especially in critical areas such as education and professional use. Users attributed these trust issues to design flaws, incorrect training data, and perceived plagiarism, highlighting the systemic risks identity confusion poses to the reliability and trustworthiness of LLMs.
Key takeaways:
- Large Language Models (LLMs) are widely used across many domains but face concerns over originality and trustworthiness.
- Identity confusion, where a model misrepresents its origins, is a significant concern, affecting 25.93% of the models studied.
- Identity confusion primarily arises from hallucinations rather than model reuse or plagiarism.
- This confusion significantly undermines user trust, especially in critical areas like education and professional use.