Despite these concerns, Dr Green acknowledged the potential benefits of AI in easing administrative work and enabling more frequent revisiting of care plans. Other AI technologies already in use in the health and care sector include PainChek, a phone app that uses AI-trained facial recognition to identify pain, and Oxevision, a system used by NHS mental health trusts to monitor patient activity. However, care managers fear that using AI tools could lead to inadvertent rule-breaking and the loss of their licences. A meeting of social care organisations has been convened to discuss the responsible use of generative AI, with the aim of producing a good practice guide within six months.
Key takeaways:
- Researchers at the University of Oxford have raised concerns about the use of unregulated AI bots in social care, citing potential risks to patient confidentiality and the possibility of carers acting on faulty or biased information.
- Despite these concerns, the researchers also acknowledge potential benefits of AI in social care, such as reducing administrative work and allowing for more frequent revisiting of care plans.
- Several AI-based technologies are already being used in health and care sectors, including PainChek, Oxevision, and Sentai, which use AI for tasks like identifying pain through facial recognition, monitoring patient activity, and reminding patients to take medication.
- A meeting of 30 social care organisations was convened at Reuben College to discuss the responsible use of generative AI in social care, with the intention of creating a good practice guide within six months and working with the Care Quality Commission and the Department of Health and Social Care to develop enforceable guidelines.