
Developing international policy standards for ‘automated empathy’ AI

We may marvel at the capacity of AI to create things for us, but we are still in the early days of the technology.

Current general-purpose artificial intelligence products are often marketed as ‘empathic partners’, ‘personal AI’, ‘co-pilots’, ‘assistants’, and similar phrasings for ‘human-AI partnering’. This ‘emulated empathy’ is used in artificial intelligence systems for human-AI partnerships in contexts ranging from work, therapy, education, life coaching, and legal problems to fitness, entertainment, and more.

“These systems raise ethical questions that are global in nature and benefit from exploration of ethical thought from around the world. Some ethical questions, such as transparency, accountability, bias and fairness, are familiar, but others are specific to these systems, including psychological interactions and dependencies, child appropriateness, fiduciary issues, animism, and manipulation through partnerships with general-purpose artificial intelligence systems.”
Andrew McStay, Professor of Technology & Society at Bangor University

The research is funded by UKRI through Responsible AI UK.

The resulting IEEE (Institute of Electrical and Electronics Engineers) standard will define ethical considerations, detail good practices, and augment and complement international human rights and regional law.

Professor McStay added,

“We believe that the use of human-state measurement to engage with qualitative dimensions of human life is still in its infancy. Emotional AI, and wider automated human-state measuring, and use of large language models to gauge theory of mind, requires ongoing social, cultural, legal and ethical scrutiny.”
