Ecological Empathy in the East and West: Insights from AEGIS Workshops, Tokyo, June 2024

– Watch a short video of the Japan workshops.

Konnichiwa! The Project AEGIS team is in downtown Tokyo, kindly hosted by our partners at the National Institute of Informatics (NII). We are here to explore regional cultural perspectives on technology at the intersection of emulated empathy and human-AI partnering.

The central activity for this stage of the project is a pair of all-day workshops at NII headquarters with a diverse, multidisciplinary mix of attendees. Here’s an overview of the workshops.

AEGIS Japan workshop on empathy and AI ethics

Sumiko Shimo shares cultural insights with workshop attendees.

Workshop Context

There is much to learn from Japan’s relationship with technology. There are of course social and philosophical differences between the so-called Western nations and Japan and its “ethically-aligned” neighbours (e.g. Taiwan). But more importantly, Japan has a particularly interesting history, shared set of attitudes, and governance approach regarding computers, robots, and human interaction with technology.

At the risk of massive oversimplification, here are some of the main considerations that we had in mind, going into the start of the workshops:

  1. In Japan there is much greater interest in, and acceptance of, the role of technology in society.

  2. The experience of Modernity between East and West is markedly different.

  3. The science and business of AI, empathy and related topics are predominantly Western. Empathic AI models, for instance, are informed by psychological models that may be built on data from WEIRD (Western, educated, industrialised, rich, and democratic) researchers and participants.

  4. If there is one key difference between Western and Eastern societal philosophy and attitudes, it is arguably individualism versus collectivism, respectively.

For more on this background context, you can dig into earlier work by the Emotional AI Lab based on previous UK-Japan research (in particular, see McStay, A. (2021). Emotional AI, Ethics, and Japanese Spice: Contributing Community, Wholeness, Sincerity, and Heart. Philosophy & Technology.)

Workshop Structure

Both sessions were full days (9am–5pm), each attended by ~16 guests from academia, the tech industry, law and policymaking, civil society organisations, and consumer protection groups. Both days followed the same format, with some updated prompts and content on the second day based on outputs from the first. Our team and selected guests provided contextual talks about related work and societal issues, and in between them we gathered attendee insights from small-group sessions on the following themes:

  1. Use cases: Group discussion on examples of use of these technologies.

  2. Global versus local ethics: Is global governance too Western, and what can the world learn from East Asia?

  3. Ethical issues: What ethical considerations arise from the use cases we have listed?

  4. That’s great, but how do we apply it?: What guidance and restrictions might be included in an ethical standard for these technologies?

Special thanks to our guest speakers: Frederic Andres, Sumiko Shimo, Amy Kato, Chih-Hsing Ho, Takashi Egawa.

AEGIS Japan workshops on empathy and AI Ethics

Insights

The top line: an ecological approach

If we attempt to sum up the workshop outputs in a single connecting theme, we could call it an “ecological” approach. The workshops repeatedly raised the need for both empathic GPAI and their corresponding standards to respect, account for, and adapt to the context of the human affected by the system (e.g. the user). There are many dimensions to this “human context” and many ways in which this “ecological” approach can be enacted, some of which are listed in more detail below. None of this should be surprising. However, our participants illuminated some instructive and challenging concepts that underpin this overarching theme of ecology.

Ecological and aesthetic empathy: As we will discuss in more detail in our forthcoming white paper, the workshops explored ecological empathy, which repositions humans and their emotional experience into a wider frame that includes the properties and “social life” of non-human things, both organic and inorganic. And we were taken a step further into the concept of aesthetic empathy, in which we feel into, and emotionally engage with, characters, artworks and other non-human entities, including AI.

Expanding on the ecological approach, we unearthed two related processes that stood out as connecting with many of the insights below. They highlight key considerations for systems developers or those assessing them:

  1. Need for customisability (e.g. of the system’s empathic functions and communicative expressions), including scope for reducing or removing the system’s expressiveness or empathic features; and

  2. Need for interoperability (e.g. of the AI systems themselves or the ethical standards concerning them).
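To make the customisability point above more concrete for system developers, it could be as simple as exposing user-adjustable controls over a system’s empathic behaviour. A minimal sketch in Python, where every name is hypothetical and purely illustrative (not from any real product or standard):

```python
from dataclasses import dataclass

# Hypothetical user-facing settings for an empathic GPAI system.
# All names here are illustrative, not drawn from any real API.

@dataclass
class EmpathySettings:
    # 0.0 = fully "robotic" tone; 1.0 = maximally expressive/empathic
    expressiveness: float = 0.5
    # Allow the user to switch empathic features off entirely
    empathy_enabled: bool = True
    # Locale code, e.g. for culturally sensitive content filtering
    locale: str = "ja-JP"

    def effective_expressiveness(self) -> float:
        """Return the expressiveness level the system should actually use."""
        if not self.empathy_enabled:
            return 0.0
        # Clamp to the valid range in case of out-of-range input
        return max(0.0, min(1.0, self.expressiveness))

# A user who prefers a succinct, mechanistic assistant:
settings = EmpathySettings(expressiveness=0.1)
print(settings.effective_expressiveness())  # 0.1

# The same user later disables empathic behaviour altogether:
settings.empathy_enabled = False
print(settings.effective_expressiveness())  # 0.0
```

The key design choice reflected here is that reduced or zero expressiveness is a first-class, user-selectable state rather than an afterthought.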

General insights

We found consensus on the following points:

  1. There are significant cultural differences between the “West” and Japan (and the “East”), and further differences between East Asian countries, and indeed within Japan. This cultural variation can be subtle and sensitive.

  2. Current global governance (e.g. standards) is predominantly Western and lacking sensitivity or adaptability for regional factors.

  3. Empathy requires respect for the subjective world of the other person (e.g. system user). This includes their cultural background, how they view other beings and objects (e.g. applying animism to non-living things), and their place in their community.

  4. There was a challenge to the need for empathy in the first place, when:

    1. Reduced expressiveness or empathic behaviour may be preferable (compare the emotive, Her-like character of OpenAI’s GPT-4o with the more robotic tone of Google’s Project Astra), or

    2. Other social, linguistic, and similar tools could do the job, such as politeness (an AI, like a shopkeeper, could be polite without any attempt to empathise).

  5. Comfort and willingness for intimacy and disclosure in interaction differ. This led to some scepticism about the scope for automated meaningful empathy.

  6. To humanise or animise a technological system is much more normal and acceptable in Japan.

  7. Japan has a different tendency towards trust. Broadly, there is greater trust in government, large corporations, and technologies. This trust interacts with high levels of respect, dignity, and law-abidingness, and with the scale of potential fallout when that trust is broken.

    1. This trust may need reconsideration in the face of the increasing presence of Western media, technology, and business in Eastern life.

  8. Western approaches to ethical factors such as data privacy & protection, accountability, and transparency, should be adapted to account for Eastern attitudes, such as for a more ecocentric approach that considers wider society and the environment as key stakeholders (in addition to the individual).

  9. General-purpose AIs (GPAIs) trained on Western data corpora can expose Eastern users to taboo subjects, so some level of regional filtering may be needed.

  10. Deception (as a key ethical issue for empathic GPAIs – paper coming soon) is normal to some extent in all regions, and may be more acceptable – desirable, even – in the East in some contexts, such as where politeness or flattery are expected in the human-machine dialogue.

  11. “Are you okay with this? …Are you still okay?” – Empathic GPAI have potential for long-term and evolving impacts. Thus, meaningful consent and other arrangements will need to be updated at reasonable frequencies, on an ongoing basis.

  12. Empathic interaction produces a psychological “energy cost” for the participants. As such, it may be preferable to design the system to communicate in a succinct, mechanistic, robotic fashion, rather than one that is animated and gregarious (and empathic). Excessive use could lead to empathy fatigue.

  13. A great deal of the meaning exchanged in real-world interaction is conveyed through metaphor, figurative language, abstract gestures, and indeed what’s not said. This is already a challenge between humans (e.g. of different backgrounds), and it is surely largely missing from our current AI systems.

Attendees of Workshop 1

Standards and empathic GPAI systems

Zooming in now on standards development, particularly the emerging body of ethical standards and those covering empathic technologies, the outstanding insights were that:

  1. The typical timing of standards development meetings (biased towards Western working time) makes it very difficult for people in Eastern regions to contribute.

  2. Even if the time zone is accommodating, there are language and social barriers that can make it hard for Japanese people to contribute to the development process. For instance, a strong desire for harmony and respect for politeness can lead to a Japanese person remaining quiet on a working group call and missing their chance to speak.

  3. A good global standard may need to include cultural sensitivity and “cultural explainability”, whereby the system owner should assess and justify how their designs account for cultural differences in the audiences they wish to sell to (e.g. Eastern users).

  4. Greater consideration of, and adaptation to, different cultures in system design may engender trade-offs against increased expenses for research, functionality and so on.

  5. AI literacy of potentially affected stakeholders (e.g. users) should be a priority of systems developers and corresponding standards. This should account for cultural factors, too.

  6. Where interoperability is traditionally a core feature of effective and usable standards, there is a related need for these new ethical standards to establish an ethical (and cultural, social, etc.) “interoperability”. This could manifest in features such as shared taxonomies, universal models, general principles, and so on, but is likely to be limited in scope. 

  7. While the whole planet may never agree on truly universal ethical standards, we should strive for a high minimum threshold of restrictions and good practices, and perhaps then provide scope for contextual (e.g. national) add-ons to adapt these global soft-law mechanisms.

  8. Ethical standards (and other governance mechanisms) may include assessment and oversight requirements – such as human-in-the-loop monitoring, or third-party certification – but these are not equally valued between Eastern and Western cultures, where there are differences in norms regarding trust, respect, dignity and suchlike. Instead, there can be a preference for trusting the system developer to respect laws and standards without the shame and intrusion of an overseer.

Next Steps

There’s a lot to digest here! We have already begun drafting a white paper, which will explore the most salient insights in more detail. And we will feed them into the IEEE P7014.1 working group, which will at last kick off later this month.


If you’re interested in contributing to the development of the IEEE P7014.1 global standard for ethics in empathic GPAI, please reach out. More information is available on the group’s website.