AI Localisation and Extreme Diversity: Insights from AEGIS Workshop, Jakarta, Jan 2025
Insights from a cross-disciplinary workshop in Jakarta, Indonesia, exploring Indonesian perspectives on AI companions and emulated empathy in human-AI partnerships.
Exploring Indonesian perspectives on automated empathy in Jakarta, Java, Indonesia
Indonesia is a world within a country! “One of the most diverse countries in the world”, it is an enormous source of both opportunity and complexity. The AEGIS team is in Jakarta, capital of Indonesia, to hear about this country’s extraordinary position in the emerging paradigm of human-AI partnering and automated empathy.
Watch the videos
Watch some highlights from the AEGIS workshop in downtown Jakarta.
A brief highlight from our interview with Arif Perdana from Monash University, Indonesia. Arif talks about the uniqueness of Indonesian culture in the context of AI partnerships.
Workshop context
The workshop was a collaboration with Action Lab Indonesia and the Data and Democracy Research Hub at Monash University, Indonesia, led by Arif Perdana, and held at the JS Luwansa Hotel and Convention Center in downtown Jakarta. It was attended by 22 people, bringing expertise from academia, the tech industry, government, journalism and civil society organisations. The theme was “to explore ethics of emulated empathy in human-AI partnerships”, particularly examining how the forthcoming IEEE P7014.1 recommended practice could be informed by Indonesian perspectives.
Hearing participant input on the workshop issues
Besides the AEGIS team, we were treated to rich presentations from:
Ayu Purwarianti, from Institut Teknologi Bandung (on Technical challenges in developing culturally aware empathic systems).
Sinta Dewi, from Universitas Padjadjaran, Dr. Ir. Lukas, from Unika Atma Jaya, NLP Communities/AI Society, and Fahrizal Lukman Budiono, from Ministry of Communication & Digital Affairs (Komdigi) (on The balance between innovation and regulation in empathic AI).
Derry Wijaya, from Monash University, Indonesia (on Cross-cultural challenges in interpreting and responding to emotions)
Irvan Bastian Arief, from tiket.com, and Harun Mahbub Billah, from Liputan6.com (on Future trends and potential developments in commercial emotive AI).
We were treated to excellent speakers
Workshop insights
The top line: Indonesia demands hyperlocalised AI
Though a single country of 280 million people, Indonesia is strewn across some 17,500 islands in a chain stretching over 5,000 kilometres. More than 600 ethnic groups call Indonesia home, conversing in over 700 languages, plus many more dialects. While Islam is the leading religion, it is one of six officially recognised religions, alongside hundreds of indigenous spiritualities. This is a country as diverse as a continent: cultural perspectives and norms, and their consequent sensitivities, vary dramatically along vectors such as faith, social hierarchy and gender identity.
For any technology as customisable and personalisable as AI, such a sweeping mix of cultures is a major challenge. Regulation and design could restrict AI from offending some sensitivities, as already happens (e.g. with cursing and hate speech), but at what point do such limitations impede reasonable access and functionality? The alternative route is variation and adaptation: accommodating the needs of different groups and individuals, or tailoring by use case (e.g. medical, education). But this would require sufficient customisation of model training, user access and other features. That would not be simple, especially with the leading AIs already trained largely on Western data, with poor representation of Eastern nuance, let alone of small cultural groups with low digital access or literacy.
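To make the “variation and adaptation” route a little more concrete, here is a minimal, purely illustrative Python sketch of how a deployment might select a locale profile by region and use case. Every name and value here is hypothetical, not drawn from any real system discussed at the workshop, and a real system for Indonesia would need vastly finer granularity than a handful of profiles:

```python
from dataclasses import dataclass

# Hypothetical sketch: a locale profile that an empathic AI system could
# load per user region or use case. All names are illustrative only.
@dataclass
class LocaleProfile:
    language: str                    # e.g. "id" (Bahasa Indonesia), "jv" (Javanese)
    restricted_topics: tuple = ()    # topics the system should decline or soften
    honorific_style: str = "neutral" # politeness register expected by the group
    use_case: str = "general"        # e.g. "medical", "education"

# A registry keyed by (region, use_case); real coverage of Indonesia's
# hundreds of cultural groups would require far more than this.
PROFILES = {
    ("jakarta", "general"): LocaleProfile("id", ("blasphemy",), "formal"),
    ("bali", "education"): LocaleProfile("id", ("blasphemy",), "formal", "education"),
}

def select_profile(region: str, use_case: str = "general") -> LocaleProfile:
    """Fall back to a conservative default when no profile matches."""
    default = LocaleProfile("id", ("blasphemy", "hate_speech"), "formal")
    return PROFILES.get((region, use_case), default)
```

The conservative fallback reflects the tension in the paragraph above: when a group is not represented in the training or configuration data, the safest default is restriction, which itself risks impeding reasonable access.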
Our Indonesian participants described their fellow nationals as exceptionally keen to innovate and to adopt new tools and practices, with usage of AI companions and ghostbots relatively high. “We love technology,” one said. “We embrace it. But we are not cautious.” That overconfidence is apparent in a global survey, the IPSOS AI Monitor 2024, which reported that Indonesians topped the world in confidence of having a “good understanding of what artificial intelligence is”, while simultaneously ranking near the top for trusting that “companies that use artificial intelligence will protect my personal data” and trusting “artificial intelligence to not discriminate or show bias towards any group of people”. Our expert participants painted a picture of Indonesians as early adopters, wired for change, yet vulnerable to misaligned influences from powerful new technologies.
General insights
Digital and AI literacy is generally low, despite the enthusiastic adoption of new technologies mentioned above. This literacy issue may extend to tech policymakers as well.
Anthropomorphism was raised frequently in our conversations, in connection both to Eastern animism and to the above-mentioned vulnerability to emotional manipulation as AI becomes more empathic. Opportunity lies in sensitisation to values-based design, where subjectivity is present in objects.
The old trade-off between regulation and innovation deserves particular attention in the Indonesian context. Questions were raised of strong versus weak regulation, to facilitate the exceptional growth that is desired in the country while doing so safely and sustainably (not unlike UK plans for economic and AI growth).
Indonesia has a precedent of adopting international policies when they are suitable, rather than writing new content from scratch. For instance, GDPR greatly influenced Indonesia’s Personal Data Protection Law (2022). So for AI, there is likely a readiness to onboard good policy and standards from outside the country (if available), with IEEE P7014.1 receiving explicit mention.
– Note: For more on Indonesia’s national tech objectives, see Visi Indonesia Digital 2045.
Just a few days ago, Indonesia joined the BRICS countries as a full member. This raises questions of how the country will adopt and develop tech policy, innovation and entrepreneurship.
Presentation content ranged between technical and social
The bottom line: what should be done?
These workshop insights point to an urgent need for both technology regulators and producers to accommodate the nuanced needs of the cultural and regional groups that exist across this sprawling archipelago. How can the global community help to enable this? It seems to follow that we should support decentralisation of AI development, with deep configuration to local contexts, while minimising intrusive personalisation methods. This is easier said than done, but the tenacity is present in Indonesia.
Increasingly intimate and potent human-machine partnerships are encountering an Indonesia full of energy and dynamism, keen for innovation, open to change, but rife with the complexities that come from serving such a diverse population.
Most of the participants with the AEGIS team
Ecological Empathy in the East and West: Insights from AEGIS Workshops, Tokyo, June 2024
Konnichiwa! The Project AEGIS team is in downtown Tokyo, kindly hosted by partners at the National Institute of Informatics (NII). We are here to explore regional cultural perspectives and ideas regarding technology at the intersection of emulated empathy and human-AI partnering.
The central activity for this stage of the project is a pair of all-day workshops at NII headquarters with a diverse, multidisciplinary mix of attendees. Here’s an overview of workshops.
– Watch a short video of the Japan workshops.
Sumiko Shimo shares cultural insights with workshop attendees.
Workshop Context
There is much to learn from Japan’s relationship with technology. There are, of course, social and philosophical differences between the so-called Western nations and Japan and its “ethically-aligned” neighbours (e.g. Taiwan). More than that, Japan has a particularly interesting history, shared attitude, and governance approach regarding computers, robots, and human interaction with technology.
At the risk of massive oversimplification, here are some of the main considerations that we had in mind, going into the start of the workshops:
In Japan there is much greater interest in, and acceptance of, the role of technology in society.
The experience of Modernity between East and West is markedly different.
The science and business of AI, empathy and related topics are predominantly Western. Empathic AI models, for instance, are informed by psychological models that may be built on data from WEIRD (Western, educated, industrialised, rich, and democratic) researchers and participants.
If there is a key difference between Western vs Eastern societal philosophy and attitudes, it could be individualism vs collectivism, respectively.
For more on this background context, you can dig into earlier work by the Emotional AI Lab based on previous UK-Japan research here (in particular, see McStay, A. (2021) Emotional AI, Ethics, and Japanese Spice: Contributing Community, Wholeness, Sincerity, and Heart, Philosophy & Technology).
Workshop Structure
Both sessions were 9–5 full days, each attended by ~16 guests from academia, the tech industry, law and policymaking, civil society organisations, and consumer protection groups. Both days followed the same format, with some updated prompts and content on the second day based on outputs from the first. Our team and selected guests provided contextual talks about related work and societal issues, and in between them we gathered attendee insights from small-group sessions on the following themes:
Use cases: Group discussion on examples of use of these technologies.
Global versus local ethics: Is global governance too Western and what globally can be learned from E. Asia?
Ethical issues: What ethical considerations arise from the use cases we have listed?
That’s great, but how do we apply it?: What guidance and restrictions might be included in an ethical standard for these technologies?
Special thanks to our guest speakers: Frederic Andres, Sumiko Shimo, Amy Kato, Chih-Hsing Ho, Takashi Egawa.
Insights
The top line: an ecological approach
If we attempt to sum up the workshop outputs in a single connecting theme, we could call it an “ecological” approach. The workshops repeatedly raised a need for both empathic GPAI (general-purpose AI) systems and their corresponding standards to respect, account for, and adapt to the context of the human affected by the system (e.g. the user). There are many dimensions to this “human context” and many ways in which this “ecological” approach can be enacted, some of which are listed in more detail below. None of this should be surprising, but our participants illuminated some instructive and challenging concepts that underpin this overarching theme of ecology.
Ecological and aesthetic empathy: As we will discuss in more detail in our forthcoming white paper, the workshops explored ecological empathy, which repositions humans and their emotional experience into a wider frame that includes the properties and “social life” of non-human things, both organic and inorganic. And we were taken a step further into the concept of aesthetic empathy, in which we feel into, and emotionally engage with, characters, artworks and other non-human entities, including AI.
Expanding on the ecological approach, we unearthed two related processes that stood out as connecting with many of the insights below. They highlight key considerations for systems developers or those assessing them:
Need for customisability (e.g. of the system’s empathic functions and communicating expressions), including scope for reducing or removing the system’s expressiveness or empathic features, and;
Need for interoperability (e.g. of the AI systems themselves or the ethical standards concerning them).
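As a minimal sketch of the customisability point, the following shows how user-facing empathy settings might let expressiveness be dialled down or switched off entirely. The function, names and thresholds are hypothetical illustrations under our own assumptions, not any real system’s design:

```python
from dataclasses import dataclass

# Illustrative only: user-adjustable empathy settings, including the option
# to remove empathic behaviour entirely, as participants suggested.
@dataclass
class EmpathySettings:
    expressiveness: float = 0.5   # 0.0 = terse/mechanistic, 1.0 = highly animated
    empathic_responses: bool = True

def render_reply(core_answer: str, settings: EmpathySettings) -> str:
    """Wrap a factual answer in more or less emotive framing."""
    if not settings.empathic_responses or settings.expressiveness == 0.0:
        return core_answer                      # plain, robotic tone
    if settings.expressiveness < 0.5:
        return f"I see. {core_answer}"          # minimal acknowledgement
    return f"That sounds difficult. {core_answer} I hope that helps."
```

Keeping the factual answer separate from its emotive framing is also one plausible way to approach interoperability: the core content could pass unchanged between systems while the framing layer adapts to local settings.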
General insights
We found consensus in the following points (bolded for scan-reading, not for emphasis):
There are significant cultural differences between the “West” and Japan (and the “East”), and further differences between East Asian countries, and indeed within Japan. This cultural variation can be subtle and sensitive.
Current global governance (e.g. standards) is predominantly Western and lacking sensitivity or adaptability for regional factors.
Empathy requires respect for the subjective world of the other person (e.g. system user). This includes their cultural background, how they view other beings and objects (e.g. applying animism to non-living things), and their place in their community.
There was a challenge to the need for empathy in the first place, when:
Reduced expressiveness or empathic behaviour may be preferable (compare the emotive, Her-like character of OpenAI’s GPT-4o with the more robotic tone of Google’s Project Astra – examples here), or;
Other social, linguistic, etc. tools could do the job, such as politeness (an AI, as with a shopkeeper, could be polite without any attempt to empathise).
Comfort and willingness for intimacy and disclosure in interaction differ. This led to some scepticism about the scope for automated meaningful empathy.
To humanise or animise a technological system is much more normal and acceptable in Japan.
Japan has a different tendency towards trust. Broadly, there is greater trust in government, large corporations, and technologies. This trust interacts with high levels of respect, dignity, law-abiding, and the scale of potential fallout when that trust is broken.
This trust may need reconsideration in the face of the increasing market and societal access of Western media, technology and business in Eastern life.
Western approaches to ethical factors such as data privacy & protection, accountability, and transparency, should be adapted to account for Eastern attitudes, such as for a more ecocentric approach that considers wider society and the environment as key stakeholders (in addition to the individual).
General-purpose AIs (GPAIs) trained on Western data corpora can expose Eastern users to taboo subjects, so some level of regional filtering may be needed.
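A crude, purely hypothetical sketch of such regional filtering follows; the topic lists and region codes are placeholders of our own invention, and a real system would need trained classifiers and community-defined policies rather than keyword matching:

```python
# Hypothetical per-region lists of sensitive topics (placeholders only).
REGIONAL_TABOOS = {
    "jp": {"example_taboo_topic"},
    "id": {"blasphemy"},
}

def filter_response(text: str, region: str) -> str:
    """Replace a model response that mentions a regionally sensitive topic."""
    for topic in REGIONAL_TABOOS.get(region, set()):
        if topic.replace("_", " ") in text.lower():
            return "I'd rather not discuss that topic here."
    return text
```

Even this toy version surfaces the governance question raised in the workshops: who decides what goes in each region’s list, and at what point does filtering shade into impeding reasonable access?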
Deception (as a key ethical issue for empathic GPAIs – paper coming soon) is normal to some extent in all regions, and may be more acceptable – desirable, even – in the East in some contexts, such as where politeness or flattery are expected in the human-machine dialogue.
“Are you okay with this? …Are you still okay?” – Empathic GPAI have potential for long-term and evolving impacts. Thus, meaningful consent and other arrangements will need to be updated at reasonable frequencies, on an ongoing basis.
Empathic interaction produces a psychological “energy cost” for the participants. As such, it may be preferable to design the system to communicate in a succinct, mechanistic, robotic fashion, rather than one that is animated and gregarious (and empathic). Excessive use could lead to empathy fatigue.
A great deal of the meaning exchanged in real-world interaction is conveyed through metaphor, figurative language, abstract gestures, and indeed what’s not said. This is already a challenge between humans (e.g. of different backgrounds), and it is surely largely missing from our current AI systems.
Attendees of Workshop 1
Standards and empathic GPAI systems
Zooming in now with more specificity on standards development, particularly the emerging body of ethical standards and those covering empathic technologies, the outstanding insights were that:
The typical timing of standards development meetings (biased towards Western working time) makes it very difficult for people in Eastern regions to contribute.
Even if the time zone is accommodating, there are language and social barriers that can make it hard for Japanese people to contribute to the development process. For instance, a strong desire for harmony and respect for politeness can lead to a Japanese person remaining quiet on a working group call and missing their chance to speak.
A good global standard may need to include cultural sensitivity and “cultural explainability”, whereby the system owner should assess and justify how their designs account for cultural differences in the audiences they wish to sell to (e.g. Eastern users).
Greater consideration of, and adaptation to, different cultures in system design may entail trade-offs, such as increased expenses for research, functionality and so on.
AI literacy of potentially affected stakeholders (e.g. users) should be a priority of systems developers and corresponding standards. This should account for cultural factors, too.
Where interoperability is traditionally a core feature of effective and usable standards, there is a related need for these new ethical standards to establish an ethical (and cultural, social, etc.) “interoperability”. This could manifest in features such as shared taxonomies, universal models, general principles, and so on, but is likely to be limited in scope.
While the whole planet may never agree on truly universal ethical standards, we should strive for a high minimum threshold of restrictions and good practices, and perhaps then provide scope for contextual (e.g. national) add-ons to adapt these global soft-law mechanisms.
Ethical standards (and other governance mechanisms) may include assessment and oversight requirements – such as human-in-the-loop monitoring, or third-party certification – but these are not equally valued between Eastern and Western cultures, where there are differences in norms regarding trust, respect, dignity and suchlike. Instead, there can be a preference for trusting the system developer to respect laws and standards without the shame and intrusion of an overseer.
Next Steps
There’s a lot to digest here! We have already begun drafting a white paper, which will explore the most salient insights in more detail. And we will feed them into the IEEE P7014.1 working group, which will at last kick off later this month.
If you’re interested in contributing to the development of the IEEE P7014.1 global standard for ethics in empathic GPAI, please reach out. More on the group’s website here.