Different Song; Same Dance – Ethical AI Perspectives from Tanzania, Zanzibar and Africa
AEGIS Workshop at University of Dar Es Salaam
Tanzania, Zanzibar, Africa, and their myriad constituent cultures have much to teach us about fostering safer, more inclusive technology. With a story that is both ancient and new, these regions are overdue for their rightful participation in the emerging world of human-AI partnerships.
In May 2025, the AEGIS team co-hosted two workshops in Tanzania: one in Dar es Salaam with the University of Dar Es Salaam, and the other with the Law School of Zanzibar. This was our final research visit of the project before the conclusion of our funded work. Continuing the theme of the AEGIS project, participants from a blend of academia, industry, policy and civil society discussed how the emergence of empathic AI partners should be considered in a regional context.
And how is that?
Preface: framing the cultural context
Well, once again, it’s complicated. Africa is many things. Tanzania alone boasts over 120 tribes, each with its own language. It is within the top 30 most ethnically diverse countries on Earth, alongside 24 other African nations. This inherent diversity must be at the forefront of any attempt to sketch a profile of a region like Tanzania, or of the African continent; each is many things.
Our goal from the outset has been to hear ethical perspectives (on empathy and human-AI partnering) from underrepresented regions, but of course this theme is immediately problematic. Who is more or less underrepresented? What do we mean by region? To what extent can one contiguous land area represent a united group of people anyway? Generalisation, abbreviation and omission are inevitable. While keeping this cautionary preface in mind, engaging with Tanzanian participants, we discover aspects that are unique while also resonating with other regions we have visited.
Let’s try to illuminate some…
The shared cultural identity of Tanzania, and indeed much of sub-Saharan Africa (insofar as any singular traits can be claimed), is typically centred on collectivism, highlighting principles such as:
Ubuntu (interconnectedness)
Communitarian emphasis
Primacy of the family unit
Duty
Respect
The first region we examined under the AEGIS project was Japan (following earlier research by our extended team), and it’s easy to see a striking similarity in the collectivist nature of that society. But clearly these are two very different places and peoples: each country’s placement on a human-development index, for instance, sits at nearly the opposite end of the chart. Yet, as we discussed in our Japan whitepaper, each region’s collectivist aspects similarly contrast with those of the cultures within which the world’s leading foundation models (GPT, Gemini, etc.) were created (e.g. USA, “the West”) – and thus deserve inclusion in technology planning.
Imagining communitarian AI
The lived reality of Tanzanian culture was illustrated most vibrantly in an exercise led by Dr. Irene Isibika from Mzumbe University, in which she split us into teams to act out scenarios of imaginary AIs stumbling on local nuances. The picture presented was one in which families and the surrounding immediate community have a big voice in a person’s life, regarding their relationships, career choice and so on. The performances on our little stage starkly animated the cultural blinkers that western-trained language models wear. Misalignments (over issues such as child discipline, intimacy and same-sex relationships) can cause serious miscommunication between AI and users in African contexts.
Teams practising for their unexpected performances
This kind of participatory research & design can be as enlightening as it is fun (if a little scary!).
In communitarian society, a person is likely to define their identity through their roles within key community groups – as an employee, as a brother – more so than in individualistic societies, and in turn to derive their duties from that context. As intelligent machine systems become ever more integrated with our lives, and develop ever more intimate relationships with us, where do such cultural norms play into the design of those systems, especially considering the western origins of the leading foundation models? Their normal mode of operation is a single-user interface, collecting data on that individual and gradually tailoring outputs to them. Can such interaction be adapted to a collectivist user base (e.g. where privacy and human dignity are interpreted differently), with group interaction rather than individual? How would that be optimised, measured, regulated? What would the user experience (UX) be like? Perhaps, where western models are designed for individual outcomes – whether engagement, utility or wellbeing – similar metrics could be translated into communitarian terms: the wellbeing of the surrounding social group; the level of uptake by the community.
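To make that last idea concrete, here is a minimal sketch of how individual-centred metrics might be translated into communitarian ones. Everything here is hypothetical: the names, the weighting scheme, and the choice to blend the group average with its worst-off member are our own illustration, not a measure proposed at the workshops.

```python
# Hypothetical sketch: communitarian analogues of individual AI metrics.
from dataclasses import dataclass

@dataclass
class Member:
    name: str
    wellbeing: float   # e.g. a self-reported score in [0, 1]
    uses_system: bool  # whether this member interacts with the AI

def group_wellbeing(members: list[Member], floor_weight: float = 0.5) -> float:
    """Blend the group's mean wellbeing with that of its worst-off member,
    so one person's decline cannot hide behind a healthy average."""
    scores = [m.wellbeing for m in members]
    mean = sum(scores) / len(scores)
    worst = min(scores)
    return (1 - floor_weight) * mean + floor_weight * worst

def community_uptake(members: list[Member]) -> float:
    """Share of the community actually using the system: one crude proxy
    for communal (rather than individual) acceptance."""
    return sum(m.uses_system for m in members) / len(members)

family = [Member("Amani", 0.8, True), Member("Neema", 0.4, True),
          Member("Juma", 0.9, False)]
print(f"Group wellbeing: {group_wellbeing(family):.2f}")    # 0.55
print(f"Community uptake: {community_uptake(family):.0%}")  # 67%
```

How to weight, whose wellbeing counts, and who consents to group-level data collection are exactly the open questions the workshops raised.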
Nurturing AI literacy and practice
At this time, AI usage, literacy and expertise are relatively limited in Tanzania. Counsel Kheri Mbiro from Breakthrough Attorneys explained that AI is mostly used in Tanzania for banking and finance, academia, health, agriculture, telecommunications, and legal practice. There is a major lack of technical expertise, and most ordinary people fall well short of the understanding needed to exercise agency in the technology’s adoption – and this extends right up to government officials. AI literacy and AI readiness may not yet even be quantified in much of Africa (as implied by UNESCO supports Consultations for AI Readiness Assessment in Tanzania). Critical awareness could help arm ordinary people to mitigate some of AI’s dangers and use it in more rewarding ways (although this is hardly a regional issue). To quote one of the Zanzibar workshop attendees, “Information literacy is supposed to be a human right. It’s high time AI literacy should be a human right”.
Local data is sparse. Adding fuel to the existing fire of western- (and male-, white-, wealthy-, etc.) biased datasets, on which many modern digital services are built, the quality, volume and accessibility of data on minority and indigenous groups throughout the world are meagre. Tanzania and its many cultural groups are no exception. On top of this, we heard how research and support for science are largely confined to public institutions (e.g. universities), with little additional private or individual research, and funding is relatively low even where it exists.
Regulation, policy, standards, laws – these too are thin on the ground in Tanzania. As we saw in Indonesia, the region has some history of adopting and adapting pioneering laws, which we could see happening with the likes of the EU AI Act. But caution was voiced about avoiding the Brussels Effect and looking instead to sources such as the African Union (AU) and Islamic tradition, alongside discussions of digital sovereignty in Africa and wariness of Chinese investment and modern forms of colonisation.
Consider that mainland Tanzania’s religious mix is primarily Christian, with a significant Muslim minority, as well as many other (e.g. Animist) faiths, while Zanzibar is 99% Muslim. The island of Zanzibar deserves its own stories, but suffice to say that, despite its size, it has rich traditions and significant impact on Tanzania and beyond, and claims to be undergoing a digital transformation. And while we can suppose that Christianity has had some influence on the dominant western AI models, Islam could add various instructive ethical concepts, such as Maslaha (relating to the greater good / public interest). Of course, the same goes for the ethical framings of other cultural, religious and social groups; all can bring flavour and strength to the current, narrow framing.
Vibrant workshop at the Law School of Zanzibar
Localisation and adaptation
All the above issues point towards a need (and opportunity!) for localisation of the technology to better fit regional cultural characteristics. This challenge raises loud echoes of our recent interactions in Indonesia, as described in our post AI Localisation and Extreme Diversity: Insights from AEGIS Workshop, Jakarta, Jan 2025.
We began to ask what such localisation could look like and how it could be created, but this remains a rich area for further research and development (project, anyone?!). Some prominent insights from our discussions suggested that:
Each region or group should consider building a localised (e.g. African, Muslim) “layer” over existing (western) tools such as the foundation models, rather than attempting to build competitive equivalents. For instance: gathering indigenous datasets, maintaining ownership of them within the local community, and managing external access to that data (see the sketch after this list).
Jobs could be created to overcome language and culture issues. We heard that the likes of GPT struggled even to translate into Swahili, a major language, while Africa likely has well over 1,000 spoken languages, and Tanzania alone over 100.
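As a thought experiment, a community-owned localisation layer might look something like the sketch below: a thin wrapper that retrieves context from a locally governed corpus, gates external access to it, and only then calls out to a foundation model. All names are hypothetical, and the foundation-model call is a placeholder rather than any real API.

```python
# Illustrative sketch of a community-owned "localisation layer" over a
# foundation model. Corpus content, consumer IDs and the model call are
# all invented for illustration.

LOCAL_CORPUS = {
    # Community-curated snippets, stored and governed locally.
    "greetings": "Elders are greeted first; 'Shikamoo' expects the reply 'Marahaba'.",
    "family": "Major personal decisions are commonly discussed with the extended family.",
}

APPROVED_CONSUMERS = {"local-health-app"}  # the community controls external access

def retrieve_local_context(query: str) -> list[str]:
    """Naive keyword retrieval from the locally owned corpus."""
    return [text for topic, text in LOCAL_CORPUS.items() if topic in query.lower()]

def call_foundation_model(prompt: str) -> str:
    """Placeholder for a call to an external (e.g. western) foundation model."""
    return f"[model response to: {prompt[:60]}...]"

def localised_answer(query: str, consumer_id: str) -> str:
    if consumer_id not in APPROVED_CONSUMERS:
        raise PermissionError("Access to community data not granted.")
    context = retrieve_local_context(query)
    prompt = ("Respect the following local norms:\n" + "\n".join(context)
              + f"\n\nUser question: {query}")
    return call_foundation_model(prompt)

print(localised_answer("How should the app handle greetings?", "local-health-app"))
```

The key design choice is that the corpus and the access list live with the community; the foundation model only ever sees what the layer chooses to share.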
Most of the Dar Es Salaam attendees, outside our venue – the Confucius Institute at University of Dar Es Salaam
Participation in the great global game
In conclusion, let’s try to paraphrase multiple participants from our two workshops:
Africa is not a spectator in the global race to define AI. Now is the moment to educate, empower, and build – ensuring that emerging technologies are developed and governed with African languages, values, and lived realities at the centre.
Automating Empathy – A Short Film
With immense pride, and more than a few goosebumps, we can finally announce the release of our short animated film based on the work of the AEGIS project: Automating Empathy.
Watch our 6-min film and learn about its making on the Automating Empathy Film page.
AI Localisation and Extreme Diversity: Insights from AEGIS Workshop, Jakarta, Jan 2025
Insights from a cross-disciplinary workshop in Jakarta, Indonesia, exploring Indonesian perspectives on AI companions and emulated empathy in human-AI partnerships.
Exploring Indonesian perspectives on automated empathy in Jakarta, Java, Indonesia
Indonesia is a world within a country! “One of the most diverse countries in the world”, it is an enormous source of both opportunity and complexity. The AEGIS team is in Jakarta, capital of Indonesia, to hear about this country’s extraordinary position in the emerging paradigm of human-AI partnering and automated empathy.
Watch the videos
Watch some highlights from the AEGIS workshop in downtown Jakarta.
A brief highlight from our interview with Arif Perdana from Monash University, Indonesia. Arif talks about the uniqueness of Indonesian culture in the context of AI partnerships.
Workshop context
The workshop was a collaboration with Action Lab Indonesia and the Data and Democracy Research Hub at Monash University, Indonesia, led by Arif Perdana, and held at the JS Luwansa Hotel and Convention Center in downtown Jakarta. It was attended by 22 people, bringing expertise from academia, tech industry, government, journalism and civil society organisations. The theme was “to explore ethics of emulated empathy in human-AI partnerships”, particularly examining how the forthcoming IEEE 7014.1 recommended practice could be informed by Indonesian perspectives.
Hearing participant input on the workshop issues
Besides the AEGIS team, we were treated to rich presentations from:
Ayu Purwarianti, from Institut Teknologi Bandung (on Technical challenges in developing culturally aware empathic systems).
Sinta Dewi, from Universitas Padjadjaran, Dr. Ir. Lukas, from Unika Atma Jaya, NLP Communities/AI Society, and Fahrizal Lukman Budiono, from Ministry of Communication & Digital Affairs (Komdigi) (on The balance between innovation and regulation in empathic AI).
Derry Wijaya, from Monash University, Indonesia (on Cross-cultural challenges in interpreting and responding to emotions).
Irvan Bastian Arief, from tiket.com, and Harun Mahbub Billah, from Liputan6.com (on Future trends and potential developments in commercial emotive AI).
We were treated to excellent speakers
Workshop insights
The top line: Indonesia demands hyperlocalised AI
Though a single country of 280 million people, Indonesia is strewn across 17,500 islands in a chain stretching over 5,000 kilometres. Over 600 ethnic groups call Indonesia home, conversing in over 700 languages, plus many more dialects. While Islam is the leading religion, it is one of six officially recognised religions, alongside hundreds of indigenous spiritualities. This is a country as diverse as a continent. Cultural perspectives and norms, and their consequent sensitivities, vary dramatically across vectors such as faith, social hierarchy and gender identity. Indonesia is, in a word, diverse.
For any technology as customisable and personalisable as AI, such a sweeping mix of cultures is a major challenge. Regulation and design could restrict AI from offending some sensitivities, as is already the case (e.g. cursing, hate speech) but at what point do such limitations impede reasonable access and functionality? The alternative route is one of variation and adaptation to accommodate the needs of different groups and individuals, or by use case (e.g. medical, education) but this would require sufficient customisation of model training, user access and other features. This would not be simple, especially with the leading AIs already being trained largely on western data, with poor representation of eastern nuance, let alone small cultural groups with low digital access or literacy.
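One way to picture the “variation and adaptation” route is a per-locale, per-use-case policy table, as sketched below. The locale codes, topics and policy fields are all invented; a real system would need community input to define any of them.

```python
# Toy sketch of adaptation by locale and use case. All codes, topics and
# policy fields are hypothetical.

LOCALE_POLICIES = {
    ("locale-A", "education"): {"filter_topics": ["gambling"], "formality": "high"},
    ("locale-B", "health"):    {"filter_topics": [],           "formality": "medium"},
}
DEFAULT_POLICY = {"filter_topics": [], "formality": "medium"}

def policy_for(locale: str, use_case: str) -> dict:
    """Fall back to a default where no specific policy exists; the open
    question is where restriction ends and impeded access begins."""
    return LOCALE_POLICIES.get((locale, use_case), DEFAULT_POLICY)

def apply_policy(reply: str, policy: dict) -> str:
    """Suppress replies touching topics filtered for this locale/use case."""
    for topic in policy["filter_topics"]:
        if topic in reply.lower():
            return "This topic is unavailable for your region and use case."
    return reply

policy = policy_for("locale-A", "education")
print(apply_policy("Some tips on gambling odds...", policy))
```

Even this toy version exposes the tension described above: every row in the table is a judgement call about whose sensitivities to encode, and a blanket filter for one group can be an access barrier for another.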
Our Indonesian participants described their fellow nationals as exceptionally keen to innovate and to adopt new tools and practices, with usage of AI companions and ghostbots being relatively high. “We love technology,” one said. “We embrace it. But we are not cautious.” This overconfidence is apparent in a global survey, the IPSOS AI Monitor 2024, which reported that Indonesians topped the world in confidence of having a “good understanding of what artificial intelligence is”, while simultaneously ranking near the top for trusting that “companies that use artificial intelligence will protect my personal data” and trusting “artificial intelligence to not discriminate or show bias towards any group of people”. Our expert participants painted a picture of Indonesians as early adopters, wired for change, yet vulnerable to misaligned influences from powerful new technologies.
General Insights
Digital and AI literacy is generally low, despite the enthusiastic adoption of new technologies mentioned above. This literacy issue may extend to tech policymakers as well.
Anthropomorphism was raised frequently in our conversations, in connection with eastern animism as well as the above-mentioned vulnerability to emotional manipulation as AI becomes more empathic. Opportunity may lie in sensitisation to values-based design, where subjectivity is present in objects.
The old trade-off between regulation and innovation deserves particular attention in the Indonesian context. Questions were raised about strong versus weak regulation, and how to facilitate the exceptional growth desired in the country while doing so safely and sustainably (not unlike UK plans for economic and AI growth).
– Note: For more on Indonesia’s national tech objectives, see Visi Indonesia Digital 2045.
Indonesia has a precedent of adopting international policies where suitable, rather than writing its own from scratch. For instance, GDPR has greatly influenced Indonesia’s Personal Data Protection Law (2022). So for AI, there is likely a readiness to onboard good policy and standards from outside the country (if available), with IEEE P7014.1 receiving explicit mention.
Just a few days ago, Indonesia joined the BRICS countries as a full member. This raises questions of how the country will adopt and develop tech policy, innovation and entrepreneurship.
Presentation content ranged between technical and social
The bottom line: what should be done?
These workshop insights point to an urgent need for both technology regulators and producers to accommodate the nuanced needs of the cultural and regional groups that exist across this sprawling archipelago. How can the global community help to enable this? It seems to follow that we should support decentralisation of AI development, with deep configuration to local contexts, while minimising intrusive personalisation methods. This is easier said than done, but the tenacity is present in Indonesia.
Increasingly intimate and potent human-machine partnerships are encountering an Indonesia full of energy and dynamism, keen for innovation, open to change, but rife with the complexities that come from serving such a diverse population.
Most of the participants with the AEGIS team
Ecological Empathy in the East and West: Insights from AEGIS Workshops, Tokyo, June 2024
Konnichiwa! The Project AEGIS team is in downtown Tokyo, kindly hosted by partners at the National Institute of Informatics (NII). We are here to explore regional cultural perspectives and ideas regarding technology at the intersection of emulated empathy and human-AI partnering.
The central activity for this stage of the project is a pair of all-day workshops at NII headquarters with a diverse, multidisciplinary mix of attendees. Here’s an overview of the workshops.
– Watch a short video of the Japan workshops.
Sumiko Shimo shares cultural insights with workshop attendees.
Workshop Context
There is much to learn from Japan’s relationship with technology. There are of course social and philosophical differences between the so-called Western nations and Japan and its “ethically-aligned” neighbours (e.g. Taiwan). More than that, Japan has a particularly interesting history, shared attitude, and governance approach regarding computers, robots, and human interaction with technology.
At the risk of massive oversimplification, here are some of the main considerations that we had in mind, going into the start of the workshops:
In Japan there is much greater interest in, and acceptance of, the role of technology in society.
The experience of Modernity between East and West is markedly different.
The science and business of AI, empathy and related topics are predominantly Western. Empathic AI models, for instance, are informed by psychological models that may be built on data from WEIRD (Western, educated, industrialised, rich, and democratic) researchers and participants.
If there is a key difference between Western vs Eastern societal philosophy and attitudes, it could be individualism vs collectivism, respectively.
For more on this background context, you can dig into earlier work by the Emotional AI Lab based on previous UK-Japan research here (in particular, see McStay, A. (2021) Emotional AI, Ethics, and Japanese Spice: Contributing Community, Wholeness, Sincerity, and Heart, Philosophy & Technology.)
Workshop Structure
Both sessions were full 9-5 days, each attended by ~16 guests from academia, the tech industry, law & policymaking, civil society organisations, and consumer protection groups. Both days followed the same format, with some updated prompts and content for the second day based on outputs from the first. Our team and selected guests provided contextual talks about related work and societal issues, and in between them we gathered attendee insights from small-group sessions on the following themes:
Use cases: Group discussion on examples of use of these technologies.
Global versus local ethics: Is global governance too Western, and what can the world learn from East Asia?
Ethical issues: What ethical considerations arise from the use cases we have listed?
That’s great, but how do we apply it?: What guidance and restrictions might be included in an ethical standard for these technologies?
Special thanks to our guest speakers: Frederic Andres, Sumiko Shimo, Amy Kato, Chih-Hsing Ho, Takashi Egawa.
Insights
The top line: an ecological approach
If we attempt to sum up the workshop outputs in a single connecting theme, we could call it an “ecological” approach. The workshops repeatedly raised a need for both empathic GPAI and their corresponding standards to respect, account for, and adapt to the context of the human affected by the system (e.g. the user). There are many dimensions to this “human context” and many ways in which the “ecological” approach can be enacted, some of which are listed in more detail below. None of this should be surprising, but our participants illuminated some instructive and challenging concepts that underpin this overarching theme of ecology.
Ecological and aesthetic empathy: As we will discuss in more detail in our forthcoming white paper, the workshops explored ecological empathy, which repositions humans and their emotional experience into a wider frame that includes the properties and “social life” of non-human things, both organic and inorganic. And we were taken a step further into the concept of aesthetic empathy, in which we feel into, and emotionally engage with, characters, artworks and other non-human entities, including AI.
Expanding on the ecological approach, we unearthed two related processes that stood out as connecting with many of the insights below. They highlight key considerations for systems developers and those assessing them (a brief sketch of the first follows this list):
Need for customisability (e.g. of the system’s empathic functions and communicative expressions), including scope for reducing or removing the system’s expressiveness or empathic features, and;
Need for interoperability (e.g. of the AI systems themselves or the ethical standards concerning them).
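The customisability point, in particular, is easy to picture in code. Below is a minimal, hypothetical sketch of a user-facing configuration that can scale an empathic system’s expressiveness down to zero; nothing here reflects any real product’s API.

```python
# Hypothetical sketch: user-configurable empathic expressiveness,
# including the option to switch empathic behaviour off entirely.
from dataclasses import dataclass

@dataclass
class EmpathyConfig:
    expressiveness: float = 0.5    # 0.0 = flat and mechanistic, 1.0 = animated
    empathic_responses: bool = True

def render_reply(base_reply: str, config: EmpathyConfig) -> str:
    if not config.empathic_responses:
        return base_reply  # plain, robotic delivery
    if config.expressiveness > 0.7:
        return f"Oh, I really do understand. {base_reply} I'm here for you!"
    return f"I understand. {base_reply}"

# A user who finds empathic chatter tiring (see the "energy cost" insight
# below) can dial it down or off:
quiet = EmpathyConfig(expressiveness=0.0, empathic_responses=False)
print(render_reply("Your appointment is at 15:00.", quiet))
```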
General insights
We found consensus on the following points:
There are significant cultural differences between the “West” and Japan (and the “East”), and further differences between East Asian countries, and indeed within Japan. This cultural variation can be subtle and sensitive.
Current global governance (e.g. standards) is predominantly Western and lacking sensitivity or adaptability for regional factors.
Empathy requires respect for the subjective world of the other person (e.g. system user). This includes their cultural background, how they view other beings and objects (e.g. applying animism to non-living things), and their place in their community.
There was a challenge to the need for empathy in the first place, when:
Reduced expressiveness or empathic behaviour may be preferable (compare the emotive, Her-like character of OpenAI’s GPT-4o with the more robotic tone of Google's Project Astra – examples here), or;
Other social, linguistic, etc. tools could do the job, such as politeness (an AI, as with a shopkeeper, could be polite without any attempt to empathise).
Comfort and willingness for intimacy and disclosure in interaction differ. This led to some scepticism about the scope for automated meaningful empathy.
To humanise or animise a technological system is much more normal and acceptable in Japan.
Japan has a different tendency towards trust. Broadly, there is greater trust in government, large corporations, and technologies. This trust interacts with high levels of respect, dignity, and law-abidance, and with the scale of the potential fallout when that trust is broken.
This trust may need reconsideration in the face of the increasing penetration of Western media, technology and business into Eastern life.
Western approaches to ethical factors such as data privacy & protection, accountability, and transparency should be adapted to account for Eastern attitudes, such as a more ecocentric approach that considers wider society and the environment as key stakeholders (in addition to the individual).
General-purpose AIs (GPAIs) trained on Western data corpora can expose Eastern users to taboo subjects, so some level of regional filtering may be needed.
Deception (as a key ethical issue for empathic GPAIs – paper coming soon) is normal to some extent in all regions, and may be more acceptable – desirable, even – in the East in some contexts, such as where politeness or flattery are expected in the human-machine dialogue.
“Are you okay with this? …Are you still okay?” – Empathic GPAI have potential for long-term and evolving impacts. Thus, meaningful consent and other arrangements will need to be updated at reasonable frequencies, on an ongoing basis.
Empathic interaction produces a psychological “energy cost” for the participants. As such, it may be preferable to design the system to communicate in a succinct, mechanistic, robotic fashion, rather than one that is animated and gregarious (and empathic). Excessive use could lead to empathy fatigue.
A great deal of the meaning exchanged in real-world interaction is conveyed through metaphor, figurative language, abstract gestures, and indeed what's not said. This is already a challenge between humans (e.g. of different backgrounds); surely much of it is missing from our current AI systems.
Attendees of Workshop 1
Standards and empathic GPAI systems
Zooming in now on standards development, particularly the emerging body of ethical standards and those covering empathic technologies, the outstanding insights were that:
The typical timing of standards development meetings (biased towards Western working time) makes it very difficult for people in Eastern regions to contribute.
Even if the time zone is accommodating, there are language and social barriers that can make it hard for Japanese people to contribute to the development process. For instance, a strong desire for harmony and respect for politeness can lead to a Japanese person remaining quiet on a working group call and missing their chance to speak.
A good global standard may need to include cultural sensitivity and “cultural explainability”, whereby the system owner should assess and justify how their designs account for cultural differences in the audiences they wish to sell to (e.g. Eastern users).
Greater consideration of, and adaptation to, different cultures in system design may entail trade-offs, such as increased expense for research, functionality and so on.
AI literacy of potentially affected stakeholders (e.g. users) should be a priority of systems developers and corresponding standards. This should account for cultural factors, too.
Where interoperability is traditionally a core feature of effective and usable standards, there is a related need for these new ethical standards to establish an ethical (and cultural, social, etc.) “interoperability”. This could manifest in features such as shared taxonomies, universal models, general principles, and so on, but is likely to be limited in scope.
While the whole planet may never agree on truly universal ethical standards, we should strive for a high minimum threshold of restrictions and good practices, and then perhaps provide scope for contextual (e.g. national) add-ons to adapt these global soft-law mechanisms (see the sketch after this list).
Ethical standards (and other governance mechanisms) may include assessment and oversight requirements – such as human-in-the-loop monitoring, or third-party certification – but these are not equally valued between Eastern and Western cultures, where there are differences in norms regarding trust, respect, dignity and suchlike. Instead, there can be a preference for trusting the system developer to respect laws and standards without the shame and intrusion of an overseer.
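To illustrate the “high minimum threshold plus contextual add-ons” idea from the list above, here is a toy sketch of an ethical baseline that regional profiles may tighten or extend, but never weaken. All rule names and values are invented for illustration only.

```python
# Toy sketch of "ethical interoperability": a shared global baseline with
# contextual (e.g. national) add-ons layered on top. All rules are invented.

GLOBAL_BASELINE = {
    "disclose_non_human": True,       # the system must identify itself as AI
    "max_consent_refresh_days": 180,  # revisit consent at least this often
}

REGIONAL_ADDONS = {
    "JP": {"max_consent_refresh_days": 90, "politeness_register": "formal"},
}

def effective_standard(region: str) -> dict:
    """Merge a region's add-ons over the baseline; add-ons may only make
    the consent-refresh requirement stricter, never laxer."""
    merged = dict(GLOBAL_BASELINE)
    for key, value in REGIONAL_ADDONS.get(region, {}).items():
        if key == "max_consent_refresh_days":
            merged[key] = min(merged[key], value)  # stricter value wins
        else:
            merged[key] = value
    return merged

print(effective_standard("JP"))
# {'disclose_non_human': True, 'max_consent_refresh_days': 90,
#  'politeness_register': 'formal'}
```

Note how the merge rule encodes the “high minimum threshold” principle: regional layers can add obligations or tighten existing ones, but the baseline itself is non-negotiable.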
Next Steps
There’s a lot to digest here! We have already begun drafting a white paper, which will explore the most salient insights in more detail. And we will feed them into the IEEE P7014.1 working group, which will at last kick off later this month.
If you’re interested in contributing to the development of the IEEE 7014.1 global standard for ethics in empathic GPAI, please reach out. More on the group’s website here.