NCM is situated on the lands of the Wurundjeri Woi-wurrung people. We pay respects to them, especially their Elders and storytellers, as well as all First Peoples, nationwide. NCM acknowledges that communication technologies have a long history here, far longer than European occupation.

A Robot of Many Minds: When Algorithms Take Human Shape


Photo: Casey Horsfield, NCM, 2025.

Essay

Sarah Schömbs and Associate Professor Wafa Johal • 18 Mar 2026

Artificial Intelligence (AI) has slipped into our everyday lives with quiet persistence. It executes our tasks, knows our information and shares our space. We wake to its suggestions, navigate by its guidance and entrust it with questions we might once have asked a close friend. Its algorithms have arrived not with fanfare, but through gradual, almost imperceptible integration into the fabric of daily existence.

As we face global, existential challenges, the ability to understand AI’s role in our lives has never been more critical. When does AI serve as a tool? When does it act as a friend? And when does it cause harm, whether through deliberate design, unintended error, or misuse?

Walking the tightrope between AI risks and benefits

In a world of rapid technological acceleration, benefit and harm can be difficult to distinguish. Large language models can make certain tasks faster and more accessible for many people, but they also carry risks: information hazards, discrimination and misinformation, sometimes with detrimental real-world consequences. Because these systems work by predicting the most likely next word based on patterns in the data they were trained on, rather than by truly understanding the world, they may leak confidential information, misrepresent groups and generate falsehoods that ripple outward through, for instance, our social networks [1].
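
To make the ‘next-word prediction’ point concrete, here is a deliberately toy sketch: a bigram model that samples the next word purely from co-occurrence counts. Real large language models use neural networks trained on vast corpora, but the underlying principle, statistical pattern-matching rather than understanding, is the same. The corpus and names here are invented for illustration.

```python
import random
from collections import Counter, defaultdict

# A tiny corpus standing in for training data.
corpus = "the cat sat on the mat and the cat saw the dog".split()

# Count which word follows which: a crude stand-in for the statistical
# patterns a language model extracts from text at vast scale.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Sample the next word in proportion to how often it followed `word`."""
    words, weights = zip(*follows[word].items())
    return random.choices(words, weights=weights)[0]

# The model 'knows' only co-occurrence counts, not meaning:
print(predict_next("the"))  # 'cat', 'mat' or 'dog', weighted by frequency
```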

As AI moves beyond conversation into action, these tensions deepen. Rather than simply answering questions, AI systems can now operate on our behalf in open-ended environments. They can book flights [2], sell items on marketplaces [3,4], browse the web, access files and carry out long chains of tasks with little human input. This capacity to act autonomously in the world, to plan, perceive and execute actions and decisions across multiple steps, is what researchers refer to as being ‘agentic’. The same capabilities that can democratise knowledge and break down language barriers can also, without adequate human oversight, lead to consequential actions that users may not fully understand or be able to oversee [5,6]. As AI adoption spreads, what begins as an individual risk can quickly become a systemic one.
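
To make ‘agentic’ concrete, below is a simplified sketch of the plan-act-observe loop such systems run. The planner here is scripted; in a real agentic system that role is played by a model choosing actions from context. All function and tool names are hypothetical, not any real framework.

```python
# A schematic 'agentic' loop: plan an action, execute it with a tool,
# observe the result, repeat. Purely illustrative.

def scripted_planner(goal, observations):
    """Stand-in for a model deciding the next action from context."""
    if not observations:
        return "search_flights", {"route": "MEL-SYD"}
    if observations[-1][0] == "search_flights":
        cheapest = observations[-1][1][0]
        return "book_flight", {"flight": cheapest}
    return "finish", {"summary": f"Booked {observations[-1][1]}"}

TOOLS = {
    "search_flights": lambda route: [f"{route} 08:15", f"{route} 19:30"],
    "book_flight": lambda flight: flight,
}

def run_agent(goal, max_steps=5):
    observations = []  # the agent's growing record of what it has seen and done
    for _ in range(max_steps):
        action, args = scripted_planner(goal, observations)
        if action == "finish":
            return args["summary"]  # a multi-step chain completed with no human input
        observations.append((action, TOOLS[action](**args)))
    return "stopped: step budget exhausted"

print(run_agent("book a Melbourne-Sydney flight"))
```

Each step here is small, but chained together they produce a consequential outcome (a booking) that the user never individually approved, which is precisely the oversight problem the paragraph above describes.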

Navigating this landscape of AI risks and benefits requires people to become critical consumers of AI technology, able to question its intelligence and trustworthiness [7], weigh its benefits against its risks and judge when, if, and how to place their trust in it.

In multi-agentic systems, multiple algorithmic entities interact in parallel, each guided by objectives and constraints we rarely see. It is not just one AI that operates on our behalf, but several, like a personal team of specialised assistants. This multi-agent structure lets such systems accomplish even more complex tasks with greater performance [8]. Photo by Marie-Luise Skibbe, 2026.

Understanding can shift power and empower users to make informed decisions

If we are to mitigate these risks, the need to make AI systems understandable grows urgent. Most people interact with AI as they would with a knowledgeable friend, asking a question and expecting a reliable answer. But AI is not a deterministic, all-knowing intelligence that reliably reflects what is true about the world. Rather, it processes variables, weighs possibilities, and generates responses based on (often biased) patterns in data. In so-called ‘multi-agentic systems’, this complexity grows as multiple algorithmic entities interact in parallel to accomplish a task. However, the user usually encounters a single point of contact, a voice or a chat interface, often unaware of the layers of decision-making and action unfolding in the background as multiple AI agents collaborate invisibly behind it. Transparency and system understanding emerge as central challenges [9].
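
The ‘single point of contact’ pattern can be sketched in a few lines: one orchestrator receives the user’s request, delegates to specialist agents, and returns a single reply, so the delegation never surfaces to the user. The specialists and keyword routing below are hypothetical placeholders for what would, in practice, be model-driven.

```python
# Hypothetical sketch: one visible interface, several invisible agents.
SPECIALISTS = {
    "travel":   lambda q: f"[travel agent] itinerary drafted for: {q}",
    "research": lambda q: f"[research agent] sources gathered on: {q}",
    "writing":  lambda q: f"[writing agent] summary written for: {q}",
}

def route(query):
    """Keyword routing as a crude stand-in for a model-based dispatcher."""
    if any(w in query for w in ("flight", "trip", "hotel")):
        return "travel"
    if any(w in query for w in ("find", "sources", "evidence")):
        return "research"
    return "writing"

def orchestrator(query):
    # Multiple agents exist behind the scenes; the user sees one reply
    # and, typically, no trace of which agent produced it.
    return SPECIALISTS[route(query)](query)

print(orchestrator("plan a trip to Yokohama"))
```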


What happens when we lift the curtain? How can we make increasingly complex AI systems comprehensible? In A Robot of Many Minds, we took inspiration from the most intuitive reference point available to us: the human form.

People naturally attribute human-like characteristics to inanimate objects. Psychologists call this anthropomorphisation [10]. This instinct helps us make sense of the world and provides cognitive scaffolding for the unfamiliar. Researchers across fields such as human-robot interaction and information visualisation have drawn on this very capacity as a design principle. By giving systems, interfaces, or data representations human-like qualities, designers tap into cues that people already know how to interpret. A smile denotes something positive, while a sad expression connotes something bad. A mouth, as a design affordance, carries the implicit promise of dialogue, one that primes users to expect voice, speech, or conversational input as a natural mode of engagement. The result is that something abstract or unfamiliar becomes easier to read, to understand, and to engage with. Notably, the concept of AI agents itself is inherently anthropomorphic, as we speak of their ‘goals’, their ‘reasoning’, their ‘decisions’.

In A Robot of Many Minds, we physically embody different AI agents within a multi-agentic system, giving each of them a distinct appearance and voice. When a particular agent reasons or takes over, the robot embodies that agent, making visible what would otherwise remain hidden. Suddenly, the AI is no longer an abstract ‘something’, but an entity that occupies our space, that meets our eyes, that waits for our response. This embodiment invites questions we might not ask of a text box or voice assistant: would I trust this entity with this task? What makes someone, or something, trustworthy? These are the questions we ask about people, the critical assessments we make when we place trust in a friend or a colleague, or approach a stranger.


Anthropomorphism and embodiment as a path to AI transparency

This is the premise behind A Robot of Many Minds: to embody complex AI systems through a humanlike robot that physicalises an abstract concept and shares our space, ‘someone’ we can see and potentially touch. Each AI agent is assigned a distinct appearance, voice and personality within a shared physical robot form, so that users can begin to tell them apart, understand what each one does, and build a more informed mental model of the system. Appearance, tone of voice and facial expressions become tools to convey different roles, capabilities and responsibilities. When one agent hands over to another, the robot makes that shift visible through motion and expression, turning what is conventionally hidden from view into something a person can witness and follow. In this way, we begin to lift the curtain and invite people into a more informed engagement with the AI, because walking the tightrope between the benefits and risks of AI begins with understanding the system we are interacting with.
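
As a thought experiment only (not the exhibit’s actual software), the mapping from agent to embodiment cues, and the visible handover between agents, might be sketched like this; every field and cue value below is invented for illustration.

```python
from dataclasses import dataclass

# Illustrative only: giving each agent in a shared robot body distinct,
# legible cues, and announcing the otherwise hidden handover between them.
@dataclass
class Persona:
    name: str
    voice: str  # e.g. a pitch/timbre preset
    face: str   # e.g. an expression or colour scheme
    role: str   # what this agent is responsible for

PERSONAS = [
    Persona("Scout", voice="bright", face="wide-eyed", role="finds information"),
    Persona("Judge", voice="measured", face="raised-brow", role="weighs options"),
]

def hand_over(active, incoming):
    """Surface the switch between agents through visible/audible cues."""
    print(f"{active or 'idle'} -> {incoming.name}: "
          f"voice={incoming.voice}, face={incoming.face} ({incoming.role})")
    return incoming.name

active = None
for persona in PERSONAS:
    active = hand_over(active, persona)
```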

Importantly, these design choices open onto sociotechnical complexities. We must balance understanding with responsibility. When we use anthropomorphism as a deliberate design choice and draw on metaphors and cognitive scripts that people with no prior knowledge can grasp, we must also remain alert to what those choices carry with them. When we give abstractions a face and a voice, the decisions we make about representation carry weight. They can illuminate, but they can also reinforce the very archetypes and stereotypes that cause harm.

A Robot of Many Minds represents an exploration of how physical embodiment, anthropomorphic design and transparency might render the invisible machinery of multi-agent systems visible, tangible, and understandable.

References

[1] L. Weidinger et al. "Taxonomy of Risks posed by Language Models". In: 2022 ACM Conference on Fairness, Accountability, and Transparency. Seoul, Republic of Korea: ACM, June 2022, pp. 214–229. ISBN: 978-1-4503-9352-2. DOI: 10.1145/3531146.3533088.

[2] Z. Shao, J. Wu, W. Chen, and X. Wang. "Personal Travel Solver: A Preference-Driven LLM-Solver System for Travel Planning". In: Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics. Vienna, Austria: Association for Computational Linguistics, July 2025, pp. 27622–27642. DOI: 10.18653/v1/2025.acl-long.1339.

[3] N. Goyal, M. Chang, and M. Terry. "Designing for Human-Agent Alignment: Understanding what humans want from their agents". In: Extended Abstracts of the CHI Conference on Human Factors in Computing Systems. New York, NY, USA: Association for Computing Machinery, May 2024, pp. 1–6. DOI: 10.1145/3613905.3650948.

[4] I. Y. Wang et al. Evaluating Bargaining Skills in Online Second-Hand Marketplace with LLM Seller Agents. 2025. URL: https://openreview.net/forum?id=TG8b8LmRsY.

[5] A. Chan et al. "Harms from Increasingly Agentic Algorithmic Systems". In: 2023 ACM Conference on Fairness, Accountability, and Transparency. Chicago, IL, USA: ACM, June 2023, pp. 651–666. ISBN: 979-8-4007-0192-4. DOI: 10.1145/3593013.3594033.

[6] A. Plaat, A. Wong, S. Verberne, J. Broekens, and N. Van Stein. "Multi-step reasoning with large language models, a survey". In: ACM Computing Surveys (2025).

[7] D. Long and B. Magerko. "What is AI Literacy? Competencies and Design Considerations". In: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. Honolulu, HI, USA: ACM, Apr. 2020, pp. 1–16. DOI: 10.1145/3313831.3376727.

[8] D. Gao et al. AgentScope: A Flexible yet Robust Multi-Agent Platform. arXiv:2402.14034 [cs]. May 2024. DOI: 10.48550/arXiv.2402.14034.

[9] S. Schömbs, Y. Zhang, J. Goncalves, and W. Johal. "From Conversation to Orchestration: HCI Challenges and Opportunities in Interactive Multi-Agentic Systems". In: Proceedings of the 13th International Conference on Human-Agent Interaction. Yokohama, Japan: ACM, Nov. 2025, pp. 158–168. DOI: 10.1145/3765766.3765795.

[10] A. Waytz, J. Heafner, and N. Epley. "The mind in the machine: Anthropomorphism increases trust in an autonomous vehicle". In: Journal of Experimental Social Psychology 52 (May 2014), pp. 113–117. DOI: 10.1016/j.jesp.2014.01.005.

About

Sarah Schömbs

PhD Student, School of Computing and Information Systems, University of Melbourne

Sarah Schömbs is a final-year PhD student at the University of Melbourne researching human-centred risk communication in Human-Agent Interaction. She examines how users perceive risks communicated through agents, including system uncertainties, with a focus on agentic systems. Sarah was awarded the Diane Lemaire Scholarship (2024) and the Rowden White Scholarship (2023) by the University of Melbourne.

Wafa Johal

Associate Professor, School of Computing and Information Systems, University of Melbourne

Wafa Johal is an associate professor at the University of Melbourne. Her research combines concepts and methods from AI and HCI to create acceptable and useful robots that learn, teach and collaborate using multimodal interaction. She received the prestigious Discovery Early Career Researcher Award (DECRA) from the Australian Research Council (ARC) in 2021 and is now an ARC Future Fellow (2026–2030). She is a member of the ACM/IEEE Human-Robot Interaction Conference Steering Committee.