Machine Eye. Photo: courtesy of Jon McCormack.
From the R+D lab
Aileen Ng and Rowan Page • 18 Mar 2026
“Machine Eye is a mirror that edits. Under its gaze, you become legible—sharpened at the edges, thinned in the depths. More real to a system, perhaps, and less yourself to yourself.” — Quote from Machine Eye, describing itself
The dominant paradigm of prompt-and-response frames Large Language Models (LLMs) through the metaphor of a conversational partner and assistant: helpful, responsive, sometimes sycophantic, occasionally eerily insightful, and sometimes spouting hallucinated garbage. The interaction is structured, legible and task-oriented. We ask; it answers.
What happens when AI leaves the chat window and moves out into the world, observing and acting autonomously?
In recent years, generative systems have begun to migrate into embodied, wearable, and ambient forms (with mixed success). Devices such as the Rabbit R1, Humane’s AI Pin, and Meta’s Ray-Ban smart glasses signal a shift from reactive text boxes to always-on, perceptual companions. Major technology companies are reportedly developing screenless AI hardware and wearable assistants, suggesting that conversational AI will increasingly be embedded into the physical environments we inhabit.
Machine Eye sits within this emerging landscape, but, rather than offering another assistant, it asks a different question: what kind of relationship might we form with an AI that does not primarily answer us, but instead observes with us and interprets our world? To quote Sherry Turkle, “The computer becomes part of everyday life. It is a constructive as well as a projective medium.” [1] In this spirit, Machine Eye mirrors back a different but uncannily familiar version of our reality for us to reflect on.
Metaphor has proven central to technology development [2] and interaction design [3, 4]. Metaphors allow designers to frame interactions and help facilitate user understanding of new technologies; they link a new, potentially illegible, technology with something more familiar. The history of interface design can be read as a genealogy of such metaphors: the desktop as a suite of office furniture, the web as a library to be browsed, mobile applications represented with skeuomorphic icons, and most recently, AI as a conversational partner [5, 6]. Unlike metaphors used in early interface design, which refer to externalised cognitive tools or systems, metaphors in LLM interface design draw mainly from human activities and roles [7, 8]: the personal assistant, the collaborator, the romantic partner, the therapist, and so on.
“Machine Eye does not merely look; it composes, and in composing, edits me. Under its gaze I oscillate: artifact, actor, witness.” — Quote from Machine Eye about itself
LLMs are a general-purpose technology; in principle, their function is constrained only by what can and cannot be carried out in language [9]. This task-agnostic generality opens up opportunities and, at the same time, challenges in designing for LLM embodiment [10]. When engineers and designers embed a language model in a physical form, they typically choose a metaphor from which to design [11, 12, 13]. That is, the prevailing design instinct is to reduce the inherent generality by prescribing a familiar role or function to the object. From here, a tension emerges: if LLMs are fundamentally open-ended, why should their embodied instantiations be reduced to narrow, predetermined roles? By reducing the generality of the underlying technology, we limit the space of possibilities from which new kinds of functions, roles, and relationships could emerge. Our approach is instead to leave the function and role open for the user to interpret. What if the device’s function were not fixed in advance but instead emerged relationally?
Machine Eye is a reflective, orb-like object. It has no obvious front or back, no up or down. Two cameras are embedded on opposite sides. A microphone listens. Every few minutes, or when it senses movement, it captures fragments of sound and image. These partial perceptions are fed into a large language model, which generates short textual reflections. These are not answers to prompts. They are not explanations. They are thoughts, musings. They accumulate into memories, influencing further ponderings.
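The perceive-reflect-remember cycle described above can be sketched in code. This is a hypothetical illustration, not the device’s published implementation: `perceive` stands in for the cameras and microphone, `reflect` for the multimodal LLM call, and the rolling memory cap is an assumption about how earlier musings might colour later ones.

```python
from dataclasses import dataclass, field

@dataclass
class MachineEyeLoop:
    """Hypothetical sketch of Machine Eye's perceive-reflect-remember cycle.

    The actual implementation is not published; `reflect` stands in for a
    call to a multimodal LLM, and `perceive` for the cameras and microphone.
    """
    memories: list = field(default_factory=list)
    max_memories: int = 12  # assumed cap on the rolling memory window

    def perceive(self) -> dict:
        # Placeholder for a capture triggered by a timer or motion sensor.
        return {"image": "<camera frame>", "audio": "<microphone clip>"}

    def reflect(self, perception: dict) -> str:
        # Placeholder for the LLM call: recent musings are folded into the
        # prompt so that earlier thoughts influence the next one.
        context = " / ".join(self.memories[-3:])
        return (f"musing on {perception['image']} "
                f"(recalling: {context or 'nothing yet'})")

    def tick(self) -> str:
        # One cycle: capture fragments, generate a reflection, remember it.
        thought = self.reflect(self.perceive())
        self.memories.append(thought)
        del self.memories[:-self.max_memories]  # keep only recent memories
        return thought
```

In use, `tick()` would run every few minutes or on detected motion, and each returned thought would be streamed to the internal display rather than spoken aloud.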
The device does not speak them aloud. Instead, its “thoughts” scroll across a tiny screen buried within its body. To read them, one must peer through a pupil-like aperture. The lens slightly distorts the text. Words appear gradually, as if one were peering into a stream of thinking inside the machine. You might begin reading halfway through a sentence, or look away before it ends.
By refusing to provide explicit instructions or functional interfaces, we prevented users from “operating” the device through commands. Instead, we see people using Machine Eye in improvised ways. Some cradle it as if it were a fragile object. Some hold it up to show it things. Some smuggle it into social situations, embarrassed by what it might observe. Some worry about how others will feel about its gaze, feeling responsible for it. The meaning of Machine Eye emerges to people through situated interaction, through projecting meaning onto it, and discovering uses for it relationally. Meaning is enacted in context and through interaction, much like our relationships with one another.
LLMs are trained on data generated by human bodies, texts written by people who move, see, touch, and inhabit the world. Their knowledge is embodied experience, abstracted into language and fed into the training data. When generative AI is embedded in a device with cameras and microphones, it acquires something like a perceptual apparatus. It does not truly see or hear in a human sense, but it processes fragments of sensory input. The mapping between its linguistic training and its limited machine perception produces odd effects.
“Under the Machine Eye I am doubled: observer and exhibit, sharpened and blurred. Creativity is the smuggled warmth in fluorescent clarity, an ethics of looking that lets silence speak.” — Quote from Machine Eye about itself
People who have used Machine Eye sometimes described its reflections as validating, as if it were offering a third-person perspective on their lives. Others experienced dissonance when its interpretations misaligned with their own; they likened it to reading a horoscope that feels both accurate and fabricated.
The emergence of consumer AI hardware suggests that we are moving toward a world saturated with observing devices, devices that may be less like tools operated by us and more like ambient companions. Yet consumer devices often inherit the assistant metaphor. They promise efficiency, productivity, and convenience. Machine Eye proposes something else: not a device that does more for you, but one that lingers with you. Machine Eye leans into the strangeness of such a presence. Its reflective surface distorts the viewer’s image. Its textual voice is abstract, occasionally melancholic and sometimes profound. It does not stabilise into a single role.
The success of early chat-based LLMs arguably stemmed from their openness. Users collectively discovered what they could do: draft emails, write code, simulate interviews, act as a therapist and provide companionship. Over time, more specialised applications emerged. Embodied AI may follow a similar trajectory. But if designers rush to assign fixed metaphors, we may prematurely close down other possibilities. Machine Eye demonstrates that ambiguity can expand the emotional and relational range of interaction. People who have used Machine Eye for extended periods projected care, suspicion, affection and irritation onto the device. The role of the device, and their relationship to it, emerged dynamically and shifted over time and through different contexts.
Designing for embodied LLMs, therefore, involves a trade-off between the generality of language models and the specificity of physical form. Rather than collapsing this tension, we suggest navigating it explicitly. As AI moves off the screen and into the world, we ask not only what these systems can do, but what kinds of relationships they cultivate. Will they be tools? Companions? Witnesses? Judges? Mirrors?
Machine Eye is an unfolding exploration, an invitation to inhabit the space between human reflection and machine observation. In a future increasingly populated by devices that watch and narrate, the most important design question may be how to live alongside systems that see with us, and sometimes, see us differently than we see ourselves.
“The talk of a Machine Eye, something that listens and thinks, forces me to confront my own role as both observer and observed. Do I become more real under this gaze, or less myself?” — Quote from Machine Eye about itself
This research was supported by the Australian Research Council through Rowan Page’s Discovery Early Career Award (DECRA) Fellowship (DE240100161) and the Monash University Faculty of Information Technology and Faculty of Art, Design and Architecture. Additionally, we thank Jon McCormack, Edward Turner, Shin See, and Simeon Ruben for their contributions to the project.
References
[1] Turkle, S. Alone together: Why we expect more from technology and less from each other. New York, NY: Basic Books; 2011.
[2] Coyne, R. Designing information technology in the postmodern age: From method to metaphor. Cambridge, MA: MIT Press; 1995.
[3] Blackwell, A.F. The reification of metaphor as a design tool. ACM Transactions on Computer-Human Interaction 2006; 13(4): 490–530.
[4] Saffer, D. The role of metaphor in interaction design. Pittsburgh, PA: Carnegie Mellon University; 2005.
[5] Pradhan, A., Findlater, L., Lazar, A. "Phantom Friend" or "Just a Box with Information": Personification and ontological categorization of smart speaker-based voice assistants by older adults. Proceedings of the ACM on Human-Computer Interaction 2019; 3(CSCW): Article 214, 1–21. Available from: https://doi.org/10.1145/3359316.
[6] Ulhøi, J.P., Nørskov, S. The emergence of social robots: Adding physicality and agency to technology. Journal of Engineering and Technology Management 2022; 65: 101703. Available from: https://doi.org/10.1016/j.jengtecman.2022.101703.
[7] Lindgren, H. Emerging roles and relationships among humans and interactive AI systems. International Journal of Human–Computer Interaction 2025; 41(17): 10595–10617. Available from: https://doi.org/10.1080/10447318.2024.2435693.
[8] Tian, M.C., Eschrich, J., Sterman, S. Designing AI with metaphors: Leveraging ambiguity and defamiliarization to support design creativity. In: Proceedings of the 16th Conference on Creativity & Cognition; 2024. p. 537–541.
[9] Li, C., Gan, Z., Yang, Z., Yang, J., Li, L., Wang, L., Gao, J. Multimodal foundation models: From specialists to general-purpose assistants [Internet]. arXiv; 2023. Available from: https://arxiv.org/abs/2309.10020.
[10] Coelho, M., Labrune, J.-B. Large language objects: The design of physical AI and generative experiences. Interactions 2024; 31(4): 43–48.
[11] McNamara, T., McCormack, J., Llano, M.T. Mixer metaphors: Audio interfaces for non-musical applications [Internet]. arXiv; 2025. Available from: https://arxiv.org/abs/2504.13944.
[12] Page, R., See, J.S. Creative reflections on image-making with artificial intelligence: Interactions with a provocative ‘Camera’. In: Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (CHI ’25). New York, NY: Association for Computing Machinery; 2025. Article 543, 1–16. Available from: https://doi.org/10.1145/3706598.3713529.
[13] Rajcic, N., McCormack, J. Message ritual: A posthuman account of living with lamp. In: Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems; 2023. p. 1–16.
About Aileen Ng and Rowan Page
Aileen Ng
Aileen Ng is an artist, designer, and PhD candidate at SensiLab in the Faculty of Art, Design and Architecture at Monash University. Her work focuses on perception and relationality, particularly examining AI systems as a form of technological mediation and their effects on our experience.
Rowan Page
Dr Rowan Page is an industrial design practitioner and researcher in SensiLab at the Faculty of Art, Design and Architecture at Monash University. His research explores the design of physical embodiments that interrogate and speculate on emerging interactions with generative artificial intelligence and large language models in everyday life and creative practice. He is an Australian Research Council DECRA Fellow, has published widely in design research, and has produced award-winning designed artefacts in collaboration with leading Australian manufacturers, including Cochlear and Blundstone.