Robotics has largely solved locomotion and manipulation. The face is still a black screen, or worse, an uncanny copy of ours.
The uncanny valley isn't a law of nature. It's what happens when the only strategy is imitation.
A machine doesn't need our face. It needs its own. Legible, expressive, non-human.
Not an answer. A way to reframe the question.
A biological structure, a cultural object, a social interface, a psychological anchor. We are born wired to detect it. We read faces before we read words. We project identity and emotion onto them constantly.
Hundreds of images collected across cultures, art, science and technology. From ritual masks to emoji, from Ekman's FACS (5,000 catalogued muscle combinations) to the pareidolia that makes us see faces in power outlets and clouds. If a face can emerge from so little, the question isn't how to replicate one. It's how to design a new system of signals.
Cuttlefish shift colour in milliseconds. Insects signal with antenna vibrations invisible to us. Birds display plumage calibrated to specific audiences. Every species evolved a communication system tuned to its own body.
If we're building new bodies, we should draw from this richness to go beyond the poverty of screen-based UI.

Early in the process I realised there was no sense in designing one specific robot. It was a matter of building a system. A system can generate diversity that goes beyond our preconceived imagination.
Each specimen is described by a genome: structural type, surface material, movement class, chromatic system. Combinations produce families, variations produce individuals. This project focuses on heads. The system is designed for the whole body.
Rather than asking a robot to feel joy or pain and display it on command, what if its internal states could trigger physical expressions we can read in our environment? Battery level, CPU load, network latency, task completion. No more black-boxed robots.
The dynamogramme is a live reaction-diffusion simulation shaped by the robot's traits. Six parameters drive the output and open a six-dimensional space of possible expressions, mappable by humans or driven by an AI.
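The internals of the simulation aren't detailed here; as a minimal sketch, a Gray-Scott reaction-diffusion field works well, with six assumed parameters (feed rate, kill rate, two diffusion rates, timestep, iteration count) standing in for the six trait-driven dimensions:

```python
import numpy as np

def dynamogramme(feed, kill, du, dv, dt, steps, size=64, seed=0):
    """Toy Gray-Scott reaction-diffusion field. The six parameters
    sketch a six-dimensional expression space; the actual mapping
    from robot traits to these values is an assumption."""
    rng = np.random.default_rng(seed)
    U = np.ones((size, size))
    V = np.zeros((size, size))
    # Seed a noisy patch in the centre so patterns can grow.
    c = size // 2
    V[c - 5:c + 5, c - 5:c + 5] = rng.random((10, 10))

    def lap(Z):  # 5-point Laplacian with wrap-around edges
        return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0)
                + np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

    for _ in range(steps):
        UVV = U * V * V
        U += dt * (du * lap(U) - UVV + feed * (1 - U))
        V += dt * (dv * lap(V) + UVV - (feed + kill) * V)
    return V  # the pattern the face displays

pattern = dynamogramme(feed=0.055, kill=0.062, du=0.16, dv=0.08,
                       dt=1.0, steps=200)
```

Sliding any one parameter moves the output through a family of spots, stripes and fronts, which is what makes a low-dimensional genome expressive.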
The first three specimens born from this system, each existing both digitally and in physical form.
Temperature as language. Liquid crystal paint on hand-embossed aluminium. Six independent Peltier zones shift colour in ~2 seconds. Each thermal state produces a pattern the viewer reads like an inkblot.
The face doesn't tell you what it feels. It asks you what you see.
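The mapping from telemetry to the six thermal zones isn't specified; one plausible sketch, with a made-up liquid-crystal activation band and hypothetical blend weights:

```python
# Hypothetical mapping from internal telemetry to six Peltier
# setpoints. Liquid crystal paint typically shifts colour across
# a band of a few degrees; the band below is an assumption.
LC_MIN, LC_MAX = 25.0, 35.0  # assumed active band, deg C

def zone_setpoints(battery, cpu_load, latency_ms, task_progress):
    """Return six target temperatures in deg C from normalised
    telemetry (inputs in 0..1, except latency in milliseconds)."""
    signals = [
        battery,
        cpu_load,
        min(latency_ms / 500.0, 1.0),      # saturate at 500 ms
        task_progress,
        (battery + task_progress) / 2.0,   # blended zones give the
        (cpu_load + battery) / 2.0,        # inkblot its texture
    ]
    return [LC_MIN + s * (LC_MAX - LC_MIN) for s in signals]

temps = zone_setpoints(battery=0.8, cpu_load=0.3,
                       latency_ms=120, task_progress=0.5)
```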
Flow as language. Fluorescent liquid circulates through silicone channels driven by peristaltic pumps. Fast reads as agitation, slow as calm. Under UV the fluid glows. A chromatophore system built from medical-grade tubing.
You don't read the shape. You read the rhythm.
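Because the signal is the rhythm rather than the shape, the pump drive can be a single oscillator whose tempo and depth encode arousal. A sketch, with placeholder RPM limits and a made-up agitation scale:

```python
import math

def pump_speed(t, agitation, base_rpm=20.0, max_rpm=120.0):
    """Peristaltic pump speed at time t (seconds).

    Agitation in 0..1 sets both the tempo and the depth of the
    pulse: calm reads as a slow, shallow swell, agitation as a
    fast, deep surge. The RPM range is a placeholder."""
    period = 8.0 - 6.0 * agitation    # 8 s calm -> 2 s agitated
    depth = 0.2 + 0.8 * agitation     # how hard the pulse swings
    phase = math.sin(2 * math.pi * t / period)
    return base_rpm + depth * (max_rpm - base_rpm) * (phase + 1) / 2
```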
Tension as language. 0.5mm nitinol wires contract at 70°C, animating antenna-like structures drawn from insect signalling. Auxetic geometries amplify micro-movements into visible surface deformations. Near-silent. No motors.
The face doesn't move. It breathes.
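Nitinol is driven by heat, not position, so the control problem is thermal. A minimal sketch under assumed gains: a first-order thermal model of the wire plus a proportional PWM controller holding it near the ~70°C transition (all constants are rough placeholders, not measured values):

```python
def sma_step(temp, duty, dt=0.05, ambient=22.0,
             heat_gain=120.0, tau=4.0):
    """One step of a first-order thermal model for a nitinol wire:
    PWM duty heats it, air cools it toward ambient. The gain and
    time constant are assumptions, not measurements."""
    dT = (duty * heat_gain - (temp - ambient)) / tau
    return temp + dT * dt

def drive_to_contraction(target_temp=70.0, steps=400):
    """Proportional controller nudging the wire toward the ~70 C
    austenite transition where it contracts."""
    temp, history = 22.0, []
    for _ in range(steps):
        error = target_temp - temp
        duty = max(0.0, min(1.0, 0.05 * error))  # clamp PWM to [0, 1]
        temp = sma_step(temp, duty)
        history.append(temp)
    return history

trace = drive_to_contraction()
```

A pure proportional loop settles slightly below the target; in practice the residual offset can be absorbed into the setpoint or handled with an integral term.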
The body all three faces share. 3-axis motorised neck (pan, tilt, roll), three iterations, designed to be as compact as possible. 12 PLA-printed parts, 3 MG996R servos, 30 minutes to assemble.
Face tracking drives orientation in real time. The neck follows your gaze, mirrors your movement, or looks away depending on the emotional state. Swap the face, keep the brain.
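The tracking-to-servo mapping can be sketched independently of the detector: take the face centroid in normalised image coordinates, convert it to pan/tilt offsets via the camera's field of view, and smooth so the servos glide instead of chasing detector jitter. Field-of-view values and the smoothing factor below are assumptions:

```python
def neck_angles(face_x, face_y, fov_h=60.0, fov_v=40.0):
    """Map a face centroid in normalised image coordinates
    (0..1, origin top-left) to pan/tilt offsets in degrees.
    The field-of-view values are placeholders for the camera."""
    pan = (face_x - 0.5) * fov_h    # positive = look right
    tilt = (0.5 - face_y) * fov_v   # positive = look up
    return pan, tilt

class SmoothedNeck:
    """Exponential smoothing so the MG996R servos glide toward
    the target instead of jittering with each detection."""
    def __init__(self, alpha=0.2):
        self.alpha = alpha
        self.pan = self.tilt = 0.0

    def update(self, face_x, face_y):
        pan, tilt = neck_angles(face_x, face_y)
        self.pan += self.alpha * (pan - self.pan)
        self.tilt += self.alpha * (tilt - self.tilt)
        return self.pan, self.tilt
```

The "looks away" behaviour then becomes trivial: invert or offset the target angles when the emotional state calls for avoidance.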
A system built step by step, following my needs for interaction and control.
Solo Research, Engineering & Design
ENSCI Les Ateliers graduate. I design expressive interfaces for machines, from shape-memory alloy actuators to real-time expression engines. This project exists because this vision will define how we integrate new forms of life into our world.
Next step: generative AI driving autonomous robot expression. Looking for a team that takes the face as seriously as the body.
Tech Stack
Software: SolidWorks · Unreal · C++ · Python · React · Max MSP
Hardware: Arduino · ESP32 · Silicone Molding · SLA · Nitinol
Full-scale humanoid skeleton in bamboo. Biomimetic robotics exploring natural fibres as structural material.
AI facial manipulation. How generative models reconstruct and falsify human faces. The ethics of synthetic identity.
Graphic novel exploring the boundary between lived experience and constructed narrative.
Paris-Saclay research. Immersive reconstruction of historical sites via photogrammetry and spatial computing.