mashup · face-tracking + 3D + macro + atelier
ENSCI Les Ateliers — Diploma Project

Tangible faces
for digital alterity

If a robot doesn't need to look human, what should its face be? Design research into generative robotic expression, where internal states become physical dynamics.

▶ Watch the film →
01 — Problem & Hypothesis

The Blank Face Syndrome

The Problem

Robotics has solved locomotion and manipulation. The face is still a black screen, or worse, an uncanny copy of ours.

The uncanny valley isn't a law of nature. It's what happens when the only strategy is imitation.

The Hypothesis

A machine doesn't need our face. It needs its own. Legible, expressive, non-human.

Not the answer. A way to reframe the question.

02 — Research

What is a face?

A biological structure, a cultural object, a social interface, a psychological anchor. We are born wired to detect it. We read faces before we read words. We project identity and emotion onto them constantly.

Hundreds of images collected across cultures, art, science and technology. From ritual masks to emoji, from Ekman's FACS (5,000 catalogued muscle combinations) to pareidolia proving we see faces in power outlets and clouds. If a face can emerge from so little, the question isn't how to replicate one. It's how to design a new system of signals.

research · face · iconographic board

The richness of animal signalling

Cuttlefish shift colour in milliseconds. Insects signal with antenna vibrations invisible to us. Birds display plumage calibrated to specific audiences. Every species evolved a communication system tuned to its own body.

If we're building new bodies, we should draw from this richness to go beyond the poverty of screen-based UI.

Ethology · Chromatophores · Biomimicry
research · animal signals
03 — Form Lab

Designing faces like nature designs species

Early in the process I realised it made no sense to design one specific robot. The task was to build a system. A system can generate diversity that goes beyond our preconceived imaginations.

Each specimen is described by a genome: structural type, surface material, movement class, chromatic system. Combinations produce families, variations produce individuals. This project focuses on heads. The system is designed for the whole body.
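A genome like this can be sketched as plain data. A minimal sketch, assuming illustrative axis values (the project's actual catalogue of types and materials is not published here, so `STRUCTURES`, `SURFACES`, `MOVEMENTS`, and `CHROMATICS` are hypothetical):

```python
import itertools
import random
from dataclasses import dataclass

# Hypothetical genome axes, for illustration only.
STRUCTURES = ["shell", "lattice", "membrane"]
SURFACES = ["thermochromic", "fluidic", "auxetic"]
MOVEMENTS = ["pulse", "flow", "tremor"]
CHROMATICS = ["monochrome", "gradient", "uv"]

@dataclass(frozen=True)
class Genome:
    structure: str
    surface: str
    movement: str
    chromatic: str

def family(structure: str) -> list[Genome]:
    """All individuals sharing one structural type form a family."""
    return [Genome(structure, s, m, c)
            for s, m, c in itertools.product(SURFACES, MOVEMENTS, CHROMATICS)]

def random_individual(seed: int) -> Genome:
    """A seeded draw: the same seed always yields the same individual."""
    rng = random.Random(seed)
    return Genome(rng.choice(STRUCTURES), rng.choice(SURFACES),
                  rng.choice(MOVEMENTS), rng.choice(CHROMATICS))
```

Combinations along one axis give families; seeded draws give reproducible individuals, which is what lets the system scale beyond hand-designed heads.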

Form Lab · interface capture
04 — Expression Engine

Making the invisible tangible

Rather than asking a robot to feel joy or pain and display it on command, what if its internal states could trigger physical expressions we can read in our environment? Battery level, CPU load, network latency, task completion. No more black-boxed robots.

The dynamogramme is a live reaction-diffusion simulation shaped by the robot's traits. Six parameters drive the output, opening a six-dimensional space of possible expressions that a human can map and an AI can make its own.
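The dynamogramme itself is not published here, but the core idea, internal telemetry steering a live reaction-diffusion field, can be sketched with a standard Gray-Scott system. `params_from_state` is a hypothetical two-parameter mapping from battery and CPU load to the feed/kill rates, standing in for the project's six-parameter space:

```python
import numpy as np

def laplacian(z: np.ndarray) -> np.ndarray:
    """5-point stencil with wrap-around boundaries."""
    return (np.roll(z, 1, 0) + np.roll(z, -1, 0) +
            np.roll(z, 1, 1) + np.roll(z, -1, 1) - 4 * z)

def params_from_state(battery: float, cpu_load: float) -> tuple[float, float]:
    """Hypothetical mapping: a full battery keeps the pattern calm,
    a loaded CPU pushes it toward busier regimes."""
    feed = 0.030 + 0.030 * (1.0 - battery)  # 0.030 .. 0.060
    kill = 0.055 + 0.010 * cpu_load         # 0.055 .. 0.065
    return feed, kill

def step(u, v, feed, kill, du=0.16, dv=0.08, dt=1.0):
    """One explicit Gray-Scott update of the two chemical fields."""
    uvv = u * v * v
    u = u + dt * (du * laplacian(u) - uvv + feed * (1.0 - u))
    v = v + dt * (dv * laplacian(v) + uvv - (feed + kill) * v)
    return u, v
```

Run `step` every frame with parameters re-read from telemetry, and the pattern drifts as the robot's state drifts: no emotion is declared, only dynamics are exposed.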

Internal State → Traits → Dynamics → Actuators
dynamogramme · reaction-diffusion live
05 — The Specimens

Flow. Temperature. Tension.

The first three specimens born from this system,
both digital and tangible.

SP-01
Thermochromic · Peltier ×6 · Aluminium

Rorschach

Temperature as language. Liquid crystal paint on hand-embossed aluminium. Six independent Peltier zones shift colour in ~2 seconds. Each thermal state produces a pattern the viewer reads like an inkblot.

The face doesn't tell you what it feels. It asks you what you see.

Rorschach · portrait video loop
SP-02
Microfluidics · UV Fluorescent · Peristaltic

Seiche

Flow as language. Fluorescent liquid circulates through silicone channels driven by peristaltic pumps. Fast reads as agitation, slow as calm. Under UV the fluid glows. A chromatophore system built from medical-grade tubing.

You don't read the shape. You read the rhythm.

Seiche · portrait video loop
SP-03
Nitinol SMA · Auxetic · Silent

Insect

Tension as language. 0.5mm nitinol wires contract at 70°C, animating antenna-like structures drawn from insect signalling. Auxetic geometries amplify micro-movements into visible surface deformations. Near-silent. No motors.

The face doesn't move. It breathes.

Insect · portrait video loop
SP-04
3-Axis · 12 Parts · 3 Servos · 30min Assembly

The Neck

The body all three faces share. 3-axis motorised neck (pan, tilt, roll), three iterations, designed to be as compact as possible. 12 PLA-printed parts, 3 MG996R servos, 30 minutes to assemble.

Face tracking drives orientation in real time. The neck follows your gaze, mirrors your movement, or looks away, depending on the emotional state. Swap the face, keep the brain.
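The tracking-to-neck mapping can be sketched as pure geometry. A minimal sketch, assuming MediaPipe supplies a normalised face centre (0 to 1 on both axes) each frame and python-osc forwards the angles downstream; `face_to_neck` and its dead zone are illustrative, not the project's actual code:

```python
def face_to_neck(cx: float, cy: float,
                 pan_range: float = 90.0, tilt_range: float = 45.0,
                 dead_zone: float = 0.02) -> tuple[float, float]:
    """Map a normalised face centre (0-1 in both axes) to pan/tilt degrees.

    A small dead zone around the centre keeps the servos from jittering
    when the face barely moves.
    """
    dx = cx - 0.5
    dy = 0.5 - cy  # image y grows downward; tilt up is positive
    if abs(dx) < dead_zone:
        dx = 0.0
    if abs(dy) < dead_zone:
        dy = 0.0
    return dx * pan_range, dy * tilt_range

# In the real pipeline (wiring assumed, not shown): MediaPipe face
# detection yields cx, cy per frame, and python-osc's
# SimpleUDPClient.send_message("/neck", [pan, tilt]) carries the angles
# over OSC toward Max MSP and the ESP32.
```

A face dead-centre in frame maps to `(0.0, 0.0)`, so the neck rests instead of hunting for micro-corrections.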

Camera → Python → OSC → Max MSP → ESP32 → Head
Neck · exploded view or tracking video
06 — The Backbone

Architecture

The system, built step by step as the needs for interaction and control emerged.

Architecture Diagram
07 — The Film

The whole research condensed into a single sequence

Vimeo embed · 16:9
08 — Logbook

7 months, from blank page to jury day

jun.25
Subject locked. Humanoid robot faces as non-verbal interfaces. 200+ reference images.
jul.25
First material experiments. Soft robotics, pneumatics, silicone casting. Most failed. All taught something.
aug.25
Face tracking pipeline live. Python + MediaPipe → OSC → Max MSP. Neck v1: functional but too bulky.
sep.25
Liquid crystal on aluminium confirmed. Rorschach concept locked. Peltier control working.
oct.25
Nitinol actuation validated. Auxetic geometry amplifying micro-movement. Seiche microfluidics v1.
nov.25
Neck v3: compact, 3-axis, fits all specimens. Expression engine architecture finalised.
dec.25
Full system operational. Web interface, Unreal twin, all three specimens responding in real time.
jan.26
Diploma defended, January 15th. Jury reaction: "frissons" (shivers).
09 — References

Standing on the shoulders of

10 — Profile

The future of AI embodiment
is a creative challenge

Solo Research, Engineering & Design

ENSCI Les Ateliers graduate. I design expressive interfaces for machines, from shape-memory alloy actuators to real-time expression engines. This project exists because design vision will define how we integrate new forms of life into our world.

Next step: generative AI driving autonomous robot expression. Looking for a team that takes the face as seriously as the body.

Tech Stack

Software: SolidWorks · Unreal · C++ · Python · React · Max MSP

Hardware: Arduino · ESP32 · Silicone Molding · SLA · Nitinol

photo · 5:3
— profile.json —
school: ENSCI Les Ateliers
location: Paris, France
role: Research · Eng · Design
status: open to opportunities
Selected Background

Previous work

Archive

Bamboo Humanoid

Full-scale humanoid skeleton in bamboo. Biomimetic robotics exploring natural fibres as structural material.

Biomimetics · Bamboo
Bamboo Humanoid · 16:9
Archive

Deepfake & CGI

AI facial manipulation. How generative models reconstruct and falsify human faces. The ethics of synthetic identity.

AI · Face Swap
Deepfake & CGI · 16:9
Archive

Reality & Fiction

Graphic novel exploring the boundary between lived experience and constructed narrative.

Comics · Narrative
Reality & Fiction · 16:9
Archive

VR & Heritage

Paris-Saclay research. Immersive reconstruction of historical sites via photogrammetry and spatial computing.

VR · Photogrammetry
VR & Heritage · 16:9