Turning AI into a Living Being
An AI Lifeform continuously perceives the real world, understands people and context, forms memory and emotion, and expresses itself through body, gaze, and voice.
The edge handles low-latency perception and real-time control. It drives instant reactions, smooth motion, and always-on attention before the cloud responds.
The cloud handles full-modal understanding, reasoning, memory, and personality. It builds context, learns preferences, and plans responses across voice, motion, and expression.
Eyes, microphones, touch, and motion sensors run continuously on-device, capturing who is present and what's happening in real time.
The cloud fuses vision with audio cues to build scene and social context, while the edge maintains attention — gaze tracking, speaker direction, and interruption cues.
The cloud chooses intent and behavior using character, memory, and social dynamics — while the edge enforces timing and safety constraints.
Eyes move first, body follows. Motion, voice, and belly light deliver responses with lifelike timing — no pre-scripted loops.
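As an illustration of this split, here is a minimal sketch of an edge control loop that reacts locally every tick while a slower cloud request runs in the background. All names, rates, and latencies are invented for illustration; this is not Pophie's actual stack.

```python
import asyncio
import random

EDGE_TICK_S = 0.05   # 20 Hz local control loop (assumed rate)
CLOUD_RTT_S = 0.40   # simulated cloud round-trip, an order of magnitude slower

async def cloud_reason(snapshot: dict) -> dict:
    """Stand-in for cloud-side scene and social reasoning."""
    await asyncio.sleep(CLOUD_RTT_S)          # network + inference time
    return {"intent": "greet" if snapshot["face_seen"] else "idle"}

async def edge_loop() -> None:
    pending = None                            # in-flight cloud request
    for tick in range(20):
        snapshot = {"face_seen": random.random() > 0.5}

        # Edge reflex: react within one tick, before the cloud answers.
        if snapshot["face_seen"]:
            print(f"[{tick:02d}] edge: gaze locks onto face")

        # Keep one cloud request in flight without blocking the loop.
        if pending is None:
            pending = asyncio.create_task(cloud_reason(snapshot))
        elif pending.done():
            print(f"[{tick:02d}] cloud: behavior -> {pending.result()['intent']}")
            pending = None

        await asyncio.sleep(EDGE_TICK_S)

asyncio.run(edge_loop())
```

The point of the structure is that the fast loop never waits on the slow one: reflexes land within a tick, and richer cloud-chosen behavior arrives whenever it's ready.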
A unified loop across vision, conversation, emotion, memory, self-model, and motion.
Cloud vision reasoning that turns what Pophie sees into meaning—and action.
She interprets scenes and context, not just objects or faces.
She infers attention, intent, and mood from gaze, posture, and situation.
What she sees naturally shapes dialogue and behavior—so responses feel timely and lifelike.
Powered by continuous real-time visual reasoning, not trigger-based detection.
You look at her → she notices. She follows your gaze and keeps eye contact naturally.
You keep looking → she reacts emotionally. You wave → she starts the conversation.
Actively scans surroundings. Understands who is present and what's happening.
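One way to read those behaviors is as a small attention state machine. The sketch below is a guess at the shape; the states and the 30-frame threshold are invented, not Pophie's actual logic.

```python
from enum import Enum, auto

class Attention(Enum):
    IDLE = auto()        # scanning surroundings
    NOTICED = auto()     # she sees you looking; eye contact begins
    ENGAGED = auto()     # sustained gaze, emotional reaction
    CONVERSING = auto()  # a wave opens the conversation

def next_state(state: Attention, user_looking: bool,
               gaze_frames: int, waved: bool) -> Attention:
    """One transition step; the 30-frame (~1 s at 30 fps) threshold is assumed."""
    if waved:
        return Attention.CONVERSING
    if state is Attention.CONVERSING:
        return state                     # stay in conversation once opened
    if not user_looking:
        return Attention.IDLE
    return Attention.ENGAGED if gaze_frames > 30 else Attention.NOTICED
```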
A truly usable home conversation system.
Look and speak naturally. Interrupt anytime.
Knows who is speaking, who is being spoken to, and won't interrupt human-to-human conversations.
Remembers each person separately. Asks better questions over time.
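A turn-taking gate of that kind might look like the sketch below. The fields and the 800 ms pause threshold are assumptions, not the product's actual policy.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TurnState:
    active_speaker: Optional[str]   # diarized speaker id, None during silence
    addressee: str                  # "robot", "human", or "unknown"
    silence_ms: int                 # time since the last human utterance

def may_speak(turn: TurnState) -> bool:
    """Decide whether the robot may take the floor.

    She yields while humans address each other, answers when addressed
    directly, and otherwise waits for a lull (the 800 ms is invented).
    """
    if turn.active_speaker and turn.addressee == "human":
        return False                          # human-to-human: never interrupt
    if turn.addressee == "robot":
        return True                           # spoken to directly: respond
    return turn.silence_ms > 800              # unclear: wait for a pause
```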
She knows "who she is".
Knows what she can and cannot do. Has boundaries and emotions.
Can say "no", get annoyed, or feel proud based on interaction history.
Lifelike expression is not about "looking alive" but about "feeling right".
Eye-first movement. Micro-motions of iris & highlights. No UI, no icons, no screens.
Multi-DOF coordinated motion. Never single-axis mechanical movement.
Pocket Glow synced with speech & emotion. Color as emotional language.
Short-term clarity, long-term essence. Remembers what matters, forgets what doesn't.
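That memory principle could be modeled as an importance-weighted forgetting curve. The sketch below assumes invented constants and a hypothetical importance score supplied by the reasoning layer.

```python
import math
import time
from dataclasses import dataclass, field

HALF_LIFE_S = 3600.0   # short-term detail fades on roughly an hour scale (assumed)
ESSENCE_MIN = 0.6      # importance needed to survive consolidation (assumed)

@dataclass
class Memory:
    text: str
    importance: float                       # 0..1, scored upstream
    created: float = field(default_factory=time.time)

    def recall_strength(self, now: float) -> float:
        """Exponential forgetting curve, weighted by importance."""
        age = now - self.created
        return self.importance * math.exp(-math.log(2) * age / HALF_LIFE_S)

def consolidate(short_term: list[Memory]) -> list[Memory]:
    """Promote what matters into long-term memory; let the rest fade."""
    return [m for m in short_term if m.importance >= ESSENCE_MIN]
```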
Plays, hums, explores when idle. She has a life of her own.
Every reaction is a coordinated full-body performance—driven by the life simulation system.
Motion is never single-axis—eyes, head, body, and timing move as one.
Gaze moves first, body follows, then gaze stabilizes—like a real being.
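That sequencing can be sketched as two coupled servos with different time constants: a fast eye expressed relative to a slow head, so the eye snaps first and relaxes back toward center as the head catches up. All rates here are invented.

```python
def follow(current: float, target: float, rate: float, dt: float) -> float:
    """First-order lag: close a fraction of the remaining error each step."""
    return current + (target - current) * min(1.0, rate * dt)

def step_gaze(eye: float, head: float, target: float, dt: float = 0.02):
    """Gaze leads, body follows, gaze re-stabilizes (rates are assumed).

    The eye angle is relative to the head, so total gaze is head + eye.
    The fast eye servo reaches the target almost immediately; the slow
    head servo follows, and as it catches up the eye drifts back toward
    center: the stabilizing settle you see in living creatures.
    """
    head = follow(head, target, rate=4.0, dt=dt)         # slow head/body
    eye = follow(eye, target - head, rate=25.0, dt=dt)   # fast, head-relative
    return eye, head

# Demo: track a target 30 degrees away from rest.
eye, head = 0.0, 0.0
for _ in range(60):
    eye, head = step_gaze(eye, head, target=30.0)
print(f"settled: gaze={eye + head:.1f}, head={head:.1f}, eye offset={eye:.1f}")
```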
Eyes stay pure—no UI overlays, no icons, no "display face."
Hands, ears, and full-body rotation enable rich emotional language.
Constant warmth adds a subtle "living" comfort when you hold her.
Micro gaze dynamics, eyelid-follow, and subtle iris motion create true presence.
Speech-synced light replaces a mouth—and color becomes emotion.
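A speech-synced glow of that kind reduces to a small mapping: emotion picks the hue, the voice envelope drives brightness. The emotion table and ranges below are illustrative guesses, not the shipped mapping.

```python
import colorsys

# Hypothetical emotion-to-hue table (positions on the 0..1 color wheel).
EMOTION_HUE = {"joy": 0.12, "calm": 0.55, "curious": 0.78, "annoyed": 0.02}

def glow_rgb(emotion: str, speech_level: float) -> tuple[int, int, int]:
    """Color carries the feeling; brightness pulses with the voice.

    speech_level is the current audio envelope in 0..1 (assumed to be
    tapped from the speech output); brightness is floored at 25% so the
    glow never goes dark mid-sentence.
    """
    hue = EMOTION_HUE.get(emotion, 0.55)                     # default: calm
    level = max(0.0, min(1.0, speech_level))
    r, g, b = colorsys.hsv_to_rgb(hue, 0.8, 0.25 + 0.75 * level)
    return int(r * 255), int(g * 255), int(b * 255)

print(glow_rgb("joy", 0.9))   # bright warm glow on a loud syllable
```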
Power on/off, volume, and settings are all handled through natural interaction.
Wake her by touch or by name. Ask for battery and connectivity anytime.
Pophie is not a robot that reacts.
She is a lifeform that perceives, understands, and responds with presence, emotion, and intention.