The Empathic Simulation.
We engineered a responsive training environment that doesn't just listen, it watches. By fusing Eye-Tracking Biometrics with Voice Emotion AI, we quantify empathy.
7 Emotional Tones
60Hz Gaze Tracking
1M+ Phonemes Trained
AWS Secure Backend
Hospitality is Invisible.
Celebrity Cruises (via CyberDyme, USA) faced a unique problem: How do you train "empathy"? Standard roleplay is inconsistent, and video tutorials are passive.
They needed a way to objectively measure the intangible: eye contact, tone of voice, and micro-expressions. Riad Saad (TopCode) was tasked with architecting a system that could digitize these human signals into actionable data.
The "Behavioral Engine"
We built a feedback loop that trains the subconscious.
1. Gaze Telemetry
Using headset eye-tracking, we measure "Time to Contact." Did the trainee look the guest in the eye? Did they notice the dirty glass on the table? We generate heatmaps of attention.
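The telemetry described above reduces to two simple computations over the gaze stream: the delay until the gaze first enters a region of interest, and a 2D histogram of where attention dwelt. The sketch below is illustrative only; the function names, the normalized-coordinate convention, and the `(timestamp, x, y)` log format are assumptions, not the production pipeline.

```python
import numpy as np

def time_to_contact(gaze_log, target_box, start_t):
    """Seconds from scenario start until the gaze first lands in target_box.

    gaze_log:   list of (timestamp, x, y) samples, e.g. a 60Hz headset stream.
    target_box: (x_min, y_min, x_max, y_max) in normalized [0, 1] coordinates.
    """
    x0, y0, x1, y1 = target_box
    for t, x, y in gaze_log:
        if x0 <= x <= x1 and y0 <= y <= y1:
            return t - start_t
    return None  # the trainee never looked at the target

def attention_heatmap(gaze_log, bins=32):
    """Accumulate gaze samples into a normalized 2D histogram (a heatmap)."""
    xs = [x for _, x, _ in gaze_log]
    ys = [y for _, _, y in gaze_log]
    heat, _, _ = np.histogram2d(xs, ys, bins=bins, range=[[0, 1], [0, 1]])
    return heat / max(heat.sum(), 1)  # probability map over screen regions
```

A "missed the dirty glass" event is then just `time_to_contact(...) is None` for that object's bounding box.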
2. Phoneme & Tone Analysis
We trained a custom PyTorch model on 1M+ voice segments. It doesn't just check what you said, but how you said it—detecting frustration, hesitation, or warmth.
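A tone classifier of this kind typically maps per-frame audio features to a distribution over emotional tones (the page cites seven). The custom model itself is not public, so the following is a minimal PyTorch sketch under stated assumptions: the `TONES` labels, the GRU architecture, and the 40-dimensional feature frames are all hypothetical stand-ins.

```python
import torch
import torch.nn as nn

# Hypothetical labels for the seven tones; the real taxonomy is not public.
TONES = ["warmth", "calm", "neutral", "hesitation",
         "frustration", "anger", "urgency"]

class ToneClassifier(nn.Module):
    """Sketch: a GRU over per-frame audio features -> 7 tone logits."""
    def __init__(self, n_features=40, hidden=64, n_tones=len(TONES)):
        super().__init__()
        self.rnn = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_tones)

    def forward(self, frames):       # frames: (batch, time, n_features)
        _, h = self.rnn(frames)      # h: (num_layers, batch, hidden)
        return self.head(h[-1])      # logits: (batch, n_tones)

model = ToneClassifier()
features = torch.randn(1, 120, 40)  # stand-in for ~1.2 s of MFCC-style frames
probs = torch.softmax(model(features), dim=-1)
print(TONES[int(probs.argmax())])   # the dominant detected tone
```

In production the interesting part is upstream of this: turning raw microphone audio into well-aligned phoneme-level features, which is where the 1M+ training segments matter.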
3. Adaptive Scenarios
The AI Guest reacts to your biometrics. If you avoid eye contact, the guest becomes annoyed. If you speak calmly, the guest de-escalates. It's a living simulation.
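The adaptive loop above can be sketched as a small state update driven by the two biometric signals. Everything here is an assumption for illustration: the `patience` scale, the 0.5 neutral points, and the 0.1 gain are invented parameters, not the tuned values from the simulation.

```python
from dataclasses import dataclass

@dataclass
class GuestState:
    patience: float = 0.7  # hypothetical scale: 0 = furious, 1 = delighted

    def mood(self):
        if self.patience < 0.3:
            return "annoyed"
        if self.patience > 0.8:
            return "pleased"
        return "neutral"

def update_guest(state, eye_contact_ratio, voice_calm_score, dt=1.0):
    """Nudge the virtual guest's patience from live biometric signals.

    eye_contact_ratio: fraction of the last window spent in eye contact.
    voice_calm_score:  0..1 calmness estimate from the tone model.
    """
    # Sustained eye contact and a calm tone de-escalate; avoidance escalates.
    delta = 0.1 * (eye_contact_ratio - 0.5) + 0.1 * (voice_calm_score - 0.5)
    state.patience = min(1.0, max(0.0, state.patience + delta * dt))
    return state.mood()
```

Run once per feedback tick, this is enough to produce the described behavior: a trainee who avoids eye contact watches the guest drift toward "annoyed", and a calm voice pulls the guest back.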
"This ecosystem was engineered in strategic partnership with CyberDyme (USA), with Riad Saad serving as Lead Architect for the AI & VR implementation."
Full-Cycle Execution by TopCode Founder