Nova.
The Sentient Interface.
We architected a context-aware AI ecosystem that embeds a living, breathing assistant into VR training scenarios. Powered by Llama for reasoning and Kokoro for hyper-realistic voice synthesis.
<200ms Response Latency
Context Awareness Engine
AWS Cloud Inference
Human-like Synthetic Voice
VR is Mute.
Celebrity Cruises (via CyberDyme, USA) needed to scale their staff training. Pre-recorded tutorials were rigid and impersonal. They needed a mentor that could answer questions, adapt to the trainee's mistakes, and speak naturally.
Riad Saad (TopCode) architected "Nova": not just a chatbot, but a Context Awareness Engine. The challenge was integrating heavy LLM inference into a mobile VR headset without destroying the framerate.
The Neural Pipeline
We offloaded the brain to the cloud while keeping the senses local.
1. Context Injection
The VR headset doesn't just send audio. Alongside the user's speech, it sends a JSON payload describing the current environment (e.g., "User is holding the fire extinguisher incorrectly").
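In sketch form, a single turn from the headset might look like this; the field names, schema, and endpoint are illustrative assumptions, not Nova's production API:

```python
import base64
import json
import urllib.request

def build_payload(audio_wav: bytes) -> bytes:
    # Hypothetical context snapshot -- field names are illustrative.
    return json.dumps({
        "audio": base64.b64encode(audio_wav).decode("ascii"),  # trainee's spoken question
        "context": {
            "scene": "engine_room_fire_drill",
            "held_object": "fire_extinguisher",
            "held_object_state": "grip_incorrect",
            "last_error": "User is holding the fire extinguisher incorrectly",
        },
    }).encode("utf-8")

def send_turn(audio_wav: bytes) -> bytes:
    # Placeholder endpoint; the response body is the synthesized reply audio.
    req = urllib.request.Request(
        "https://nova.example.com/v1/turn",
        data=build_payload(audio_wav),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```

Keeping the snapshot small and structured is what lets the cloud reason about the scene while the headset does no inference locally.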
2. Cognitive Processing
On AWS, the Llama model analyzes the user's question plus the environmental context. It generates a response that is situationally accurate, not just generic text.
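A minimal sketch of that injection step, assuming an OpenAI-compatible chat endpoint in front of Llama (as served by, e.g., vLLM); the URL, model name, and prompt wording are placeholders:

```python
import json
import urllib.request

SYSTEM_TEMPLATE = """You are Nova, a VR training mentor aboard a cruise ship.
Current scene state (ground truth -- trust it over the user's description):
{context}
Answer in one or two spoken-style sentences and correct mistakes gently."""

def ask_llama(question: str, context: dict) -> str:
    body = json.dumps({
        "model": "meta-llama/Llama-3.1-8B-Instruct",
        "messages": [
            {"role": "system", "content": SYSTEM_TEMPLATE.format(
                context=json.dumps(context, indent=2))},
            {"role": "user", "content": question},
        ],
        "max_tokens": 120,  # short replies keep the voice turn snappy
    }).encode("utf-8")
    req = urllib.request.Request(
        "http://llama.internal:8000/v1/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]
```

Putting the scene JSON in the system prompt rather than the user message keeps the trainee's question clean while still grounding every answer in what they are actually doing.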
3. Neural Synthesis
The text response is piped through Kokoro, generating hyper-realistic audio with emotional intonation that streams back to the headset in under 200 ms.
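A sketch of that last hop using the open-source kokoro package (Kokoro-82M); the voice choice and the send_to_headset transport hook are illustrative:

```python
from kokoro import KPipeline  # open-source Kokoro-82M TTS

pipeline = KPipeline(lang_code="a")  # "a" = American English

def speak(reply_text: str, send_to_headset) -> None:
    # Each yielded segment is (graphemes, phonemes, 24 kHz audio waveform);
    # shipping segments as they finish hides most of the synthesis latency.
    for graphemes, phonemes, audio in pipeline(reply_text, voice="af_heart"):
        send_to_headset(audio, sample_rate=24_000)
```

Because segments stream as they are produced, the trainee hears the first words while the rest of the sentence is still being synthesized, which helps keep the perceived response under the 200 ms target.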
"Nova isn't a chatbot. It's a sentient layer that understands what you are looking at, what you are holding, and how to help you succeed."
Celebrity Cruises x TopCode