Meet Phoenix, our frontier Replica Model

Unparalleled realism, powered by unparalleled research

Phoenix-2
Our groundbreaking Phoenix-2 model generates exceptionally realistic digital twins.

Developed in-house by our team, it leverages new audio- and text-driven 3D models, combining volumetric rendering techniques from 3D Gaussian Splatting with 2D Generative Adversarial Networks (GANs) to create lifelike replicas from short video clips of a user.
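
To give a rough sense of how such a hybrid pipeline can be wired together, here is a toy sketch that pairs a crude Gaussian-splat style rasterizer with a small 2D refinement network; the math, module names, and shapes are illustrative assumptions, not the actual Phoenix-2 implementation.

```python
# Illustrative sketch only: a toy two-stage pipeline pairing a crude
# Gaussian-splat style rasterizer with a small 2D refinement network.
# None of this reflects the real Phoenix-2 implementation.
import numpy as np
import torch
import torch.nn as nn


def splat_gaussians(means, colors, scales, opacities, size=64):
    """Alpha-composite isotropic 2D Gaussians onto an image (toy rasterizer)."""
    ys, xs = np.mgrid[0:size, 0:size].astype(np.float32)
    image = np.zeros((size, size, 3), dtype=np.float32)
    transmittance = np.ones((size, size, 1), dtype=np.float32)
    for mean, color, scale, opacity in zip(means, colors, scales, opacities):
        d2 = (xs - mean[0]) ** 2 + (ys - mean[1]) ** 2
        alpha = opacity * np.exp(-d2 / (2.0 * scale ** 2))[..., None]
        image += transmittance * alpha * color           # front-to-back compositing
        transmittance *= 1.0 - alpha
    return image


class Refiner(nn.Module):
    """Tiny convolutional generator standing in for the 2D GAN refinement stage."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)


# Coarse volumetric-style render, then GAN-style 2D refinement.
rng = np.random.default_rng(0)
coarse = splat_gaussians(
    means=rng.uniform(0, 64, (50, 2)),
    colors=rng.uniform(0, 1, (50, 3)).astype(np.float32),
    scales=rng.uniform(2, 6, 50),
    opacities=rng.uniform(0.2, 0.8, 50),
)
refined = Refiner()(torch.from_numpy(coarse).permute(2, 0, 1).unsqueeze(0))
print(coarse.shape, tuple(refined.shape))  # (64, 64, 3) (1, 3, 64, 64)
```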

Powering AI video products

The Phoenix model powers our products via the replica API.

Video Generation

Give users the ability to generate videos from a script with AI digital twins.

Conversational Video

Give users the ability to have real-time conversations with AI digital twins.
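
For illustration, script-to-video generation with a trained replica can reduce to a single HTTP call; the endpoint, header, and field names in this sketch are placeholders rather than the documented Replica API contract.

```python
# Hypothetical script-to-video call for a trained replica. The endpoint URL,
# header, and field names are placeholder assumptions, not the documented API.
import requests

API_KEY = "your-api-key"                 # placeholder credential
BASE_URL = "https://api.example.com/v2"  # placeholder base URL

response = requests.post(
    f"{BASE_URL}/videos",
    headers={"x-api-key": API_KEY},
    json={
        "replica_id": "r-1234",          # the user's trained digital twin
        "script": "Hi! My digital twin generated this video from a script.",
    },
    timeout=30,
)
response.raise_for_status()
print(response.json())                   # e.g. an id and status for the queued render
```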

Create hyper-realistic replicas with minimal footage

A user submits 2 minutes of training footage and consents to the creation of their digital replica.
The model analyzes the user's speech and vocal inflections and creates a voice and video model.
Once training is complete, the user receives their digital replica and can start generating videos.
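
A minimal sketch of that three-step flow against an HTTP API might look like the following; the endpoints, request fields, and polling behavior are illustrative assumptions, not documented API behavior.

```python
# Illustrative sketch of the three-step replica flow above. The endpoints,
# request fields, and response shapes are assumptions for illustration only.
import time
import requests

API_KEY = "your-api-key"                 # placeholder credential
BASE_URL = "https://api.example.com/v2"  # placeholder base URL
HEADERS = {"x-api-key": API_KEY}

# 1. Submit ~2 minutes of training footage, with the user's consent on record.
replica = requests.post(
    f"{BASE_URL}/replicas",
    headers=HEADERS,
    json={
        "train_video_url": "https://example.com/training-clip.mp4",
        "consent_confirmed": True,
    },
    timeout=30,
).json()

# 2. Poll while the voice and video model trains on the user's speech and appearance.
status = {"status": "training"}
for _ in range(120):                     # give up after ~2 hours
    status = requests.get(
        f"{BASE_URL}/replicas/{replica['replica_id']}", headers=HEADERS, timeout=30
    ).json()
    if status["status"] in ("ready", "error"):
        break
    time.sleep(60)

# 3. Once the replica is ready, the user can start generating videos,
#    e.g. with a call like the script-to-video sketch shown earlier.
print(replica.get("replica_id"), status["status"])
```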

Access a world-class machine learning team

Our engineering teams have been hand-selected from top universities and leading companies like Amazon, Descript, Google, and Apple.

Research initiatives

The team is at the forefront of AI video research and ships model updates every two weeks, informed by the latest findings and customer needs.

What digital twin experience will you build?