ByteDance's OmniHuman-1 shows just how realistic AI-generated deepfakes are getting
ByteDance has demoed a model that its researchers say creates realistic full-body deepfakes from a single image.
- ByteDance has demoed an AI model that generates lifelike deepfake videos from a single image.
- ByteDance released test videos, including deepfakes of TED Talks and a talking Albert Einstein.
- Tech firms, including Google and Meta, are working on tools to better detect deepfakes.
Researchers at ByteDance, TikTok's parent company, have showcased an AI model that generates full-body deepfake videos from just one image — and the results are scarily impressive.
Unlike some deepfake models that can animate only faces or upper bodies, ByteDance's OmniHuman-1 generates realistic full-body animations that sync gestures and facial expressions with speech or music, the company said.
ByteDance published several dozen test videos, including AI-generated TED Talks and a talking Albert Einstein, to its OmniHuman-lab project page.
The model supports different body proportions and aspect ratios, making the output look more natural, ByteDance researchers said in a paper published Monday that has since caught the attention of the AI community.
"The realism of deepfakes just reached a whole new level with Bytedance's release of OmniHuman-1," said Matt Groh, an assistant professor who specializes in computational social science, in an X post on Tuesday.
OmniHuman-1 is the latest AI model from a Chinese tech company to grab the attention of researchers following the release of DeepSeek's market-shaking R1 model last month.
Venky Balasubramanian, the founder and CEO of tech company Plivo, said in a Tuesday X post, "Another week another Chinese AI model. OmniHuman-1 by Bytedance can create highly realistic human videos using only a single image and an audio track."
ByteDance said its new model, trained on roughly 19,000 hours of human motion data, can create video clips of any length within memory limits and adapt to different input signals.
OmniHuman-1 outperforms previous animation tools in realism and accuracy benchmarks, the researchers said.
Deepfake detection
Deepfakes have become harder to detect as the technology becomes more sophisticated. Google, Meta, and OpenAI have introduced AI watermarking tools, such as SynthID and Meta's Video Seal, to flag synthetic content.
While these tools offer some safeguards, they are still playing catch-up with the misuse of deepfake technology.
AI-generated videos and voice clones are fueling harassment, fraud, and cyberattacks, with criminals using cloned voices to scam victims. US regulators have issued alerts, while legislation like the TAKE IT DOWN Act aims to tackle deepfake porn.
The World Economic Forum said last month that deepfakes are exposing security flaws and are expected to "create a misinformation and disinformation apocalypse."