Generative video continues to be one of the fastest-moving areas in AI, and at CES this year, Kling AI made a strong case that it’s no longer just an experimental playground. The company revealed major user growth milestones while demonstrating new model capabilities that point toward a more complete, end-to-end AI video production workflow.

One of Kling AI’s biggest recent attention-grabbers has been its motion control feature, powered by its latest Video 2.6 model. The feature recently went viral on social media: it lets users generate short AI videos by combining a single photo with a reference motion clip (such as a dance or a facial expression) in about a minute. The results have been widely shared online, with trends like “dancing puppies” helping push the app to the number-one download spot in multiple markets.

According to a J.P. Morgan report, this surge in popularity helped Kling AI become the most downloaded app in four countries, including South Korea and Turkey, while also ranking in the top ten across ten additional markets. The same motion control feature was demoed live at Kling AI’s CES booth, where attendees could see how quickly static images could be transformed into expressive, animated clips.

A Unified Approach to AI Video Creation

Beyond viral features, Kling AI used CES to highlight its longer-term vision for AI video creation. The company showcased its O1 model, which it describes as the industry’s first unified multimodal video model that combines generation, editing, and understanding into a single system.

One of the most notable capabilities of the O1 model is prompt-based post-production editing. Instead of relying on traditional editing software, users can simply type instructions such as “remove bystanders,” “change daytime to dusk,” or “swap the main character’s outfit.” The model understands the visual context of the video and applies these changes directly, covering everything from background elements and lighting to weather, clothing, and time of day.


Another key highlight is multi-subject consistency, a long-standing challenge in AI video generation. The O1 model is designed to maintain consistent characters, props, and environments across scenes, even as the camera moves or multiple subjects interact. Kling AI positions this capability as closer to how a human director would oversee continuity throughout a shoot.

The company also demonstrated its Video 2.6 model’s native audio-visual generation, which supports dialogue, ambient sound, and sound effects generated alongside video. This approach aims to reduce the need for separate tools, offering creators a more streamlined, end-to-end workflow.

Rapid Growth in Users and Enterprise Adoption

Alongside its technical demos, Kling AI revealed updated platform metrics that underline how quickly it has scaled. By the end of 2025, the platform had enabled the creation of more than 600 million videos and grown to over 60 million global users. More than 30,000 enterprises and developers are now integrating Kling AI’s APIs into their own products and workflows. Topping this off was US$20 million in revenue in December 2025, giving the company an annualized run rate of more than US$240 million.

That marks a significant jump from figures shared just months earlier. In July, the company reported 45 million users and 20,000 enterprise customers. Today, its enterprise user base spans industries such as advertising, animation, gaming, and film production, with notable clients including Higgsfield, ComfyUI, Fal.ai, and Freepik.

AI Film Production on Display

Kling AI also leaned into creative storytelling at CES, bringing its AI-powered short film “A Very AI Yule Log” to attendees. Directed by Jason Zada, the project was created in collaboration with Secret Level and reimagines the classic holiday fireplace video using generative AI.

Rather than a static loop, the film features more than 600 AI-generated scenes, each around ten seconds long, adding up to nearly two hours of continuous content. Surreal elements appear and disappear around the fireplace, all generated between fixed start and end frames.


Speaking during a Kling AI panel titled “How GenAI Is Transforming the Creative Industry,” Zada described how quickly the project came together. He noted that a similar idea attempted a year earlier had felt premature, but advances in Kling AI’s models made it possible to produce almost two hours of original visuals and AI-generated music in under two weeks.

From Viral Clips to Production Infrastructure

Taken together, Kling AI’s CES showing highlighted how generative video is moving beyond novelty. Viral social features may be driving downloads, but under the hood, the platform is evolving into a full production toolset aimed at both creators and enterprises.

As AI video models continue to mature, Kling AI’s combination of rapid user growth, unified editing and generation, and real-world creative projects suggests the technology is settling into something more permanent. At CES, it felt less like a glimpse of the future and more like the early stages of a new production standard taking shape.