Transframer AI dreams 30-second video from an image

Image: DALL-E 2 prompted by MIXED

Deepmind’s new video AI, Transframer, can handle a whole range of image and video tasks – and dream up 30-second videos from a single frame.

Generative AI systems have moved from research labs into industrial and consumer applications in recent years, a shift kicked off by OpenAI’s large language model GPT-3. Last April, the company followed with the DALL-E 2 image generation system, which indirectly spawned alternatives such as Midjourney and Stable Diffusion.

Google sister company Deepmind is now showing Transframer, an AI model that could offer a glimpse of the next generation of generative AI models.

Deepmind Transframer: A model with many tasks

Deepmind’s Transframer is a visual prediction framework that can solve eight image modeling and processing tasks at once, such as depth estimation, instance segmentation, object recognition, and video prediction.

Transframer takes a set of context images with associated annotations, such as time stamps or camera viewpoints, and uses them to answer a query for a new image.

Transframer provides a framework for multiple image tasks. | Image: Deepmind

The model processes compressed images using a U-Net whose outputs are passed to a DCTransformer decoder. Specifically, the images are compressed using the discrete cosine transform (DCT), the same transform used in JPEG compression. The DCTransformer is specialized in modeling DCT tokens.
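A minimal sketch of what DCT-based compression looks like, using the same transform JPEG builds on. The block size and quantization step below are illustrative choices, not Deepmind's values, and this is only the compression step, not Transframer's full tokenizer.

```python
# Minimal sketch of DCT block compression (the JPEG-style transform
# Transframer's input representation builds on). Block size and the
# quantization step q are illustrative assumptions.
import numpy as np
from scipy.fft import dctn, idctn


def compress_block(block: np.ndarray, q: float = 10.0) -> np.ndarray:
    """Transform an 8x8 pixel block into quantized DCT coefficients."""
    coeffs = dctn(block, norm="ortho")   # 2D discrete cosine transform
    return np.round(coeffs / q)          # coarse quantization


def decompress_block(tokens: np.ndarray, q: float = 10.0) -> np.ndarray:
    """Invert the quantization and the DCT to recover an approximate block."""
    return idctn(tokens * q, norm="ortho")


# A smooth gradient block: most of its energy sits in a few low-frequency
# coefficients, so many quantized coefficients end up as zero.
block = np.add.outer(np.arange(8.0), np.arange(8.0)) * 10.0
tokens = compress_block(block)
recon = decompress_block(tokens)
print(np.count_nonzero(tokens), float(np.max(np.abs(block - recon))))
```

The sparsity of the quantized coefficients is what makes the DCT domain attractive for a transformer decoder: smooth image regions collapse into a short sequence of significant tokens.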

Transframer generates new angles and whole videos

In addition to traditional image tasks such as depth estimation and object detection, Transframer is also capable of synthesizing new viewpoints of an object and predicting video trajectories.


In a short tweet, Deepmind shows about six 30-second videos that Transframer dreamed up from a single input image each. Despite the low resolution, the videos remain somewhat consistent over time.

Deepmind says the results show that a framework such as Transframer is suited to challenging image and video modeling tasks. According to the researchers, Transframer can also act as a multi-task model, solving image and video analysis problems that previously required specialized models.

Sources: Deepmind (project page), Arxiv (paper)

