[13:53 Fri, 17 January 2025 by Thomas Richter]
Luma AI has released a new video AI, Ray2, which is designed to generate realistic video with natural, coherent movement and to understand complex prompts. The Ray2 model is based on Luma's new multimodal architecture and was trained with ten times the computing power of its predecessor. Ray2 aims to overcome the problems of many previous video AI models: it is meant to accurately depict fast movements, render image details ultra-realistically, and produce coherent causal sequences of events. This should significantly reduce the amount of unusable "waste" footage in video generation, making the process more productive.

The new Ray2 model is now available in Luma's Dream Machine via text-to-video generation, initially only for paid subscriptions, not for the free version. Further features such as image-to-video, video-to-video, and prompt-based video editing will follow soon. Currently, the 5-second clips (720p, i.e. 1280 x 720, at 24 frames per second) cannot be extended. Ray2 will soon also be available via the Luma API, allowing it to be integrated into other tools.

The numerous demo clips actually look very good: movement sequences that are usually quite error-prone for video AIs, such as ballet or fencing, as well as fast, complex motifs like a galloping herd of horses, show no visible problems. The same goes for classic AI demo motifs like a miniature cat, polar bears with sunglasses, meat being cut, crocheting hands, a dive through an underwater city, flowing honey, a jeep ride, or a flight through an art gallery. Almost more importantly (since official demos are, of course, always cherry-picked), user-generated clips confirm the good impression: the (human) movements look natural.
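For readers who want to experiment once Ray2 reaches the API, the following is a minimal sketch of what a text-to-video call could look like. It is modeled on Luma's existing Dream Machine REST API; the endpoint path, the payload fields, and the model identifier "ray-2" are assumptions, since Ray2 API access had not yet shipped at the time of writing.

```python
# Hypothetical sketch: submit a Ray2 text-to-video job and poll for the result.
# Endpoint, payload fields, and the "ray-2" model id are assumptions based on
# Luma's existing Dream Machine API documentation.
import os
import time

import requests

API_URL = "https://api.lumalabs.ai/dream-machine/v1/generations"  # assumed endpoint
HEADERS = {
    "Authorization": f"Bearer {os.environ['LUMAAI_API_KEY']}",
    "Content-Type": "application/json",
}


def generate_clip(prompt: str) -> str:
    """Submit a text-to-video generation and poll until the clip is ready."""
    resp = requests.post(
        API_URL,
        headers=HEADERS,
        json={"prompt": prompt, "model": "ray-2"},  # "ray-2" is an assumed model id
        timeout=30,
    )
    resp.raise_for_status()
    generation_id = resp.json()["id"]

    # Poll the job until it completes; Ray2 clips are currently 5 s at 720p/24 fps.
    while True:
        status = requests.get(
            f"{API_URL}/{generation_id}", headers=HEADERS, timeout=30
        ).json()
        if status["state"] == "completed":
            return status["assets"]["video"]  # URL of the rendered MP4
        if status["state"] == "failed":
            raise RuntimeError(status.get("failure_reason", "generation failed"))
        time.sleep(5)


if __name__ == "__main__":
    print(generate_clip("A herd of horses galloping across a misty plain at dawn"))
```

The polling loop reflects the asynchronous pattern Luma uses for Dream Machine generations; actual field names and states may differ once the Ray2 API is published.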
More info at lumalabs.ai