[14:52 Fri, 15 March 2024 by Rudi Schmidts]
In the demo shown, you simply upload a clip in the browser and select the desired target languages from a table. After a short calculation in the Adobe Cloud, the "translated" clips are ready for download. Other AI researchers have demonstrated similar functionality in various papers in the past, and Adobe has not yet implemented "Dubbing and Lip Sync" in any available beta version of its Creative Cloud applications. However, Adobe explicitly emphasizes that it wants to introduce this function "in line with Adobe's responsible AI approach". That the function will become an official feature, on the other hand, seems to be a foregone conclusion, as further details on "Dubbing and Lip Sync" are to follow soon.

As with other research projects in this area, it remains to be seen what users can do if the results do not match the desired outcome. Currently, these models still work on a "take it or leave it" basis: if a translation misses the mark in terms of content, it cannot be corrected manually afterwards. However, we could well imagine that Adobe, with its experience in video transcription, will come up with a solution for precisely this problem area. Perhaps we will even see a first implementation at NAB in a month's time.