Recently, videos have been posted on the Internet (twitter.com/commonstyle/status/1682874851823058946) that were created with a presumably new function in the AI video generator Runway Gen2. In June, the second generation of the model became publicly available; an Image-2-Video function was announced, but as far as we know it creates moving variations of the shown motif. It seems that the community has now discovered a new way to turn single images into clips.
The special thing about these new animations is that they seem to be generated without any prompt at all. You simply input a single image, for example one from an AI image generator like Midjourney or Stable Diffusion, and Runway Gen2 independently recognizes which parts of the image could plausibly move. This requires some object recognition and classification, such as Runway "knowing" that people turn their heads or move their arms, that hair blows in the wind, and the like. The camera view moves as well.
In this way, individual images can be animated into short clips, each a maximum of 4 seconds long (in the basic account). This works with drawings as well as with photo-realistic shots, though by no means always equally well. The more complex the image composition, the stranger the result; human motion sequences are not (yet) convincing. A zombie movie would probably be the easiest thing to generate this way.
Theoretically, the 4-second limit can be circumvented by re-entering the last frame of an animated sequence as the start frame for the next animation, but the content will probably remain stuck in a loop. Without a prompt, you have no control over the animation. But like any new gimmick, this one can certainly be used creatively in some way.
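For those who want to try that chaining trick, the only scripted step is pulling the last frame out of a downloaded clip so it can be re-uploaded as the next start image. The sketch below does this with OpenCV; the file names are placeholders, and the upload itself still happens manually in the Runway web interface.

```python
# Minimal sketch: extract the final frame of a downloaded Gen2 clip so it can be
# re-used as the start image of the next animation. File names are assumptions.
import cv2


def extract_last_frame(video_path: str, image_path: str) -> None:
    cap = cv2.VideoCapture(video_path)
    last_frame = None
    while True:
        ok, frame = cap.read()
        if not ok:            # no more frames to decode
            break
        last_frame = frame    # keep the most recently decoded frame
    cap.release()
    if last_frame is None:
        raise ValueError(f"No frames could be read from {video_path}")
    cv2.imwrite(image_path, last_frame)


# Hypothetical file names for illustration only.
extract_last_frame("gen2_clip.mp4", "next_start_frame.png")
```

Reading the clip frame by frame is deliberate: relying on the reported frame count to seek directly to the end can fail with some video containers, while decoding to the last readable frame always works for clips this short.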
The following video shows how easy it is to create such animations.