We all know developers want to watch the world burn! Mwahahahaha!

So, last Christmas, I was trying to edit an image to show my kid that Santa had visited us during the night and left him some presents. It took some time, but I managed it through Qwen Chat. Then I took it one step further and created a video from the image, also in Qwen Chat. It was pretty realistic: if my wife struggled to find flaws (and she does quality checks for a photography studio), my kid wouldn't spot anything either!
My kid was very excited to see the video! Then, a couple of days ago, I started asking myself:
Dude, you have an RTX 4070 with 12 GB of VRAM, why not try to do something like that locally? And I did.
So I found InvokeAI, an open-source project that was apparently acquired by Adobe a few months back. The installation is pretty much straightforward, and the interface is simple.
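For the curious, a minimal sketch of what that installation looks like, assuming you go the Python-package route rather than the official launcher (check the InvokeAI docs for your GPU and Python version; the port below is the default one, yours may differ):

```shell
# A fresh virtual environment keeps InvokeAI isolated from other Python tools
python -m venv invokeai-env
source invokeai-env/bin/activate

# Install the InvokeAI package from PyPI
pip install invokeai

# Start the local web UI, then open http://localhost:9090 in your browser
invokeai-web
```

From there, the starter models can be downloaded from inside the web UI itself.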
I’m not going to go through an elaborate guide for this, I’m bored at this point (might do that in the future), but the image above was created using the Z-Image Turbo Q8 starter model, which in turn uses Qwen behind the scenes.
Since Easter is right around the corner, here are some bunnies hopping around:

See you next time!