When Veils Part 3m 37s
I refined some glitches that annoyed me the first time around and changed my titling to a new concept: a QR code that provides static information for the credits. I like the collaborative feel of pairing my last name with "AI." The idea sprang up one day while I was looking at a law firm sign.
Film festivals submitted to date:
Rome AI Festival 2026 Rome, Italy
Berlin AI Festival 2026 Berlin, Germany
An expressionistic journey that began with a paint texture on acetate. By allowing the generative AI model to interpret the image, a story emerged; with my direction, it took shape as a coherent and expressive narrative.
All video and sound were created with generative AI models.
Video generated by:
Midjourney
Kling
PixVerse
Most of this video was created with Midjourney, letting it choose actions at random while guiding it with some detailed prompting. Its expand feature helped with the transitions from the initial textures to the human scenes. Kling was used to stitch some components together using two keyframes, and PixVerse was used for lip sync.
Music generated by AIMusic
The music was generated and then heavily reworked using the service's insert and extend features, which let me place my lyrics at positions that supported the narrative.
Sound effects and voices by ElevenLabs.
I used the generative sound effects feature to create Foley sounds and voices for my characters. Working from the random mouth movements in my video, I "lip read" a suitable line to fit the narrative.
My goal is to approach filmmaking as a solitary studio practice. By leveraging generative AI to bridge the gap between independent creation and blockbuster production value, I maintain unrestricted creative control while ensuring commercial viability. Ultimately, I view this not just as tool use, but as a collaboration of intelligences.