📽️ Learn how to master AI-generated video production with Leonardo.Ai.
This guide covers everything from creating Motion clips to advanced strategies for high-impact videos.
Read the full guide here 👉 https://bit.ly/3xW6xG1
Luma AI just released their first video model DREAM MACHINE, so I decided to take it for a spin and try to push its limits, by reimagining and bringing to life "Meet Your Maker", a teaser trailer for one of my latest stories and world-building experiments, heavily influenced by 80s and 90s Japanese anime and sci-fi. I am completely blown away by the results.
TOOLS USED
Images: Midjourney 6.0 + Magnific AI + DaVinci Resolve
Video: Luma's Dream Machine
Editing: CapCut
Sound Design: Adobe Audition + ElevenLabs + CapCut
#Luma #ai #capcut #magnific #adobe #sora #dreammachine #generative #filmmaking #storytelling
Creative Director - AI Consultant and Educator - Technical Artist
This is a Runway Gen 3 test + a test on my script skills on the go.
The rules:
- 3 generations per prompt, NO REROLL
- Whatever I get, I'll work with, no editing.
- Don't overthink it. The first idea goes out.
The process:
- It's all text to video, hence the lack of consistency with the character.
- The prompt makes the style easily controllable.
- I made the story on the go. I saw one astronaut video and decided that would be my theme. That was the initial idea in my head, and I started writing prompts and the story as I created it simultaneously.
- Generated about 150 shots, picking the best of 3 options. It doesn't matter if it was not perfect.
- Same approach for audio VO narration. I wrote the script for it in 1 take, NO EDITING, and the first result that came out of Runway audio is what I used.
- NO color treatment.
- Simple upscale and interpolation on Topaz.
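The best-of-three rule above can be sketched as a short loop. Note that `generate` is a hypothetical stand-in for whatever text-to-video call is used (not a real Runway API), and the pick is randomized here since the real selection was done by eye:

```python
import random

def generate(prompt: str, seed: int) -> str:
    # Hypothetical stand-in for a text-to-video generation call;
    # returns an identifier for the generated clip.
    return f"{prompt}-clip-{seed}"

def best_of_three(prompt: str) -> str:
    # Three generations per prompt, no rerolls: generate all three,
    # then keep exactly one (a human would make this pick).
    candidates = [generate(prompt, seed) for seed in range(3)]
    return random.choice(candidates)

# Illustrative prompts, not the ones used in the actual piece.
prompts = [
    "astronaut drifting past a derelict station",
    "astronaut boots on red dust, low sun",
]
selected = [best_of_three(p) for p in prompts]
print(len(selected))  # one kept clip per prompt
```

At ~150 kept shots, this loop implies roughly 450 generations total, which is why the "don't overthink it" rule matters for keeping the whole thing under 4 hours.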
Less than 4 hours were spent on the whole process.
Hope you enjoy it.
PS: Yes, there are a THOUSAND things that could be improved and done better here; that's not the point.
#ai #aivideo #aistorytelling #runway #gen3
AI Video Creation Workflow: Full Control at Your Fingertips
Replikant to Runway
Want to have full control over your AI-generated videos? Here's a workflow I've been experimenting with:
- Make your scenes in Replikant
- Load the output into Runway Gen 3 (video-to-video)
- Apply realistic style transfer
Key benefits:
- Maintain structural integrity of scenes
- Full control over actors, environment, lighting, script, etc.
- Easy scene creation and control
- Avoid copyright infringement on source video
Note: The Gen 3 output can still be hit or miss, and finding the right style is crucial. But the foundation for creative control is there!
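For anyone who wants to script this hand-off rather than use the web UI, the shape of the request can be sketched as below. To be clear, the field names and defaults are illustrative assumptions, not Runway's documented API schema:

```python
import json

def build_v2v_request(source_video: str, style_prompt: str,
                      structure_strength: float = 0.8) -> dict:
    # Hypothetical video-to-video payload: the Replikant render acts
    # as the structural source, and the prompt drives style transfer.
    # Field names are illustrative, not a documented schema.
    return {
        "model": "gen3",
        "input_video": source_video,
        "prompt": style_prompt,
        # Higher values preserve more of the source scene's structure
        # (actors, blocking, camera moves from the Replikant output).
        "structure_strength": structure_strength,
    }

payload = build_v2v_request(
    "replikant_scene_04.mp4",
    "gritty photoreal sci-fi, anamorphic lens",
)
print(json.dumps(payload, indent=2))
```

The key design point is the split of responsibilities: structure comes from the 3D scene you fully control, style comes from the prompt, so a style miss only costs you a re-render of the transfer step, not the scene.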
Have you tried this workflow? What's your experience with AI video generation? Let's discuss in the comments!
#Replikant #3DAnimation #Runway #Gen3 #AIVideo #ContentCreation #TechInnovation #DigitalCreativity
Check out Roop – a game-changer in video editing! 🛠️ One-click face swaps using just a single image. Made a short tutorial to show how easy it is. Dive into the future of deepfakes. #AI #Deepfake #TechInnovation #Roop #Disruption #Gigabai
AI-generated videos with #Google’s Veo🤖🎬 Here are the top 3 highlights:
🌟 Veo: Google’s most capable video generation model to date! It creates high-quality, 1080p videos that can go beyond a minute, capturing the nuance and tone of prompts with unprecedented creative control. 🎨
🎥 Veo understands complex prompts and combines them with relevant visual references to generate coherent scenes. It accurately interprets natural language and visual semantics, rendering intricate details within complex scenes. 🌠
🎬 Veo offers advanced controls for filmmaking, such as masked editing, image input conditioning, and the ability to extend video clips to 60 seconds and beyond. It maintains visual consistency across frames, keeping characters, objects, and styles in place. 🎥✨
Stay tuned for more updates as Veo's capabilities become available through VideoFX and other products! 📣 #AIVideoGeneration #Veo https://lnkd.in/ezDuMsHs
🚀🎥 Ready to revolutionize your video post-production? 🌟✨ Check out the top AI features that are changing the game—automated editing, smart color correction, and more! Don’t miss this!
#VideoPostProduction #AIInEditing #EditingTips
This announcement by Lightricks went a bit under the radar (especially compared with OpenAI Sora's announcement last week).
At the end of the day, you need a proper studio to handle your video creation, from storyboarding to clip editing.
Check out the cool video of LTX Studio in the comments.
It’s time to reimagine the way we tell stories.
Lightricks is proud to present LTX Studio, our first all-in-one AI-video storytelling platform. With LTX Studio, we're transforming the entire video production process into a seamless, intuitive experience.
LTX Studio combines existing models with our own proprietary models to create a comprehensive tool that elevates AI storytelling capabilities, spanning script writing, camera control, character consistency, and editing, giving you control over every aspect of your story.
Read the full article on TechCrunch (link in the first comment).
Did you know that video editors and producers can spend up to 40% of their time organizing footage and transcripts? With the rise of AI tools in the industry, tasks like these are being automated, helping editors and producers focus on creative decisions and cutting down production time. Efficiency is becoming a key driver in modern post-production workflows. #AIFuture #VideoProduction #PostProduction
Space Cat looking cute and cool!