Gen-3 Alpha can simulate liquids such as water, paint, oil, honey, and molten glass, all with realistic viscosity, physics-based interactivity, and caustics.
Can it interact with filmed elements? If not, it's just a screensaver. This isn't a sim; it's a recreation of what it was trained on. Let me see an actor get slime poured over his head, then change the viscosity and art-direct one of the splashes, and I'll be impressed.
Kinda funny reading all the salty comments in here about how unrealistic this is. Considering that just a few years ago we were watching a disturbing video of Will Smith eating spaghetti, and now we are watching stuff like this mimic real-life physics, this is definitely a massive leap for AI content generation. For the people saying this looks unrealistic, I'd be curious to see whether you can recreate something close to it, and then tell me how long it took you. The point is that these images were generated in a matter of minutes, if not seconds, and any one of these scenes would take a skilled VFX artist hours, if not weeks, to recreate in detail.
I see quite a lot of hate, disrespect, and blaming in this thread. To my mind, accusations of copyright infringement are completely off-topic here: we are dealing with a disruptive technology that may well be capable of replacing, or at the very least massively altering, any job role we thought we had finally secured for ourselves (marketing peeps, clap your hands in mourning). Love it or loathe it, AI is here to stay, and you had better get your act together and freaking cope with it.
OK, looks like liquid is done and dusted too. Now let's move on to big destruction and that should be it. Bye bye, VFX artists too 🫶
Lovely demo, and it'll be a wonderful tool in the hands of small studios and single-author projects (especially students and kids). However, like so many lovely demos, this won't scale to the consistency and level of control that today's directors demand. That said, I welcome speed-ups in simulation output for every natural effect. Bring it: the faster the sim, the faster the iteration, and the faster the client can get exactly what they want.
Thanks, I hate it. None of this is possible without the work of real artists, from whom you shamelessly stole.
Sometimes people with money and power seem to think they can get away with blatant theft forever.
Simulate or emulate? The difference is pretty important in a lot of circumstances. Impressive either way.
Now this is what I wanted to see: AI mimicking the laws of fluid dynamics <3
Visual Effects and Motion Graphics Artist
Despite the ethical concerns around credit and the work used to feed generative AI, I'm uncertain whether these simulations are genuinely physics-based, which might explain their lack of control. Could you elaborate on that? I've experimented with various prompts, including generating them from a still frame and then feeding them into Runway, and the results are highly unpredictable and inconsistent. This raises the question: is the platform designed for users to subscribe and pay for numerous iterations in the hope of achieving satisfactory results? While it's enjoyable to experiment with, it fails to produce reliable simulations. Furthermore, would running 100 or 1,000 iterations yield good, usable outputs? After testing other apps and platforms, Runway appears to have a logical structure with significant potential, but its current price tag is quite high for what it offers. Why not develop tools that genuinely leverage the expertise of technical directors, visual effects artists, and compositors? I'm sharing realistic expectations and questions rather than simply ranting.