Scenario's Post

Bring your 3D projects to life in any style imaginable with the Scenario.com #Texture Generator, currently in beta! 🔥

Texture Creation Pipeline ⬇

Albedo Generation:
- Generate albedo textures from text prompts using Scenario's built-in models or your custom-trained AI models.
- Supports all styles, from photorealistic to stylized, abstract to fantastical.
- Incorporate reference images to guide the structure and composition.

Dynamic PBR Viewer:
- Visualize and adjust material properties in real time with an intuitive interface.

Comprehensive PBR Map Export:
- One-click generation of complete texture sets (including height, normal, smoothness, metalness, edge, and ambient occlusion maps).
- Seamlessly integrate your output with all major 3D software.

We're continuously refining our models, parameters, and UI at app.scenario.com! Read more about it: https://lnkd.in/ercw8rFX

Scenario #3DArt #GameDevelopment #AI #GenAI #GameAI #GameAsset
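A taste of what a PBR map export computes under the hood: deriving a tangent-space normal map from a height map is standard image math. Here is a minimal NumPy sketch of the general technique (an illustration, not Scenario's actual implementation; the function name and strength parameter are ours):

```python
import numpy as np

def height_to_normal_map(height: np.ndarray, strength: float = 1.0) -> np.ndarray:
    """Derive a tangent-space normal map from a height map (H x W, values in [0, 1])."""
    # Finite-difference gradients of the height field.
    dz_dx = np.gradient(height, axis=1) * strength
    dz_dy = np.gradient(height, axis=0) * strength

    # The surface normal is perpendicular to both tangents: (-dh/dx, -dh/dy, 1).
    normals = np.dstack((-dz_dx, -dz_dy, np.ones_like(height)))
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)

    # Remap from [-1, 1] to [0, 255] for the usual RGB normal-map encoding.
    return ((normals * 0.5 + 0.5) * 255).astype(np.uint8)
```

Run over an exported height map, this produces the familiar purple-blue normal map; `strength` plays the same role as a bump-intensity slider.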
More Relevant Posts
-
🛜 Art+Film Director for Adv, Video Games & CGI — AI/ML R&D @SF__UR | Virtual Persona Agency @heyitsnaeomi
1 REEL TO RULE THEM ALL! I've combined 3 AI-powered workflows here, but what exactly is happening? https://lnkd.in/gdqEmjrD Read! 👇🏽

Creating a consistent character was just the beginning. Let's see how far we can push the boundaries using the different AI-powered workflows I've been using lately!

1️⃣ + 2️⃣ #AIAnimation is changing forever. With just 2-3 keyframes, this workflow generates the interval frames for an animation.

3️⃣ #2Dto3DMesh took the same character and used a different workflow to create a 3D mesh from a 2D image. It's not production-ready yet, but it's promising. My next step is to generate and animate the entire body. Imagine having a production-ready 3D mesh starting from just a 2D image! As a 3D designer, this opens up endless possibilities.

4️⃣ + 5️⃣ #AIStyleTransfer: I wanted to push it further, so I transformed my 2D character into a felted doll-like version using various AI tools, keeping the character's features but changing the style completely. Both versions tell different stories, and I like them both. The goal is to see how transformative and smooth the process can be, to offer various choices and art direction.

This playground is incredibly exciting! Everything powered by AI. 👩🏽💻 Stay tuned @yank_hee
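For context on what AI in-betweening improves on: a classical baseline estimates dense optical flow between two keyframes and warps along it to approximate a midpoint frame. The sketch below does exactly that with OpenCV (the workflow in the reel uses a learned model, which is not shown here; parameters are illustrative):

```python
import cv2
import numpy as np

def midpoint_frame(frame_a: np.ndarray, frame_b: np.ndarray) -> np.ndarray:
    """Synthesize a rough in-between frame by warping frame_a halfway toward frame_b."""
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)

    # Dense optical flow from A to B (Farneback's algorithm).
    flow = cv2.calcOpticalFlowFarneback(gray_a, gray_b, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

    # Sampling grid shifted by half the flow: a rough midpoint approximation.
    h, w = gray_a.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + 0.5 * flow[..., 0]).astype(np.float32)
    map_y = (grid_y + 0.5 * flow[..., 1]).astype(np.float32)

    return cv2.remap(frame_a, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```

Learned interpolators handle occlusions and large motion far better than this warp, which is exactly why the results in the reel look so much cleaner.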
-
Have you ever stumbled upon something so exciting you just had to share it, or wondered if it was better kept as your secret? Recently, we faced a challenge that pushed us out of our comfort zone. Determined to make the task enjoyable, we dove into AI tools.

In the world of Archviz, AI turned what could have been a mundane process into an engaging journey. We built a unique workflow with ComfyUI, integrated Stability AI models, and added the final touches with Magnific AI. The result? A simple SketchUp 3D model screenshot transformed into a stunning, detailed rendering in just a few hours.

Inspired by our success, we're developing a tool to bring this powerful workflow to the public. Imagine generating hyper-realistic images from 3D models quickly and effortlessly, with endless visual possibilities.
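The studio's exact ComfyUI graph isn't shared, but the img2img core such a workflow builds on can be sketched in a few lines with Hugging Face diffusers. The checkpoint, prompt, file name, and strength below are placeholder assumptions; a production setup would add ControlNet conditioning and an upscale pass like Magnific AI on top:

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

# Illustrative model choice; the post doesn't name the exact Stability AI checkpoint.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# A flat SketchUp viewport screenshot serves as the structural guide.
screenshot = Image.open("sketchup_screenshot.png").convert("RGB").resize((768, 512))

result = pipe(
    prompt="photorealistic architectural exterior, golden hour, detailed materials",
    image=screenshot,
    strength=0.55,        # lower = stay closer to the input geometry
    guidance_scale=7.5,
).images[0]
result.save("render_draft.png")
```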
-
Join us for our upcoming workshop, "PixelSpace: Intro to 3D Generative AI", which will focus on automating 3D scene generation from text and images. The workshop will be led by creative designer and 3D generative AI researcher Daniel Escobar (@daniel.esco1), co-founder of @diffusion_architecture, on August 31 – September 1, 2024.

Register Now: Tap the 🔗 link: https://lnkd.in/eVaPnWYZ

This workshop will explore cutting-edge techniques for 3D scene generation with multiview diffusion models conditioned on camera paths. We will dive into the latest methods for representing 3D scenes and geometry, and investigate how current research in 3D generative AI leverages pre-trained image and video models for 3D generation. Participants will learn how to use text and image inputs to generate scenes, which will then be imported into Unreal Engine for post-production and a concept reel.

📑 Topic: PixelSpace: Intro to 3D Generative AI
📅 Date: August 31 – September 1, 2024
🕕 Time: 15:00 - 19:00 GMT
⚒️ Software: Blender, Unreal Engine, Luma
🧑🏼🎓 Total Seats: 50
🛒 15% discount for Digital Members.
🏷 Three workshops are offered in Artificial Intelligence Bundle 4.0 with a 25% discount, plus an additional 15% discount for digital members: https://lnkd.in/e_RframT

#artificialintelligence #midjourney #parametricdesign #computationaldesign #architecturestudents #formgeneration #unrealengine #blender
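To make "conditioned on camera paths" concrete: multiview diffusion models typically take a sequence of camera poses as conditioning. A minimal sketch of an orbital path, as camera-to-world poses looking at the scene center, might look like this (illustrative only, not workshop material):

```python
import numpy as np

def orbit_camera_path(n_views: int = 12, radius: float = 3.0, height: float = 1.0):
    """Generate camera poses orbiting the origin, each looking at the center."""
    poses = []
    for theta in np.linspace(0.0, 2.0 * np.pi, n_views, endpoint=False):
        position = np.array([radius * np.cos(theta), height, radius * np.sin(theta)])

        # Build an orthonormal look-at basis (right, up, forward).
        forward = -position / np.linalg.norm(position)
        right = np.cross(np.array([0.0, 1.0, 0.0]), forward)
        right /= np.linalg.norm(right)
        up = np.cross(forward, right)

        # 3x4 camera-to-world matrix: rotation columns plus translation.
        poses.append(np.column_stack((right, up, forward, position)))
    return poses
```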
-
We usually create animations, storytelling, and cool stuff, and this is one of the cooler ones. We started our journey as an #archviz studio, so this hits hard. In a matter of a couple of hours, we managed to create something that would usually take days. #ohman #doom #thisisawesomeandthefutureofstuff
-
Last night I had a dream that I created a "3D Search Engine Scanner": a machine that hooks up to a monitor and allows you to scan an object and visually map out the location of various components or shapes within it. In my dream, my proof of concept was to scan a can of alphabet soup and show a 3D mapping of the location of every 'P' in the can. I know, lol... very strange.

I woke up and thought it would be fun to experiment with krea.ai and Vizcom to bring my nonsense device to life! This was a 30-ish minute exercise. I started with quick sketching to see if I might come up with a more interesting form than what I saw in my dream, but ultimately decided to stick with the original. Then I just moved back and forth between Krea and Vizcom, trying different methods to see which program better achieved my desired outcome.

In terms of AI, I am good with general tasks, but now I want to learn how to use these tools in super specific ways within the context of product design. So pardon my lack of sophistication as I stumble through this learning process 😊 If you are in product design and using any of these programs, even just occasionally, I'd love to connect and hear about your experience so far!

#productdesign #aidesign #krea #vizcom #ai #aicccreators #industrialdesign #letsconnect
-
👨🎨 From a Drawn Sketch and a Prompt to a Fully Rendered, Playable 3D Game 🎮
Generative models will revolutionize the video game industry.

The About
► A new deep-learning-based approach for automatically generating interactive, playable 3D game scenes from a user's prompt, such as a hand-drawn sketch.

The Problem & Solution
► In generative AI, the generation of high-quality 3D scenes for open-world video games remains largely unexplored.
► To address this, we propose Sketch2Scene, a pipeline that overcomes the scarcity of 3D scene data and generates open-world outdoor scenes from a diffusion model, a hand-drawn sketch from the user, and, optionally, a text prompt that accompanies the drawing.

The Goal
Develop a pipeline (Sketch2Scene) that generates 3D game scenes by first creating a 2D image, then using that image to create a layout map, and finally using that map to create a playable 3D scene in a game engine like Unity or Blender.

The Approach
1️⃣ A user draws a simple sketch of the scene they want to create and writes a brief description of what they want to see.
2️⃣ A diffusion model uses this sketch and description to generate a 2D image of the scene, respecting the user's layout thanks to ControlNet.
3️⃣ The program then extracts the terrain (or background) of the scene from this 2D image using a fine-tuned LoRA model of SDXL-Inpaint.
4️⃣ Next, the program builds a 3D understanding of the scene, breaking it down into three main parts: the terrain, the textures and colors of the terrain, and the objects in the scene.
5️⃣ The program uses this understanding to create a 3D model of the terrain, with realistic textures and colors.
6️⃣ It then adds objects to the scene, such as buildings and trees, using a combination of pre-made models (retrieved from the Objaverse dataset) and generated ones (using generative AI models such as LRM).
7️⃣ Finally, the program puts all the pieces together to create a complete 3D scene, which can be explored and interacted with in a game engine like Unity.

The Conclusion
► Sketch2Scene is a powerful tool that can turn simple sketches and text prompts into high-quality, interactive 3D scenes.
► While it's not perfect and has some limitations, such as error accumulation, the creators are working to improve it.

---
Here's the original paper: https://lnkd.in/d_QbS8dX
I'll leave the highlighted paper down below 👇

Share your thoughts ‼️

#Sketch2Scene #ArtificialIntelligence #MachineLearning #ComputerVision #GameDevelopment
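As a reading aid, the seven steps above collapse into roughly this control flow. Every function below is a hypothetical placeholder for a stage described in the paper, not an API from a released codebase:

```python
# Hypothetical outline of the Sketch2Scene stages; none of these functions exist
# as a published API. They stand in for the components described in the paper.

def sketch2scene(sketch_image, text_prompt=""):
    # Steps 1-2: ControlNet-guided diffusion turns sketch + prompt into a 2D concept image.
    concept_2d = generate_concept_image(sketch_image, text_prompt)

    # Step 3: a fine-tuned SDXL-Inpaint LoRA isolates the terrain/background layer.
    terrain_image = extract_terrain(concept_2d)

    # Step 4: decompose the scene into terrain geometry, surface textures, and objects.
    heightfield, textures, object_boxes = understand_scene(concept_2d, terrain_image)

    # Step 5: build the textured 3D terrain.
    terrain_mesh = build_terrain(heightfield, textures)

    # Step 6: populate objects via retrieval (Objaverse) or generation (LRM-style models).
    objects = [retrieve_or_generate(box) for box in object_boxes]

    # Step 7: assemble everything into a scene a game engine such as Unity can load.
    return assemble_scene(terrain_mesh, objects)
```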
-
Inspiring concept from AI.Architecture Studio, instructor of the upcoming member-only workshop "AI-Driven Architectural Design" on September 28 – 29, 2024.

Register Now: Tap the 🔗 link: https://lnkd.in/ehUaxEEE

The workshop aims to showcase the practical application of AI tools in the architectural design process. We will start with the latest AI technologies, concepts, and applications, showcasing tools like Midjourney, Stable Diffusion, ComfyUI, Runway, and Luma AI, and demonstrating architecture-focused approaches and shortcut methods in each tool. The workshop includes techniques for effective prompting, image-to-image methods like transforming sketches and 3D models into detailed renders, and image-to-video animation.

📑 Topic: AI-Driven Architectural Design
📅 Date: September 28 – 29, 2024
🕕 Time: 14:00 - 18:00 GMT
⚒️ Software: Midjourney, Stable Diffusion, ControlNet, ComfyUI, KREA, Magnific AI, RunwayML, Luma AI
🛒 Free for Digital Members.

#artificialintelligence #midjourney #parametricdesign #computationaldesign #architecturestudents #formgeneration #stablediffusion #comfyui #lumaai #runwayml