📢 New Paper published 📢 🚀 Excited to announce the publication of our latest paper, "LiMOX - A Point Cloud Lidar Model Toolbox Based on NVIDIA OptiX Ray Tracing Engine"! 📝 We are thrilled to share that our team at Virtual Vehicle Research GmbH, in collaboration with JOANNEUM RESEARCH DIGITAL – Digital Twin Lab and Infineon Technologies, has developed LiMOX, a groundbreaking point cloud lidar model toolbox. 🔎 LiMOX harnesses ray tracing and the NVIDIA OptiX engine to generate precise point clouds according to material classes. Its reflectivity-driven range handling, based on infrared material measurements, strongly shapes the generated point clouds. The simulation can run stand-alone or in modular co-simulation applications via the Open Simulation Interface (OSI). 📚 Published with MDPI, this paper marks a significant milestone in advancing lidar simulation capabilities, paving the way for more accurate and realistic virtual environments. 👏 Huge thanks to our dedicated authors Relindis Rott, David Ritter, Oliver Nikolic, Stefan Ladstätter, and Marcus Hennecke for their invaluable contributions! ❗ Read the full paper here or use the QR code in the picture: https://lnkd.in/d4agv3aM #LiMOX #lidar #simulation #raytracing #NVIDIAOptiX #MDPI #research #automatedvehicle #AD #publication
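The idea behind reflectivity-driven range handling can be illustrated with a minimal sketch. This is not the actual LiMOX implementation; it assumes a simplified lidar range equation for an extended Lambertian target, where the modeled return power scales with material reflectivity and falls off with 1/R², and a hit is kept only if it clears a detector threshold:

```python
import numpy as np

def detectable(ranges_m, reflectivity, p_emit=1.0, threshold=1e-6):
    """Reflectivity-driven range gating: a ray hit survives only if the
    modeled return power clears the detector threshold.

    Simplified range equation for an extended Lambertian target:
    received power scales with reflectivity rho and falls with 1/R^2.
    """
    p_recv = p_emit * reflectivity / (np.pi * ranges_m ** 2)
    return p_recv >= threshold

# Two hits at 100 m: dark asphalt (rho = 0.05) vs. a bright sign (rho = 0.9).
hits = np.array([100.0, 100.0])
rho = np.array([0.05, 0.9])
mask = detectable(hits, rho, threshold=1e-5)
```

At the same range, the low-reflectivity surface drops out of the point cloud while the bright one survives, which is how material classes end up visibly shaping the simulated output.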
EUREKA Test.EPS’ Post
More Relevant Posts
Working with LiDAR companies in California with my colleague Majid Ebnali Heidari, Ph.D., it's mind-blowing to witness the pace of innovation in the ADAS industry. With so many key players, how do these organizations keep up in such a competitive landscape? How can they accelerate product development cycles without sacrificing validation? How can they get to market quickly without compromising safety standards? As we stand on the cusp of a technological revolution in supercomputing and AI, the answer is becoming much clearer: simulation. Here's how Ansys' comprehensive simulation solutions are supporting these cutting-edge developments:
📈 LiDAR Hardware Design:
- Photonic Design: Achieve superior performance with optimized photonic components.
- Optical Design Optimization & Tolerancing: Ensure accuracy and reliability in your optical systems.
- Optomechanical Packaging: Integrate optical and mechanical components seamlessly.
- Stray Light Analysis: Minimize unwanted light and improve sensor accuracy.
- Structural & Thermal Analysis: Validate and enhance the durability and performance of your designs.
⚙️ Simulated System Performance:
- Time-of-Flight & System Efficiency: Optimize LiDAR system efficiency and accuracy.
- Environment Integration & System Impact: Simulate real-world conditions to validate system robustness.
🚗 Sensor-to-Vehicle Integration:
- LiDAR-Vehicle Placement Optimization: Ensure optimal sensor placement for maximum coverage and performance.
- Advanced Scenario Validation: Test and validate your LiDAR systems in complex driving scenarios through virtual prototype driving simulation.
💡 Compliance and Beyond:
- Achieve ISO 26262 and ISO/SAE 21434 compliance effortlessly.
- Consolidate your simulation tech stack with Ansys, covering all your engineering needs under one roof.
Through simulation, LiDAR companies can reduce development time, enhance product reliability, and bring innovative solutions to market faster. With our NVIDIA partnership, Ansys simulations are only becoming faster and more powerful. If you're interested in a deeper dive, shoot me a DM and we can share how we can support your LiDAR projects. #LiDAR #Simulation #ADAS #AutonomousVehicles #EngineeringExcellence #Ansys #Optics #Photonics
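The "Time-of-Flight & System Efficiency" item above rests on a simple relation worth spelling out: the pulse travels to the target and back, so range is half the round-trip distance, and the pulse repetition rate bounds the unambiguous range. A small sketch (generic physics, not Ansys-specific code):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_range_m(round_trip_s: float) -> float:
    """Pulsed time-of-flight: the pulse travels out and back,
    so range is half the round-trip distance."""
    return C * round_trip_s / 2.0

def max_unambiguous_range_m(pulse_rate_hz: float) -> float:
    """A new pulse should not be fired before the previous echo
    returns, so the pulse repetition rate bounds the range."""
    return C / (2.0 * pulse_rate_hz)

# A 1 microsecond round trip corresponds to roughly 150 m of range,
# which is also the unambiguous range at a 1 MHz pulse rate.
r = tof_range_m(1e-6)
r_max = max_unambiguous_range_m(1e6)
```

Trade-offs like pixel rate versus maximum range fall directly out of these two lines, which is why system-level simulation of timing budgets matters so much.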
LiDAR, Medical Imaging, Visual Inspection, Deep Learning, Generative AI, and More! Product Application Engineer at The MathWorks
Thrilled to share my poster "3D Vehicle Detection using Flash Lidar Imagery" from the NVIDIA GTC conference! 🚀 The presentation was a fantastic success, sparking engaging discussions at the forefront of lidar technology and deep learning. Our poster highlights the innovative use of MATLAB and the Lidar Toolbox for designing a cutting-edge 3D vehicle detection system. From preprocessing and labeling the data with precision, through training and testing the YOLOX model for accuracy, to augmenting the model with tracking for robustness, each step was meticulously crafted. We then converted the detection and tracking results into 3D, showcasing the real potential of lidar in understanding the world around us. The final achievement? Deploying this sophisticated system on an NVIDIA Jetson Orin Nano board, demonstrating the feasibility of running advanced AI models on edge devices in real time. 🖥️ #NVIDIAGTC #Lidar #DeepLearning #MATLAB #YOLOX #NVIDIAJetson #EdgeComputing #3DVehicleDetection #LidarToolbox Link to the publication: https://lnkd.in/e-vqKtTv
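A typical first step in the preprocessing stage of a pipeline like this is voxel-grid downsampling of the raw point cloud before labeling and training. The poster's workflow uses MATLAB's Lidar Toolbox; as a language-neutral illustration of the idea, here is a minimal NumPy sketch that keeps one centroid per occupied voxel:

```python
import numpy as np

def voxel_downsample(points: np.ndarray, voxel: float) -> np.ndarray:
    """Keep one representative point (the centroid) per occupied voxel.
    points: (N, 3) array; voxel: cube edge length in the same units."""
    keys = np.floor(points / voxel).astype(np.int64)
    # Group points by voxel key, then average each group.
    _, inverse, counts = np.unique(
        keys, axis=0, return_inverse=True, return_counts=True)
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inverse, points)
    return sums / counts[:, None]

cloud = np.array([[0.1, 0.1, 0.1],
                  [0.2, 0.2, 0.2],   # same 0.5 m voxel as the first point
                  [1.1, 0.0, 0.0]])  # a different voxel
down = voxel_downsample(cloud, voxel=0.5)
```

Downsampling like this trims the point count before training and inference, which is part of what makes deployment on an edge board such as the Jetson Orin Nano feasible.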
Tim and Brent from #nvidia's Public Sector team have done some of the most amazing work I've seen in #geospatial. The technique below lets you scan your environment using live drone footage to create a live digital twin. We have used this to identify objects of interest like wildfires. Because the environment is scanned into #omniverse, a shared metaverse, multiple people and agents can work with the live model to respond. #gtc24 | #nvidia | #geoint | #mapping | #dod | #dronelife | #commandandcontrol | #jadc2
Want to go from drone imagery to a geospatial 3D model but it's taking days? Ain't nobody got time for that! Very excited to be presenting with Brent Bartlett at NVIDIA's #gtc24 on Transforming 2D Imagery into 3D Geospatial Tiles With Neural Radiance Fields (https://lnkd.in/gxAgWhwW). This talk will cover a range of methods available for going from #drone capture to 3D visualization, including traditional #photogrammetry and more recent methods like #NeRF and #gaussiansplatting. We'll also cover mesh reconstruction and representation in Cesium #3DTiles, which can be streamed into many applications including NVIDIA's #Omniverse. GTC 2024 is going to be incredible. If you haven't registered yet, you can use the following link to get 25% off: https://lnkd.in/ekcDPNJ9 Jennifer Arnold | Alex Neefus | Kevin Berce | May Casterline
Rahul R. is using our #lidar along with the NVIDIA Jetson Orin Nano to power the Unitree Robotics GO1 quadruped for a critical application: search and rescue. Our Helios delivers the perception necessary for precise mapping and localization, enabling the GO1 to travel to places too dangerous for humans. We are at NVIDIA GTC right now in booth 731, showcasing how this is possible along with other key applications for our lidar technologies. Drop by! NVIDIA Robotics NVIDIA AI
Search and Rescue missions in disaster zones can be dangerous for humans. But what if we could use technology to make them safer? Over the past 10 weeks, I programmed a quadruped robot (Unitree Go1) to autonomously explore unknown environments and search for survivors. With facial recognition capabilities, it can navigate through confined spaces, enhancing the effectiveness of search and rescue operations. Here are the main features of the project:
🔌 Powered by the NVIDIA Jetson Orin Nano: executes all computations onboard.
🚀 Custom autonomous exploration algorithm: integrated with the Nav2 stack for seamless navigation in unfamiliar environments.
🗺 3D lidar and infrared camera-based navigation: a graph-based SLAM approach with an incremental appearance-based loop-closure detector, using RTAB-Map and ICP odometry to ensure precise mapping and localization.
👥 Object detection model: trained with YOLOv8 to identify and locate humans in challenging terrain, displaying their positions on the map for efficient rescue operations.
👤 Facial recognition capabilities: equipped with Deepface to store and compare faces, streamlining survivor identification and rescue coordination.
Details of the project can be found on my portfolio: https://lnkd.in/gszZCa-n Unitree Robotics Intel Corporation Open Navigation LLC Northwestern University IntRoLab RoboSense NVIDIA NVIDIA Robotics Open Robotics Intrinsic #searchandrescue #robotics #ROS2 #OpenRobotics #UnitreeGo1 #RTABMAP #IntelRealsense #3DLidar #OpenCV #Deepface #YOLOv8 #ObjectDetection #autonomousnavigation #autonomy #machinelearning
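The ICP odometry at the heart of this mapping stack can be sketched in a few dozen lines. This is not the RTAB-Map implementation, just a minimal brute-force point-to-point ICP in NumPy: match each source point to its nearest target point, solve for the rigid transform with the Kabsch/SVD method, and repeat:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Kabsch/SVD solution: rigid (R, t) minimizing ||R @ src_i + t - dst_i||."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=20):
    """Brute-force ICP: nearest-neighbor matching, rigid solve, repeat."""
    R_total, t_total = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

# Recover a small known motion between two "scans" of the same points.
rng = np.random.default_rng(0)
scan = rng.uniform(-1, 1, size=(60, 3))
theta = 0.1
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.05, -0.02, 0.01])
R_est, t_est = icp(scan, scan @ R_true.T + t_true)
```

Chaining these frame-to-frame estimates gives odometry; the loop-closure detector then corrects the accumulated drift in the pose graph. Production systems use k-d trees instead of the O(N²) distance matrix shown here.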
🌟 Sneak Peek Friday: Research Focus Episode! 📚🔬 In this week's episode of Sneak Peek Fridays, we are excited to highlight a significant milestone in our research efforts. Our co-founder, Dr. Philipp Rosenberger, has contributed to a newly published journal paper titled "Introducing the Double Validation Metric for Radar Sensor Models". This publication is a testament to our ongoing commitment to advancing sensor model validation techniques. The publication is a collaborative effort with our esteemed research partners from Institute of Automotive Engineering, Technische Universität Darmstadt (FZD) and we are proud to announce that it is available as Open Access. This means that everyone can benefit from the insights and findings presented in this groundbreaking work. The paper tackles our core topic at Persival: Credible Perception Sensor Simulation. While simulation models for perception sensors such as lidar, radar, and cameras exist at varying levels of detail, validating their accuracy remains an ongoing research challenge. The authors, led by Lukas Elster, explain the Double Validation Metric (DVM), previously applied to lidar sensor models, and extend it to radar sensor models through the DVM Map introduced here. This new method, demonstrated on real and simulated radar sensor data, provides detailed and accurate validation, reveals previously undetected simulation errors, and offers a more intuitive visualization of results using satellite imagery. The paper is published in Automotive and Engine Technology, a fully open access journal covering all aspects of automotive and commercial vehicle engineering and engine development. The journal provides in-depth articles by expert authors from academia and industry, and serves as an essential source of information for a global audience of automotive engineers. 
For more information and to read the full paper, visit: https://lnkd.in/eQSTDNqW Join us in exploring the future of simulation quality and enhancing our understanding of perception sensor simulation. Stay tuned as we bring you the latest advancements every Friday! #SneakPeekFridays #ResearchFocus #PersivalGmbH #TUDarmstadt #Collaboration #Innovation #AutomotiveEngineering #Radar #Perception #Sensor #Simulation #Model #Validation
🌟 Friday = an exciting new weekend read! 🌟 This week, we're spotlighting the innovative work of Scantinel Photonics, a global leader in LiDAR sensor technology. Scantinel Photonics has reinforced its position with the introduction of its next-generation Photonic Single Chip, based on standard CMOS technology. The new PIC features a fully integrated, massively parallel detector system for coherent LiDAR. Recently tested at Scantinel, the chip demonstrated a significant improvement in signal-to-noise ratio, about 20 dB better than previous solid-state LiDAR scanners. Designed for automotive LiDAR applications, the scanner-detector chip is a fully integrated, automotive-ready device comprising a photonic chip and a low-noise electronics board. With the enhanced SNR, the system achieves a tenfold reduction in LiDAR power consumption, enabling faster pixel rates. Unlike market systems that rely on proprietary technology or two-mirror scanners, this generation leverages the full advantages of FMCW technology over existing time-of-flight (ToF) LiDAR systems. PIC production has been fully transferred to high-volume standard CMOS fabrication, showcasing the advanced maturity of Scantinel's technology. For more details, click here: https://bit.ly/45YSfB7 #ScantinelPhotonics #LiDAR #PhotonicChips #Innovation #TechNews #AutomotiveTech
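For readers unfamiliar with FMCW: instead of timing a pulse, the sensor sweeps a chirp of bandwidth B over duration T and mixes the echo with the outgoing signal; the echo's delay appears as a beat frequency proportional to range. A minimal sketch of that relation (the numbers below are purely illustrative, not Scantinel's specifications):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def fmcw_range_m(f_beat_hz: float, bandwidth_hz: float, chirp_s: float) -> float:
    """FMCW ranging: a chirp of bandwidth B over duration T sweeps at
    slope B/T; the echo's round-trip delay 2R/c shows up as a beat
    frequency f_b = (B/T) * (2R/c), so R = f_b * c * T / (2 * B)."""
    return f_beat_hz * C * chirp_s / (2.0 * bandwidth_hz)

# Illustrative: a 1 GHz chirp over 10 microseconds with a 2 MHz beat
# tone places the target at roughly 3 m.
r = fmcw_range_m(2e6, 1e9, 10e-6)
```

Because range is read out as a frequency in the coherent mixing product, FMCW also yields per-point Doppler velocity and is far more tolerant of ambient light than pulsed ToF, which is the advantage the post refers to.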
Visualizing an entire dataset at once, whether pre-recorded or live-streaming, facilitates the detection of unexpected behaviors and critical events. For instance, you can easily navigate through the point cloud data, cross-referencing it with camera images and IMU readings, to gain a comprehensive understanding of the captured scene for further post-processing. The RVP "A Visual Benchmark in Rome" dataset was collected near the iconic Colosseum in Rome using a handheld setup, with a person walking around the area to gather comprehensive environmental information. The team used an assembly of two cameras in stereovision (for a detailed visual representation of the environment) and an Ouster OS0-128 lidar (for high-resolution 3D mapping and precise motion and orientation). This powerful combination enables them to digitize a variety of scenarios with high accuracy. The dataset was created by Leonardo Brizi, Emanuele Giacomini, Luca Di Giammarino, Simone Ferrari, Omar Salem, Lorenzo De Rebotti, and Giorgio Grisetti of the Robots, Vision, and Perception (RVP) group at Sapienza University of Rome. This innovative team specializes in 3D scene reconstruction and mobile robot perception, tackling challenges from SLAM to precise localization. Link to the dataset and more about the project in the comments. 👇 #DataViz #Analytics #Robotics
Robotics Data Visualized using Foxglove
Another example of why we use NavVis as one of our foundational tools for Digital Twins. Reach out if you're interested in learning more about how Digital Twins and BIM can benefit your next project.
We're honored to be mentioned in the recent NVIDIA blog post "Accelerating Data Center Design With Digital Twins", which showcases how #rapidinnovation is made possible using digital twin technologies. The #realitycapture process starts with mobile lidar scanning using the revolutionary NavVis VLX 3 system. From there, Prevu3D was used to create an accurate 3D model in NVIDIA Omniverse. This enabled the virtual removal of the existing hardware and the rapid design, optimization, and installation of the new compute and network infrastructure, finding and fixing potential issues before they even occur! Come see us at booth 531 in the Industrial Digitalization Pavilion at #GTC24 to meet our service delivery partners NavVis and Prevu3D, and see these exciting technologies in action! #nvidiagtc #digitaltwin #digitaltwins #ai #artificialintelligence #syntheticdata #scanning #laserscanning #slam #lidar #lidartechnology #industrial #industrialengineering #industrialai #realitycapture #computervision #nvidiaomniverse
Accelerating Data Center Design With Digital Twins