The Leddar PixSet dataset visualized using Foxglove! Let's go 🚘

The Leddar PixSet dataset is a comprehensive, publicly available resource for autonomous vehicle (AV) research and development. It includes data from a full AV sensor suite (cameras, lidars, radar, and IMU), featuring unique full-waveform data from the 3D Leddar Pixell lidar sensor. This dataset enhances traditional lidar point cloud data, expanding possibilities for advanced 3D computer vision research.

Achieving high-level AV autonomy requires integrating data from various sensors, each with its own strengths and weaknesses. Sensor fusion techniques, which combine data from these diverse sensors, are essential to boost the robustness and performance of AV algorithms. PixSet enables researchers and engineers to develop and test AV software using pre-recorded, multi-sensor data without the need for independent data collection.

PixSet comprises 97 sequences with approximately 29,000 annotated frames, each marked with 3D bounding boxes for tracking and analysis. Collected across diverse Canadian urban and suburban areas, highways, and various lighting and weather conditions, the data offers a realistic range of scenarios for autonomous driving studies. The dataset defines 22 object classes, with each object assigned a unique ID across frames to support tracking algorithm development. Pedestrians are handled as a special case, with variable bounding box sizes to accommodate changes in posture, improving accuracy in both training and inference.

Link to the dataset and more info in the comments 👇

The Leddar PixSet dataset is made possible by the contributions of Jean-Luc Deziel, Pierre Merriaux, Francis Tremblay, Dave Lessard, Dominique Plourde, Julien Stanguennec, Pierre Goulet, and Pierre Olivier. Merci beaucoup 🇨🇦 🙏
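To make the annotation scheme above concrete, here is a tiny illustrative sketch (plain Python, not the actual PixSet SDK; every field and class name here is an assumption) of how one annotated frame with tracked 3D boxes might be modeled:

```python
# Illustrative only: a simplified model of PixSet-style frame annotations.
# Field names are assumptions, not the dataset's actual schema.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Box3D:
    track_id: int                       # stays constant for the same object across frames
    category: str                       # one of the dataset's 22 object classes, e.g. "pedestrian"
    center: Tuple[float, float, float]  # x, y, z in the ego frame (meters)
    size: Tuple[float, float, float]    # length, width, height (meters); variable for pedestrians
    yaw: float                          # heading about the vertical axis (radians)


@dataclass
class AnnotatedFrame:
    sequence: str        # one of the 97 sequences
    frame_index: int
    timestamp_ns: int
    boxes: List[Box3D]


def track(frames: List[AnnotatedFrame], track_id: int) -> List[Box3D]:
    """Collect one object's boxes across frames, e.g. to evaluate a tracker."""
    return [b for f in frames for b in f.boxes if b.track_id == track_id]
```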
Foxglove
Software Development
San Francisco, CA · 10,671 followers
Visualize, debug, and manage multimodal data in one purpose-built platform for robotics and embodied AI development.
About us
Foxglove is pioneering a new era for embodied AI and robotics development. Our powerful interactive visualization and data management capabilities empower robotics developers to understand how their robots sense, think, and act in dynamic and unpredictable environments. All with the performance and scalability needed to create autonomy and build better robots, faster.
- Website
- https://foxglove.dev
- Industry
- Software Development
- Company size
- 11-50 employees
- Headquarters
- San Francisco, CA
- Type
- Privately Held
- Specialties
- Multimodal Data Visualization, Multimodal Data Management, Robotics Development, and Autonomous Robotics Development
Products
Foxglove | Robotics Data Visualization and Management
Data Visualization Software
Understand how your robots sense, think, and act through Foxglove's multimodal data visualization and management platform.
- Connect directly to your robot or inspect pre-recorded data (see the recording sketch after this list).
- Use interactive visualizations in customizable layouts to quickly understand what your robot is doing.
- Overlay 2D and 3D images with lidar, point cloud, and other sensor data, including annotations and bounding boxes, to see what your robot sees.
- Display and control robot models in interactive 3D scenes to understand how your robots move through the world.
- Analyze everything as you travel through your robot's mission and journey with Timeline and events.
- Build bespoke panels, convert custom messages, and alias topic names to support your team's unique development workflows.
- Use Foxglove offline, store your data at the edge, then in your cloud or ours, no matter what your connectivity or data constraints are.
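As a concrete illustration of the pre-recorded data workflow mentioned in the list above, here is a minimal sketch that writes a small JSON-encoded message stream to an MCAP file (the log format Foxglove reads natively) using the open-source `mcap` Python package; the topic and schema are made up for the example.

```python
# Sketch: record a few JSON messages into an MCAP file that Foxglove can open.
# The topic name and schema are illustrative, not a Foxglove-prescribed layout.
import json
import time

from mcap.writer import Writer

with open("example.mcap", "wb") as stream:
    writer = Writer(stream)
    writer.start()

    # Register a JSON schema describing the message payload.
    schema_id = writer.register_schema(
        name="BatteryState",
        encoding="jsonschema",
        data=json.dumps({
            "type": "object",
            "properties": {"voltage": {"type": "number"}},
        }).encode(),
    )

    # Register a channel (topic) that uses the schema.
    channel_id = writer.register_channel(
        topic="/battery",
        message_encoding="json",
        schema_id=schema_id,
    )

    # Log a handful of timestamped messages.
    for i in range(10):
        now = time.time_ns()
        writer.add_message(
            channel_id=channel_id,
            log_time=now,
            publish_time=now,
            data=json.dumps({"voltage": 12.0 - 0.01 * i}).encode(),
        )

    writer.finish()
```

Opening the resulting file in Foxglove would show the `/battery` topic plotted over time, the same way any other pre-recorded log is inspected.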
Locations
- Primary: San Francisco, CA, US
Updates
-
Rather than launching one node at a time, we can leverage ROS 2 launch files to execute and configure multiple nodes with a single command. You can even pull in nodes from other packages to run different processes. 🚀 Check out this tutorial, where we will cover how to use a #ROS 2 launch file to run multiple nodes, configure them, and group them into meaningful namespaces. Link in the comments 👇
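As a taste of what the tutorial covers, here is a minimal ROS 2 Python launch file sketch: it starts two nodes, overrides a parameter, and groups everything under one namespace. The package and executable names (turtlesim) are just placeholders, not the tutorial's exact example.

```python
# bringup.launch.py - minimal sketch of a ROS 2 launch file
from launch import LaunchDescription
from launch.actions import GroupAction
from launch_ros.actions import Node, PushRosNamespace


def generate_launch_description():
    return LaunchDescription([
        GroupAction([
            # Everything in this group runs under the /sim namespace
            PushRosNamespace("sim"),
            # A node with an inline parameter override
            Node(
                package="turtlesim",
                executable="turtlesim_node",
                name="sim_node",
                parameters=[{"background_b": 200}],
            ),
            # A second node pulled in from the same (or another) package
            Node(
                package="turtlesim",
                executable="turtle_teleop_key",
                name="teleop",
                output="screen",
            ),
        ]),
    ])
```

Once installed in a package's launch directory, it runs with a single command, e.g. `ros2 launch <your_package> bringup.launch.py`.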
-
“As we scale the business, it’s critical for the team to keep focused on the work that’s delivering our Embodied AI; using Foxglove gives us that critical time back.” J’aime Laurenson, Product Lead, Wayve

Check out the Foxglove + Wayve case study to learn how Wayve learns faster using Foxglove. 💜 Success story 👉 https://lnkd.in/gZsmHbDi
-
Foxglove reposted this
At Electric Sheep we are taking a different approach to RFMs. Rather than boil the ocean with data and compute... we are leveraging synthetic data to predict common robotic representations, then fine-tuning on specific tasks. Check out my recent talk at Foxglove's Actuate 2024 ... to learn more!
-
Foxglove reposted this
Grateful to have wrapped up an incredible experience at IROS 2024 in Abu Dhabi! 🤖 During the conference, Chinmay Samak and I delivered a presentation on how the F1Tenth Foundation and AutoDRIVE Ecosystem can come together to promote reproducibility and benchmarking in robotics research through competitions and digital twinning. Here's a glimpse of what we presented:

🧠 Presented an overview of F1TENTH and the AutoDRIVE Ecosystem and highlighted their individual as well as combined capabilities. Stressed the open-source nature and availability of benchmarks, standards, scaffolded experiments, curated datasets, research and education material, software development kits, simulation tools, hardware testbeds, global competitions, detailed documentation, and an international community – all ready to support #reproducibility and #benchmarking in robotics research.

🔁 Discussed the digital twinning of F1TENTH racecars and racetracks using the AutoDRIVE Ecosystem, thereby bridging the gap between simulation and reality from both ends (#real2sim and #sim2real). Described the F1TENTH Sim Racing framework developed using the AutoDRIVE Ecosystem, highlighting how we leveraged best practices like distributed computing, headless and graphical rendering paradigms, and Docker containerization for dependency management and workload isolation to yield reliable software reproducibility.

📈 Highlighted the importance of #data in reproducibility and benchmarking, and demonstrated the framework's capability to record and replay various data streams in a time-synchronized fashion, along with the ability to automate data analysis and decision making. Presented an in-depth analysis of race metrics (lap count and time, collision count, total and adjusted race time, best lap time, etc.) and vehicle KPIs (pose, twist, acceleration, wheel displacement, actuator feedback, LIDAR point cloud, FPV camera frame, etc.) using Foxglove.

But the adventure didn't quite end there! Chinmay Samak and I also had the incredible opportunity to serve as race stewards and co-organize the 21st F1TENTH Autonomous Grand Prix. The event was a thrilling success where innovation and competition collided to push the boundaries of autonomous racing! 🏁

A heartfelt expression of gratitude to our advisor, Dr. Venkat N. Krovi, for his continued support. We are truly grateful for the chance to be a part of such a dynamic community. We had the chance to connect with some of the greatest minds in the field from academia as well as industry, and to discuss our research and potential engagements. Thank you IROS 2024, for an unforgettable experience! 🙌

ARMLab CU-ICAR Clemson University International Center for Automotive Research Clemson Automotive Engineering Clemson University College of Engineering, Computing and Applied Sciences Clemson University IEEE IEEE Robotics and Automation Society

#IROS2024 #Robotics #AutonomousSystems #AI #DigitalTwins #AutoDRIVE #F1TENTH #Innovation #Research #PhD
-
🚨 #Actuate2024 speaker videos and event photos are now available 🚨

It's been just over a month since #Actuate2024, so we thought it would be a great time to:
1. Share our Founder and CEO's reflections on the day.
2. Release the recording of each fantastic presenter's talk.
3. Share some great photos from the event!

Links to Adrian Macneil's thoughts, the presentation recordings, and photos can be found in the comments below 👇

Another huge thank you to all of you [the robotics community], our sponsors, and to all the speakers for making Actuate 2024 incredible.

🔥 Brad Porter, CEO & Founder at Cobot - Collaborative Robotics, Inc (co.bot)
🔥 Sergey Levine, Co-founder at Physical Intelligence and Professor at UC Berkeley
🔥 Chris Lalancette, ROS 2 Technical Lead at Intrinsic
🔥 Vijay Badrinarayanan, VP of Artificial Intelligence at Wayve
🔥 Steve Macenski, Owner & Chief Navigator at Open Navigation LLC
🔥 Nick Obradovic, Senior Director of Software Engineering at Sanctuary AI
🔥 Kalpana Seshadrinathan, Head of Deep Learning at Skydio
🔥 Katherine Scott, Relations at Open Robotics
🔥 Vibhav A., Co-founder & VP of Software at Saronic Technologies
🔥 Michael Laskey, CTO at Electric Sheep
🔥 David Weikersdorfer, Head of Autonomy at Farm-ng
🔥 Jeremy Steward, Senior Software Engineer at Tangram Vision
🔥 James Kuszmaul, Senior Robotics Software Engineer at Blue River Technology
🔥 Karthik Balaji Keshavamurthi, Motion Planning Engineer at Scythe Robotics
🔥 Kathleen Brandes, CTO & Co-founder at Adagy Robotics
🔥 Rohan Ramakrishna, Senior Staff Engineer, Data Platform at Agility Robotics
🔥 Ryan Cook, Director of Software Engineering at Agility Robotics
🔥 Ilia Baranov, CTO & Co-founder at Polymath Robotics
🔥 Simon Box, CEO at ReSim.ai
🔥 Allison Thackston, Senior Manager Robotics at Blue River Technology
🔥 Rajat Bhageria, CEO & Founder at Chef Robotics

Onward to #Actuate2025 🚀
-
DROID + DepthAnythingV2 + Foxglove. Let's go!

We had a lot of fun processing the DROID dataset with DepthAnythingV2 models, achieving fine-grained details and enhanced depth accuracy, all visualized through Foxglove.

DepthAnythingV2, trained on 595K synthetic labeled images and 62M+ real unlabeled images, is a cutting-edge monocular depth estimation (MDE) model. It surpasses V1 by producing more robust depth predictions through three improvements: utilizing synthetic images, scaling up the teacher model, and leveraging large-scale pseudo-labeled real images. Compared to models built on Stable Diffusion, the DepthAnythingV2 models are over 10x faster and more accurate, with parameter scales ranging from 25M to 1.3B, ensuring broad applicability. Fine-tuned with metric depth labels, these models demonstrate strong generalization capabilities, as shown in this visualization example.

Additionally, we integrated time-series data for torques and velocities from the 7 DoF Franka Robotics Panda Arm used in the dataset.

The DepthAnythingV2 project was made possible by contributions from Lihe Yang, Bingyi Kang, Zilong Huang, Zhen Zhao, Xiaogang Xu, Jiashi Feng, and Hengshuang Zhao. Check out the project and Hugging Face space for more information. Links in the comments.

#DataViz #Robotics #Analytics #ComputerVision
Robotics data visualization using Foxglove.
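For anyone who wants to try something similar, here is a minimal sketch of monocular depth estimation with a Depth Anything V2 checkpoint via the Hugging Face transformers pipeline; the checkpoint ID and image path are assumptions, and the Foxglove visualization step is left out.

```python
# Minimal monocular depth estimation sketch using Hugging Face transformers.
# The checkpoint name and image path below are assumptions for illustration.
from PIL import Image
from transformers import pipeline

# Load a small Depth Anything V2 checkpoint as a depth-estimation pipeline.
depth_estimator = pipeline(
    task="depth-estimation",
    model="depth-anything/Depth-Anything-V2-Small-hf",
)

# Run inference on a single RGB frame (e.g. one camera frame from DROID).
image = Image.open("frame_000001.png")
result = depth_estimator(image)

# result["depth"] is a PIL image of the predicted depth map;
# result["predicted_depth"] holds the raw tensor. Save the map for inspection.
result["depth"].save("frame_000001_depth.png")
```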
-
Stop by our booth at #ROSCon2024 to see our Isaac Sim extension that helps developers visualize and debug simulation data in Foxglove
🎉 Announced at #ROSCon: Generative AI tools and enhanced simulation workflows are now available for ROS developers. Learn more about "simulation-first" workflows and explore the latest announcements from our robotics ecosystem. 🤖 Read the blog > https://nvda.ws/4ecqE1X #GoROS #NVIDIAJetson
-
-
The Foxglove team is out at #ROSCon2024! Stop by our booth to see live demos + enter our raffle for a chance to win an NVIDIA Jetson Orin Nano