🏁 And with that, #CVPR2024 has begun! Here's where to catch Wayve speaking this week 👇

🗓 June 17
- Foundation Models for Autonomous Systems, Speaker: Alex Kendall, 10:40-11:20
- Workshop on Autonomous Driving, Speaker: Alex Kendall, 16:00-16:30

🗓 June 18
- Women in Computer Vision Workshop, Speakers: Ana-Maria Marcu, Sofía Josefina Lago Dudas, and Sasha (Alexandra) Harrison (she/her), 09:55-10:00
- SAIAD, Speaker: Cameron Tuckerman-Lee, 09:10-09:40
- Generative World Models, Speaker: Gianluca Corrado, 09:00-09:40
- E2E Autonomous Driving: Wayve Workshop, Speakers: Jamie Shotton, Nikhil M., Gianluca Corrado, Oleg Sinavski, Elahe Arani, 12:00-18:00
- VLADR, Speaker: Long Chen, 15:00-15:45
- Workshop on Data-Driven Autonomous Driving Simulation, Speaker: Jamie Shotton, 16:00-16:30

To keep up with all Wayve's activities, visit our dedicated CVPR 2024 page 👉 https://lnkd.in/efgVuENt
Wayve’s Post
-
After finishing my work on Multiformer (a lightweight multi-task vision model for use in AV perception stacks), I started to investigate taking things to the next level by adding a planning and control stack to train using reinforcement learning (RL). Along the way, I started to get the unshakeable feeling that this form of modular AV stack development is incredibly prone to cascading error, and bound to inherit a collection of weaknesses from the numerous submodules and problem formulations involved.

Still, I knew "black box" end-to-end solutions were considered inexcusably uninterpretable in safety-critical applications. Aware of Sutton's "Bitter Lesson," and witnessing the homogenizing force of transformers and sequence modeling problem formulations in natural language and computer vision, I felt that it was probably only a matter of time before this coming wave would wash away complex schematics like the one I had in front of me.

As I read on, I realized I wasn't alone in this feeling. Large multimodal models had already been successfully applied in end-to-end RL frameworks like IRIS, and were now making a splash in AVs with Wayve's AV2.0 initiative and GAIA-1, which allows multimodal conditioning of video generation. These are exciting developments, but this can go much further. If GAIA-1 had multimodal output to produce actions and text, it could control a robot and explain the actions it chose. If the black box could talk, it wouldn't be so uninterpretable.

In this article, I invite you to join me in discovering how the development of large generalist models stands to reshape the current landscape of autonomous robotics. #autonomousvehicles #robotics #lmm #llm #reinforcementlearning
Navigating the Future
link.medium.com
-
This is something to keep an eye on. The cross-pollination of different disciplines is key to getting value out of a system. It is not always obvious, but adding automation in one area does not necessarily make the entire system better, because bottlenecks remain downstream. (As a result, automation does not always justify its cost.) By taking a systems approach and incorporating all the stakeholders, adoption can proceed with fewer obstacles and less waste. #leanmanufacturing #robotics #systems #synergy #spatialweb
We have launched a new website, NSF-GCR: Community Embedded Robotics, a multidisciplinary effort to leverage methods across communication science, autonomous systems, data science, and physiological sensing to study the interaction of service robots in working communities such as our university campus. #nsfgcr #utexas #goodsystems #atx
NSF-GCR: Community Embedded Robotics
sites.utexas.edu
-
Today, I attended the excellent halftime seminars of Mart Kartašev and Jonathan Styrud at the Division of Robotics, Perception, and Learning at KTH Royal Institute of Technology in Stockholm as their opponent. They and their supervisors Christian Smith and Petter Ögren research learning, planning, and optimization methods for #BehaviorTrees in #robotics. I wish them both an equally successful second half of their #PhD studies. #KTH #BTs #reinforcementlearning #Bayesian #optimization #ABB #WASP WASP – Wallenberg AI, Autonomous Systems and Software Program
-
Check out this fascinating research by Tony McDonald at University of Wisconsin-Madison! It applies to a lot more than adaptive driving of vehicles. This should be the new science of organizational "driving" and innovation (exploration/exploitation) too. It sheds light on how we know, learn, and act. I learned a lot from the work of Kathleen Eisenhardt, Stephan Haeckel, Nolan, Norton & Co., Donald Sull, and Henry Mintzberg (off the top of my head). But where my personal growth has been the greatest is in belief -> action planning. And I found it in an odd place: golf, thanks to Professor Mark Broadie (https://lnkd.in/gH57f8E8). Professor Broadie is a mathematician and expert in sports analytics. His application of data to the intersection of player ability, course, and statistics/probabilities opened up a new frontier for me. I can't see the world any differently now. :) cc: UW E-Business Consortium
Our lab's latest work, a collaboration with Texas A&M and Waymo, is now available online: https://lnkd.in/gN5uWZd8 The work shows how active inference can be used to model adaptive driving behavior and illustrates how drivers resolve uncertainty by balancing epistemic and pragmatic actions.
Resolving uncertainty on the fly: modeling adaptive driving behavior as active inference
frontiersin.org
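The balance the paper describes can be made concrete with the standard discrete active-inference decomposition of (negative) expected free energy into a pragmatic term (how well predicted outcomes match preferences) and an epistemic term (how much the policy's outcomes would reduce uncertainty about hidden states). This is a minimal numerical sketch, not code from the paper: the function name and the toy distributions are made up for illustration.

```python
import numpy as np

def policy_value(q_states, likelihood, log_prefs):
    """Score a policy as pragmatic value + epistemic value.

    q_states:   predicted hidden-state distribution under the policy, shape (S,)
    likelihood: P(o|s), shape (O, S)
    log_prefs:  log preferences over outcomes, shape (O,)
    """
    q_obs = likelihood @ q_states  # predicted outcome distribution, shape (O,)
    # Pragmatic value: expected log preference of predicted outcomes.
    pragmatic = q_obs @ log_prefs
    # Epistemic value: expected information gain about hidden states,
    # i.e. the mutual information between states and outcomes.
    joint = likelihood * q_states  # P(o|s) q(s), shape (O, S)
    eps = 1e-12                    # avoids log(0) on impossible outcomes
    epistemic = np.sum(joint * (np.log(joint + eps)
                                - np.log(q_obs[:, None] + eps)
                                - np.log(q_states[None, :] + eps)))
    return pragmatic + epistemic, pragmatic, epistemic

# A "glance at the other vehicle" policy whose observations identify the state
# scores higher epistemically than one whose observations are uninformative.
q = np.array([0.5, 0.5])
_, _, epi_look = policy_value(q, np.eye(2), np.zeros(2))
_, _, epi_blind = policy_value(q, np.full((2, 2), 0.5), np.zeros(2))
```

With zero preferences the comparison isolates the epistemic term: the informative policy gains one bit (ln 2 nats) about the hidden state, the uninformative one gains nothing, which is exactly the "resolve uncertainty on the fly" behavior the paper models.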
-
"Dreaming To Be a Great Mechatronics Scientist and Create the Perfect Humanoid ": Sophomore at University of Texas at Dallas : Researcher in Soft Underwater Robotics and Humanoid Robotics
We're still celebrating our collaboration with NVIDIA Robotics! 🥳 Announced early last week, Apollo will integrate with NVIDIA’s new general-purpose foundation model for robot learning, Project GR00T, to advance AI for general-purpose humanoid robots. In case you missed the news, you can read more about it here: https://lnkd.in/gat_zF9B
-
Graduate Electronic and Electrical Engineer @ Coventry University - Innovating the Future of Technology - Integrating Electronics with Robotics for Future Innovation.
I'm excited to dive deeper into the world of robotics with my Car Robot with Arduino R3. Over the coming months, I'll be tackling several intriguing projects, each designed to push the boundaries of what's possible in robotic technology. Here's what's on the horizon:

- Line Following and Obstacle Navigation: Perfecting precise navigation skills on complex paths.
- Maze Solver: Developing algorithms for autonomous exploration and mapping.
- AI Integration for Autonomous Decisions: Applying deep learning for self-driving behavior. For this project I will replace the Arduino with a Raspberry Pi.
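The line-following step above usually comes down to a proportional controller over two reflectance sensors. This is a hedged sketch of one control iteration, not the poster's actual code: the function name, sensor convention (0-1023, higher = darker), and PWM range (0-255, as on Arduino motor shields) are assumptions for illustration.

```python
def line_follow_step(left_ir, right_ir, base_speed=150, k_p=0.6):
    """One control step of a proportional line follower.

    left_ir / right_ir: reflectance readings (0-1023, higher = darker line).
    Returns (left_pwm, right_pwm) motor commands clamped to 0-255.
    """
    error = left_ir - right_ir       # > 0 means the line drifted left
    correction = k_p * error
    left = base_speed - correction   # slow the left wheel to steer left
    right = base_speed + correction
    clamp = lambda v: max(0, min(255, int(v)))
    return clamp(left), clamp(right)

# Centered on the line: both wheels run at base speed.
# line_follow_step(500, 500) -> (150, 150)
```

Tuning `k_p` trades responsiveness against oscillation; the same structure extends to PID once the robot needs to hold tighter curves.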
-
Hey, self-driving, robotics, and LLM (:wink:) community! Can we incorporate Large Language Models (and VLMs, and MLLMs) into the Autonomy Stack? For explanation and reasoning - definitely. But can we use foundation models for Traffic Simulation, Plan Generation, and even onboard Anomaly Detection? Answers to these and other questions can be found in an inspiring new technical talk by Professor Marco Pavone (NVIDIA), "Rethinking AV Development with AV Foundation Models" (recording: https://lnkd.in/g_vUwqb7 ), which was recently presented to Nuro engineers. Let's bring scientific knowledge to bear on everyday life together! #machinelearning #robotics #selfdriving #llm #vlm #techtalks
-
💫 Faces of EXA4MIND 💫 We want to introduce the people who contribute to achieving our goal: building an extreme-data platform that enables advanced data analytics on supercomputers and automated data management. Today we present Antonín Vobecký from Czech Technical University in Prague, a machine learning researcher on EXA4MIND. 🗣 "This project will really push the boundaries of how machines can automatically learn from large-scale data in autonomous driving." Watch his interview ⤵ #MachineLearning #AutonomousDriving #Data #MeetTheTeam #EXA4MIND
-
Cobots & Robotics Expert ><> Next Level of Automation · Global Key Account Manager, Universal Robots
Exciting times! ✨ Greg Smith, CEO of Teradyne, was interviewed on CNBC to talk about robotics, AI, and our new collaboration with NVIDIA, which will add the power of accelerated computing to robotics. In the future, I'm sure AI will unlock robot applications that we can't even imagine today 🦾 Watch the 5-minute interview here: https://lnkd.in/dcSDHjt3
-
Robotics, #autonomousvehicles, and #medicalimaging use cases rely on #computervision and image processing. Develop and learn about real-time applications and solutions using NVIDIA software: https://nvda.ws/3X6gM1y
Build Real-Time Computer Vision Applications and Solutions with NVIDIA Software