Meta's Habitat 3.0 takes the next step towards socially intelligent robots. This simulator enables collaboration between humans and robots in home-like settings, with a focus on training navigation agents more efficiently through the Habitat Synthetic Scenes Dataset. In addition, FAIR introduces HomeRobot, an affordable platform for home robots that can operate in both simulated and real-world environments. These advancements aim to accelerate research in socially intelligent AI agents while ensuring safety and scalability. Learn more about Habitat 3.0 and the possibilities of collaboration, communication and future state prediction: https://bit.ly/49zLpU1
Habitat 3.0 is a simulator designed to develop social embodied agents that assist and cooperate with humans. It supports both humanoid avatars and robots, allowing the study of collaborative human-robot tasks in home-like environments. To ensure generalization of our AI models, the simulator offers a wide array of human poses and appearances, with multiple gender representations and body shapes. It also supports a broad spectrum of actions, ranging from simple behaviors like walking and waving to more complex behaviors like interacting with objects. Diversity extends to the scenes as well: Habitat 3.0 utilizes the Habitat Synthetic Scenes Dataset, a collection of over 200 scenes and over 18,000 objects.

Another pivotal feature of Habitat 3.0 is a human-in-the-loop tool for interactive evaluation of AI agents. Through this tool, humans can collaborate with autonomous robots using a mouse and keyboard or a virtual reality interface.

Aiming at reproducible and standardized benchmarking, Habitat 3.0 presents two collaborative human-robot tasks. The first task, social navigation, involves the robot finding and following a humanoid avatar while maintaining a safe distance. Think of scenarios like having a video call while moving around your home. The second task, social rearrangement, involves the robot working collaboratively with the humanoid avatar to move a set of objects from their initial placement to their desired location. The agents must coordinate to achieve this goal together as efficiently as possible.

We conduct an in-depth study of different baselines on both tasks. Here we show one of our end-to-end learned policies on the social navigation task. The robot adeptly navigates an unseen environment, locating and following the humanoid avatar while maintaining a safe distance. Notice that the robot yields space to the avatar, allowing it to move unobstructed.
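To make the social navigation objective concrete, here is a minimal sketch of a distance-based reward for find-and-follow behavior. The function name, thresholds, and shaping are illustrative assumptions, not Habitat 3.0's actual reward definition:

```python
import math

# Assumed thresholds (meters), chosen for illustration only.
SAFE_DIST = 1.0    # minimum comfortable distance to the human
FOLLOW_DIST = 2.0  # preferred following distance

def social_nav_reward(robot_xy, human_xy):
    """Hypothetical reward: penalize invading the human's personal space,
    and reward staying close to the preferred following distance."""
    d = math.dist(robot_xy, human_xy)
    if d < SAFE_DIST:
        # Too close: violating the safe distance is penalized.
        return -1.0
    # Reward decays linearly as the robot drifts from the preferred distance.
    return max(0.0, 1.0 - abs(d - FOLLOW_DIST) / FOLLOW_DIST)
```

A real implementation would also account for visibility of the human and collision penalties; this sketch only captures the distance-keeping trade-off described above.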
Here's an episode of the social rearrangement task, where a learned policy efficiently splits the task between the robot and the humanoid avatar, improving efficiency over the avatar operating alone. These findings also extend to our human-in-the-loop study, where learned robot policies enhance human efficiency. Please refer to the detailed results section in the paper.
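One simple way to picture splitting a rearrangement task between two agents is a proximity-based assignment: each object goes to whichever agent is currently closer. This is a hypothetical baseline-style heuristic, not the learned policy from the paper:

```python
import math

def split_objects(robot_xy, human_xy, objects):
    """Greedily assign each object to 'robot' or 'human' by proximity.
    `objects` maps object names to (x, y) positions; ties go to the robot."""
    assignment = {}
    for name, pos in objects.items():
        d_robot = math.dist(robot_xy, pos)
        d_human = math.dist(human_xy, pos)
        assignment[name] = "robot" if d_robot <= d_human else "human"
    return assignment
```

For example, with the robot at one end of a room and the human at the other, each agent handles the objects on its own side, roughly halving the distance either would travel alone.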