A TDW simulation consists of two components: a) the Build, a compiled executable running on the Unity3D engine, which is responsible for image rendering, audio synthesis, and physics simulation; and b) the Controller, an external Python interface that communicates with the build.

Researchers write Controllers that send commands to the Build, which executes those commands and returns a broad range of data types representing the state of the virtual world. TDW offers:

- A general, flexible design that does not impose constraints on the types of use cases it can support, nor force any particular metaphor on the user.
- Support for multiple modalities: visual rendering with near-photoreal image quality, coupled with superior audio rendering fidelity.
- A comprehensive, highly extensible, and thoroughly documented command-and-control Python API.
- Multiple paradigms for object interaction, capable of generating physically realistic behavior.

TDW is being used on a daily basis in multiple labs, supporting research that sits at the nexus of neuroscience, cognitive science, and artificial intelligence.

Paper: "ThreeDWorld: A Platform for Interactive Multi-Modal Physical Simulation"
GitHub

AGENT: A Benchmark for Core Psychological Reasoning

A combined team from MIT, the MIT-IBM Watson AI Lab, and Harvard University recently released AGENT: A Benchmark for Core Psychological Reasoning. The benchmark consists of a large dataset of procedurally generated 3D animations, synthesized with TDW, that probes key concepts of core intuitive psychology.

For further details, please visit the AGENT website.

We introduce a visually guided and physics-driven task-and-motion planning benchmark, which we call the ThreeDWorld Transport Challenge. In this challenge, the Magnebot acts as an embodied agent and is spawned randomly in a simulated physical home environment. The agent must find a small set of objects scattered around the house, pick them up, and transport them to a desired final location.

For further details, please visit this website.

With version 1.8 of TDW, we introduce a new high-level, robotics-like API: the Magnebot. The Magnebot can move around the scene and manipulate objects by picking them up with its "magnet" end effectors. Its arms have 7 degrees of freedom, with 2 additional DOF coming from its torso, which can slide up and down and rotate around its central column.

At a low level, the Magnebot is driven by robotics commands such as set_revolute_target(), which turns a revolute drive. The simulation is entirely driven by physics. The high-level API combines the low-level commands into "actions" such as grasp(target_object) or move_by(distance). Arm articulation is driven by an inverse kinematics (IK) system: the arm calculates a solution to reach a specified target position or object.

The API also includes a wide variety of new interior scenes, populated with interactable objects and optimized for navigation by the Magnebot. In addition, users can now use their own robot models in TDW by importing standard URDF robot model descriptor files.

To see the Magnebot in action, watch this video.

Near-Photoreal Image Rendering

Our high-resolution 3D models are very detailed, which is important for photorealism, yet they are highly optimized for real-time simulation. TDW comes with a "core" library of 200+ models. In addition, our "full" photorealistic model library contains over 2,000 models across 200 object categories. We are exploring making this library available for licensing; for details, please go to this link. Users can also convert their own models for use inside TDW using our model conversion tools.

Many of our exterior environments are built using 3D model assets scanned from the real world (rock outcrops, ground surfaces). TDW's lighting model uses a single light source to simulate the sun for direct lighting; indirect or environment lighting comes from HDRI (High Dynamic Range Image) "skyboxes". TDW's 3D models use Physically-Based Rendering (PBR) materials, which respond to light in a physically correct manner.
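The Controller/Build command pattern described above can be sketched in a few lines. In TDW, a controller sends lists of JSON-style commands to the build, each a dict whose "$type" field names the command and whose remaining keys are its parameters; a real controller passes such a list to Controller.communicate(). The sketch below only builds and serializes a command list, so it runs without a TDW installation, and the specific command names and parameters are illustrative:

```python
import json

# A minimal sketch of TDW's command pattern. In a real controller these
# dicts would be passed to tdw's Controller.communicate(); here we only
# construct and serialize them. Command names/parameters are illustrative.
commands = [
    {"$type": "create_empty_environment"},
    {"$type": "teleport_object",
     "id": 1,
     "position": {"x": 0, "y": 0.5, "z": 0}},
    {"$type": "terminate"},
]

# The controller serializes the command list to JSON before sending it
# to the build over the network.
payload = json.dumps(commands)
print(payload)
```

The design choice this illustrates is that the API surface is data, not code: because every command is a plain serializable dict, the build can stay a fixed compiled executable while controllers compose arbitrary behavior from the command vocabulary.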
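The inverse-kinematics idea behind the Magnebot's arm articulation, described above, can be illustrated with a minimal two-link planar solver. This is a sketch of the underlying principle only, not TDW's implementation: the Magnebot's 7-DOF arms use a full IK solver, and the function names below are hypothetical.

```python
import math

def two_link_ik(x, y, l1=0.4, l2=0.4):
    """Solve shoulder/elbow angles for a 2-link planar arm reaching (x, y).

    Hypothetical helper illustrating IK; the Magnebot's actual arm has
    7 DOF and is solved by a full IK system.
    """
    d2 = x * x + y * y
    # Law of cosines gives the elbow angle from the target distance.
    c2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    elbow = math.acos(c2)
    shoulder = math.atan2(y, x) - math.atan2(
        l2 * math.sin(elbow), l1 + l2 * math.cos(elbow))
    return shoulder, elbow

def forward(shoulder, elbow, l1=0.4, l2=0.4):
    """Forward kinematics: end-effector position for given joint angles."""
    return (l1 * math.cos(shoulder) + l2 * math.cos(shoulder + elbow),
            l1 * math.sin(shoulder) + l2 * math.sin(shoulder + elbow))

# Solving for a reachable target, then running forward kinematics on the
# result, reproduces the requested position.
shoulder, elbow = two_link_ik(0.5, 0.3)
ex, ey = forward(shoulder, elbow)
```

This round trip (solve, then verify with forward kinematics) is the property a reach-for-target action relies on: given a target position, the solver returns joint angles that place the end effector there.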