We present a fully-automatic system that takes a 3D scene and generates
plausible 3D human bodies that are posed naturally in that 3D scene. Given a 3D
scene without people, humans can easily imagine how people could interact with
the scene and the...
Training a neural network is synonymous with learning the values of the
weights. In contrast, we demonstrate that randomly weighted neural networks
contain subnetworks which achieve impressive performance without ever training
the weight values....
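The idea above can be sketched with a single fixed-weight layer: the random weights are never updated, a learned per-weight score decides which connections survive, and the forward pass uses only the top-scoring fraction. This is a minimal, generic "supermask"-style illustration with hypothetical names and a toy keep fraction, not the paper's actual implementation.

```python
import numpy as np

def subnetwork_forward(x, weights, scores, keep_frac=0.5):
    """Forward pass through a fixed, randomly initialized linear layer
    in which only the top-scoring fraction of weights is kept (all
    others are masked to zero).  In a full training loop, gradient
    updates would flow to `scores` only; `weights` stay frozen."""
    k = int(keep_frac * weights.size)
    # Threshold at the k-th largest score; continuous random scores
    # make ties vanishingly unlikely in this toy setting.
    threshold = np.sort(scores, axis=None)[-k]
    mask = (scores >= threshold).astype(weights.dtype)
    return x @ (weights * mask)

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4))       # fixed random weights
s = rng.standard_normal((4, 4))       # learnable scores (here random)
x = rng.standard_normal(4)
y = subnetwork_forward(x, W, s)
```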
Maybe every paper abstract should have a mandatory field of what the limitations of the proposed approach are. That way some of the science miscommunications and hypes could maybe be avoided. — Sebastian Risi (@risi1979) October 28, 2019

The media...
We present a new, fast and flexible pipeline for indoor scene synthesis that
is based on deep convolutional generative models. Our method operates on a
top-down image-based representation, and inserts objects iteratively into the
scene by predicting...
Ankur Handa: Generating Indoor Scenes via Deep Generative Models arxiv.org/abs/1811.12463
Their method takes a top-down image of a scene and inserts objects iteratively into the scene by predicting their category, location, orientation and size with separate neural network modules.
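The iterative insertion loop described above can be sketched as follows. The four predictor functions stand in for the paper's separate neural-network modules (each of which operates on the top-down scene image); their names, interfaces, and the random placeholders inside them are assumptions for illustration, not the authors' API.

```python
import random

# Hypothetical stand-ins for the four separate neural-network modules.
def predict_category(scene):
    # In the real pipeline this module also decides when to stop;
    # here None plays the role of a 'stop' prediction.
    return random.choice(["chair", "table", "lamp", None])

def predict_location(scene, category):
    return (random.uniform(0.0, 1.0), random.uniform(0.0, 1.0))

def predict_orientation(scene, category, location):
    return random.uniform(0.0, 360.0)          # degrees

def predict_size(scene, category, location, orientation):
    return random.uniform(0.5, 2.0)            # scale factor

def synthesize_scene(max_objects=10, seed=0):
    """Insert objects one at a time, conditioning each module on the
    partially built scene, until the category module predicts 'stop'."""
    random.seed(seed)
    scene = []
    for _ in range(max_objects):
        category = predict_category(scene)
        if category is None:                   # scene is complete
            break
        location = predict_location(scene, category)
        orientation = predict_orientation(scene, category, location)
        size = predict_size(scene, category, location, orientation)
        scene.append({"category": category, "location": location,
                      "orientation": orientation, "size": size})
    return scene
```

The point of the structure is that each module sees the scene as updated by all previous insertions, so later predictions can be consistent with earlier ones.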
We present a deep learning framework for accurate visual correspondences and
demonstrate its effectiveness for both geometric and semantic matching,
spanning across rigid motions to intra-class shape or appearance variations. In
contrast to previous...
Simulation is an anonymous, low-bias source of data where annotation can
often be done automatically; however, for some tasks, current models trained on
synthetic data generalize poorly to real data. The task of 3D human pose
estimation is a...
We present a framework for data-driven robotics that makes use of a large
dataset of recorded robot experience and scales to several tasks using learned
reward functions. We show how to apply this framework to accomplish three...
We present 6-PACK, a deep learning approach to category-level 6D object pose
tracking on RGB-D data. Our method tracks in real-time novel object instances
of known object categories such as bowls, laptops, and mugs. 6-PACK learns to...
Teleoperation offers the possibility of imparting robotic systems with
sophisticated reasoning skills, intuition, and creativity to perform tasks.
However, current teleoperation solutions for high degree-of-actuation (DoA),
multi-fingered robots are...
Dexterous multi-fingered hands can provide robots with the ability to
flexibly perform a wide range of manipulation skills. However, many of the more
complex behaviors are also notoriously difficult to control: Performing in-hand...
We present two novel solutions for multi-view 3D human pose estimation based
on new learnable triangulation methods that combine 3D information from
multiple 2D views. The first (baseline) solution is a basic differentiable...
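Combining 2D detections from calibrated views into a 3D point is classically done with the Direct Linear Transform (DLT), which is differentiable through the SVD. The sketch below is the generic textbook DLT with hypothetical names, shown here only to make the triangulation step concrete; it is not the authors' code.

```python
import numpy as np

def triangulate_dlt(proj_matrices, points_2d):
    """Recover a 3D point from its 2D projections in several calibrated
    views.  Each view contributes two linear constraints on the
    homogeneous 3D point; the solution is the null vector of the
    stacked system, found via SVD (autograd frameworks can
    backpropagate through this SVD, which is what makes such a
    baseline differentiable)."""
    rows = []
    for P, (u, v) in zip(proj_matrices, points_2d):
        rows.append(u * P[2] - P[0])   # u * (P3 . X) - P1 . X = 0
        rows.append(v * P[2] - P[1])   # v * (P3 . X) - P2 . X = 0
    A = np.stack(rows)
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                         # null vector of A
    return X[:3] / X[3]                # dehomogenize

# Two toy cameras: one at the origin, one shifted along x.
P1 = np.array([[1., 0., 0., 0.],
               [0., 1., 0., 0.],
               [0., 0., 1., 0.]])
P2 = np.array([[1., 0., 0., -1.],
               [0., 1., 0., 0.],
               [0., 0., 1., 0.]])
# Project a known 3D point (1, 2, 5) and triangulate it back.
pts_2d = [(0.2, 0.4), (0.0, 0.4)]
X = triangulate_dlt([P1, P2], pts_2d)   # recovers approximately (1, 2, 5)
```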
We use reinforcement learning (RL) to learn dexterous in-hand manipulation
policies which can perform vision-based object reorientation on a physical
Shadow Dexterous Hand. The training is performed in a simulated environment in
which we randomize...
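Randomization in simulation typically means resampling physical and visual parameters at the start of every training episode so the learned policy cannot overfit to any one simulator configuration. The parameter names and ranges below are illustrative assumptions, not the ones used in the work described above.

```python
import random

def randomize_physics(seed=None):
    """Sample a fresh set of simulator parameters for one training
    episode.  A policy trained across many such draws must be robust
    to the whole range, which is what helps it transfer to a physical
    robot whose true parameters are unknown."""
    rng = random.Random(seed)
    return {
        "object_mass": rng.uniform(0.05, 0.5),    # kg
        "friction": rng.uniform(0.5, 1.5),        # coefficient
        "actuator_gain": rng.uniform(0.8, 1.2),   # scale factor
        "camera_jitter": rng.uniform(0.0, 0.02),  # metres
    }

params = randomize_physics(seed=0)  # one episode's configuration
```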