Modeling Grasp Motor Imagery

Date
2016-09-08
Authors
Veres, Matthew
Publisher
University of Guelph
Abstract

Humans have an innate ability to perform complex grasping maneuvers, yet transferring this ability to robotics is a daunting task. A primary source of this difficulty is that robots will eventually need to operate within unstructured environments, and will require the capacity to learn and generalize between scenarios. A second issue is that grasping does not follow a classical one-to-one paradigm: a single grasp may be applied to many different objects, and a single object may be grasped in many different ways. In this thesis, we investigate how techniques within the Deep Learning (DL) framework can be leveraged to translate high-level concepts such as motor imagery to the problem of robotic grasp synthesis. This work explores a paradigm for learning integrated object-action representations, and demonstrates its capacity for capturing and generating multimodal, multi-finger grasp configurations on a simulated grasping dataset.

Keywords
Deep Learning, Robotic grasping, Multimodal grasping, Autoencoders, Conditional variational autoencoders, Joint embeddings