
Modeling Grasp Motor Imagery


dc.contributor.advisor Taylor, Graham
dc.contributor.advisor Moussa, Medhat
dc.contributor.author Veres, Matthew
dc.date.accessioned 2016-09-08T15:19:04Z
dc.date.available 2016-09-08T15:19:04Z
dc.date.copyright 2016-08
dc.date.created 2016-09-02
dc.date.issued 2016-09-08
dc.description.abstract Humans have an innate ability for performing complex grasping maneuvers, yet transferring this ability to robotics is an extremely daunting task. A primary culprit for this difficulty is that robots will eventually need to operate within unstructured environments, and will require capabilities for learning and generalizing between scenarios. A second issue is that grasping does not follow the classical one-to-one paradigm; a single grasp may be applied to many different objects, and a single object may be grasped in many different ways. In this thesis, we investigate how techniques within the Deep Learning (DL) framework can be leveraged to translate high-level concepts such as motor imagery to the problem of robotic grasp synthesis. This work explores a paradigm for learning integrated object-action representations, and demonstrates its capacity for capturing and generating multimodal, multi-finger grasp configurations on a simulated grasping dataset. en_US
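The abstract's one-to-many observation (a single object admits many grasps) maps naturally onto the conditional variational autoencoder listed among the subject terms: at generation time, object features condition a decoder while a latent code sampled from a shared prior supplies the variation. The sketch below is purely illustrative, not the thesis's architecture: it uses NumPy with randomly initialized weights and made-up dimensions to show only the generative pass of such a model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not from the thesis): an object
# feature vector, a latent code, and a multi-finger grasp configuration
# (e.g. a vector of joint angles).
OBJ_DIM, LATENT_DIM, GRASP_DIM, HIDDEN = 16, 4, 9, 32

# Randomly initialized decoder weights stand in for a trained network.
W1 = rng.normal(0.0, 0.1, (OBJ_DIM + LATENT_DIM, HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0.0, 0.1, (HIDDEN, GRASP_DIM))
b2 = np.zeros(GRASP_DIM)

def decode_grasp(obj_features, z):
    """Decoder p(grasp | object, z): one tanh hidden layer."""
    h = np.tanh(np.concatenate([obj_features, z]) @ W1 + b1)
    return h @ W2 + b2

def sample_grasps(obj_features, n=5):
    """Sample n candidate grasps for one object by drawing z ~ N(0, I).

    The shared prior over z is what lets a single conditioning object
    yield many distinct grasp configurations (the one-to-many structure
    the abstract describes).
    """
    return np.stack([
        decode_grasp(obj_features, rng.standard_normal(LATENT_DIM))
        for _ in range(n)
    ])

obj = rng.standard_normal(OBJ_DIM)   # hypothetical object representation
grasps = sample_grasps(obj, n=5)
print(grasps.shape)                  # (5, 9): five candidate grasps
```

In a trained CVAE the decoder weights would be fit jointly with an encoder q(z | object, grasp) by maximizing the evidence lower bound; here the random weights only demonstrate how conditioning and latent sampling combine at generation time.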
dc.language.iso en en_US
dc.rights Attribution 2.5 Canada *
dc.rights.uri *
dc.subject Deep Learning en_US
dc.subject Robotic grasping en_US
dc.subject Multimodal grasping en_US
dc.subject Autoencoders en_US
dc.subject Conditional variational autoencoders en_US
dc.subject Joint embeddings en_US
dc.title Modeling Grasp Motor Imagery en_US
dc.type Thesis en_US
dc.degree.programme Engineering en_US
dc.degree.name Master of Applied Science en_US
dc.degree.department School of Engineering en_US
dc.rights.license All items in the Atrium are protected by copyright with all rights reserved unless otherwise indicated.

Files in this item

Files Size Format Description
veres_matthew_201609_masc.pdf 5.495 MB PDF main article


Except where otherwise noted, this item's license is described as Attribution 2.5 Canada.